Wednesday, December 14, 2011

NFS in Solaris 11


Changes in This Release : Solaris 11

The following enhancements are included in the Oracle Solaris 11 release:
  • The configuration parameters that used to be set by editing /etc/default/autofs and /etc/default/nfs can now be set in the SMF repository.
  • The NFS service provides support for mirror mounts. Mirror mounts enable an NFSv4 client to traverse shared file system mount points in the server namespace. For NFSv4 mounts, the automounter will perform a mount of the server namespace root and rely on mirror mounts to access its file systems. The main advantage that mirror mounts offer over the traditional automounter is that mounting a file system using mirror mounts does not require the overhead associated with administering automount maps. Mirror mounts provide these features:
    • Namespace changes are immediately visible to all clients.
    • New shared file systems are discovered instantly and mounted automatically.
    • File systems unmount automatically after a designated inactivity period.
  • NFS referrals have been added to the NFS service. Referrals are server-based redirections that an NFSv4 client can follow to find a file system. The NFS server supports referrals created by the nfsref(1M) command, and the NFSv4 client will follow them to mount the file system from the actual location. This facility can be used to replace many uses of the automounter, with creation of referrals replacing the editing of automounter maps. NFS referrals provide these features:
    • All of the features of mirror mounts listed above
    • Automounter-like functionality without any dependence on the automounter.
    • No setup required at either the client or server.
  • The ability to mount the per-DNS-domain root of a Federated File System name space has been added. This mount point can be used with NFS referrals to bridge from one file server to another, building an arbitrarily large namespace. 
  • The sharectl utility is included. This utility enables you to configure and manage file sharing protocols, such as NFS. For example, this utility allows you to set client and server operational properties, display property values for a specific protocol, and obtain the status of a protocol (see the example after this list). 
  • The NFS version 4 domain can be defined. 
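
A minimal sketch of the sharectl and nfsref usage mentioned above. server_versmax is a real NFS property, but the value, the share path and the remote server are hypothetical, and the nfsref syntax is my reading of nfsref(1M), not copied from it:

   root@s11# sharectl status nfs
   root@s11# sharectl get nfs
   root@s11# sharectl set -p server_versmax=4 nfs
   root@s11# nfsref add /export/docs server2:/export/docs
   root@s11# nfsref lookup /export/docs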

Significant Changes in Earlier Releases : Solaris 10

The Solaris 10 11/06 release provides support for a file system monitoring tool, fsstat(1M).
Additionally, the admin guide provides a more detailed description of the nfsmapid daemon.
Starting in the Solaris 10 release, NFS version 4 is the default.
Additionally, the NFS service is managed by the Service Management Facility.
  •  Administrative actions on this service, such as enabling, disabling, or restarting, can be performed by using the svcadm command. 
  • The service's status can be queried by using the svcs command.
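
For example, with the stock NFS server service instance (the "server" prompt is just a placeholder host):

   root@server# svcs -l svc:/network/nfs/server:default
   root@server# svcadm enable -r svc:/network/nfs/server
   root@server# svcadm restart svc:/network/nfs/server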

solaris10 zone to solaris10 brand in s11 recovery

there is an interesting thread on zones-discuss describing the situation:

  • start with an s10u8 zone
  • zoneadm attach -u to an s10u9 host
  • v2v to s11 express as a solaris10 branded zone
  • update the host from solaris 11 express to solaris 11
  • run /usr/lib/brand/shared/dsconvert
  • # zoneadm -z sandpit boot (failed)
    zone 'sandpit': WARNING: vnic3:1: no matching subnet found in netmasks(4): 172.25.48.101; using default of 255.255.0.0.
    zone 'sandpit': Error: The installed version of Solaris 10 is not supported.
    zone 'sandpit': SPARC systems require patch 142909-17
    zone 'sandpit': x86/x64 systems require patch 142910-17
    zone 'sandpit': exec /usr/lib/brand/solaris10/s10_boot sandpit /zoneRoot/sandpit failed
    zone 'sandpit': ERROR: unable to unmount /zoneRoot/sandpit/root.
  • to recover:
1. Reboot to the Solaris 11 Express BE

   root@global# beadm activate <s11express-be-name>
   root@global# init 6

2. Partially revert the work done by dsconvert

   In this example, the zone's zonepath is /zones/s10.

   root@global# zfs list -r /zones/s10
   rpool/zones/s10                    3.18G  11.3G    51K  /zones/s10
   rpool/zones/s10/rpool              3.18G  11.3G    31K  /rpool
   rpool/zones/s10/rpool/ROOT         3.18G  11.3G    31K  legacy
   rpool/zones/s10/rpool/ROOT/zbe-0   3.18G  11.3G  3.18G  /
   rpool/zones/s10/rpool/export         62K  11.3G    31K  /export
   rpool/zones/s10/rpool/export/home    31K  11.3G    31K  /export/home

   The goal here is to move rpool/zones/s10/rpool/ROOT up one level.  We
   need to do a bit of a dance to get it there.  Do not reboot or issue
   'zfs mount -a' in the middle of this.  If something goes wrong and a
   reboot happens, it won't be disastrous - you will just need to
   complete the procedure when the next boot stops with
   svc:/system/filesystem/local problems.

   root@global# zfs set mountpoint=legacy rpool/zones/s10/rpool/ROOT/zbe-0
   root@global# zfs set zoned=off rpool/zones/s10/rpool
   root@global# zfs rename rpool/zones/s10/rpool/ROOT/zbe-0 \
      rpool/zones/s10/ROOT
   root@global# zfs set zoned=on rpool/zones/s10/rpool
   root@global# zfs set zoned=on rpool/zones/s10/ROOT

   Now the zone's dataset layout should look like:

   root@global# zfs list -r /zones/s10
   NAME                                USED  AVAIL  REFER  MOUNTPOINT
   rpool/zones/s10                    3.19G  11.3G    51K  /zones/s10
   rpool/zones/s10/ROOT               3.19G  11.3G    31K  legacy
   rpool/zones/s10/ROOT/zbe-0         3.19G  11.3G  3.19G  legacy
   rpool/zones/s10/rpool                93K  11.3G    31K  /rpool
   rpool/zones/s10/rpool/export         62K  11.3G    31K  /export
   rpool/zones/s10/rpool/export/home    31K  11.3G    31K  /export/home

3. Boot the zone and patch

   root@global# zoneadm -z s10 boot
   root@global# zlogin s10
   root@s10# ...  (apply required patches)
  • 119254/119255 rev 75 (SPARC/x86 patch utilities)
  • u9 kernel patch 142909-17/142910-17 (SPARC/x86)
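
   A minimal sketch of applying one of the patches inside the zone; the
   staging directory under /var/tmp is hypothetical, and each required
   patch from the list above would be unpacked and added the same way:

   root@s10# cd /var/tmp
   root@s10# unzip 119254-75.zip
   root@s10# patchadd /var/tmp/119254-75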


4. Shutdown the zone

   root@s10# init 0

5. Revert the dataset layout to the way that dsconvert left it.
   Again, try to avoid reboots during this step.

   root@global# zfs set zoned=off rpool/zones/s10/ROOT
   root@global# zfs set zoned=off rpool/zones/s10/rpool
   root@global# zfs rename rpool/zones/s10/ROOT rpool/zones/s10/rpool/ROOT
   root@global# zfs set zoned=on rpool/zones/s10/rpool
   root@global# zfs inherit zoned rpool/zones/s10/rpool/ROOT

6. Reboot to Solaris 11

   root@global# beadm activate <solaris11-be-name>
   root@global# init 6

At this point, the zone should be bootable on Solaris 11.


Observations
  • since this was a zoneadm attach -u from u8 to u9, only a minimal update was done, so it is not really a u9 zone
  • one should really do zoneadm attach -U from u8 to u9
  • RTFM: to support SVR4 pkg and patching one needs to install 119254-75 (SPARC), 119534-24, and 140914-02, or 119255-75, 119535-24 and 140915-02 (x86/x64), or later versions in solaris 10 before creating the archive

zoneadm attach -U

s10u9 introduces the zoneadm attach -U option in addition to the -u option
s10u8 also gets -U with patch 142909-17/142910-17 (the update 9 kernel patch, SPARC/x86); once that is installed -U is available. If patching, install the latest rev of 119254/119255 (the patch utilities patch) first
man page:

attach [-u | -U] [-b patchid]... [-F] [-n path] [brand-specific options]
The attach subcommand takes a zone that has been detached from one system and attaches the zone onto a new system. Therefore, it is advised (though not required) that the detach subcommand should be run before the “attach” takes place. Once you have the new zone in the configured state, use the attach subcommand to set up the zone root instead of installing the zone as a new zone.
For native zones, zoneadm checks package and patch levels on the machine to which the zone is to be attached. If the packages/patches that the zone depends on from the global zone are different (have different revision numbers) from the dependent packages/patches on the source machine, zoneadm reports these conflicts and does not perform the attach. If the destination system has only newer dependent packages/patches (higher revision numbers) than those on the source system, you can use the -u or -U options. The -u option updates the minimum number of packages within the attached zone to match the higher-revision packages and patches that exist on the new system. The -U option updates all packages in the attached zone that are also installed in the global zone. With -u or -U, as in the default behavior, zoneadm does not perform an attach if outdated packages/patches are found on the target system.
For native zones, one or more -b options can be used to specify a patch ID for a patch installed in the zone. These patches will be backed out before the zone is attached or, if -u was also specified, updated.
The -F option can be used to force the zone into the “installed” state with no validation. This option should be used with care since it can leave the zone in an unsupportable state if it was moved from a source system to a target system that is unable to properly host the zone. The -n option can be used to perform a “dry run” of the attach subcommand. It uses the output of the “detach -n” subcommand as input and is useful to identify any conflicting issues, such as the network device being incompatible, and can also determine whether the host is capable of supporting the zone. The path can be “-”, to read the input from standard input.


It seems that to get from u8 to u9 with all the patches and updates one should use the -U option
one should not use -F at all
In any case, always keep a backup copy of the zone so one can restore it to its original state
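
A minimal sketch of the detach/attach -U flow between two Solaris 10 hosts. The zone name and zonepath are hypothetical, and the zone's storage is assumed to be visible from the target host (shared storage, or copied there beforehand):

   root@source# zoneadm -z myzone detach
   (make /zones/myzone available on the target host)
   root@target# zonecfg -z myzone create -a /zones/myzone
   root@target# zoneadm -z myzone attach -U
   root@target# zoneadm -z myzone boot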



Wednesday, December 7, 2011

what's new in solaris 11

this link lists the what's new
  •  installation
    • Automated Installer 
      • installation framework for automated system provisioning
      • network installation
      • manifest
        • system configuration
        • SW pkg
        • zone
      • bootable image
    • Jumpstart migration utility js2ai
    • interactive Text installations
      • server configuration
      • automatic or manual network configuration
      • no GUI desktop
      • audio or wireless drivers
    • Live Media Installation (x86)
      • automatic network configuration
      • full GUI desktop
      • GParted (GNOME Partition Editor)
    • Distribution Constructor
      • CLI tool for building pre-configured bootable customized s11 installation images
      • use manifest description
        • target disk
        • SW pkg
        • basic system configuration
        • gold image
  • packaging
    • Image Packaging System (IPS)
      • framework for complete SW lifecycle mgmt
        • installation
        • upgrade
        • remove
      • integrated with ZFS
        • safe upgrade with ZFS clone FS
      • network based package repositories
        • with full automatic dependency checking
          • any SW that is required is automatically installed or updated
      • boot to different boot env
      • can lock down individual pkg
      • fast boot feature
        • on by default in x86
        • off by default in SPARC
    • support SVR4 pkg
      • no legacy patching tool
  • System configuration
    • SMF
    • Name service
      • nscfg
      • /etc/nsswitch.conf   svc:/system/name-service/switch
      • /etc/resolv.conf      svc:/network/dns/client
      • /etc/nodename         svc:/system/identity:node
      • /etc/defaultdomain svc:/system/identity:domain
      • /etc/default/init      svc:/system/environment:init
      • /etc/driver/drv/driver.conf
    • sysconfig
      • replace sys-unconfig, sysidtool
      • unconfiguring
      • reconfiguring
    • SMF, FMA
      • SNMP trap
      • SMTP notification
      • ASR
  • v12n
    • zones are easier to create and manage
    • solaris10 zone
      • p2v
      • v2v
      • zonep2vchk
      • NFS server in zone
      • exclusive-IP zones by default
        • anet for exclusive-IP zone
      • administer network flow within NGZ
        • bandwidth
        • priority control based on IP address, subnet, transport protocols and port
        • flowadm
        • flowstat
      • Delegated Administration
        • admin zone based on RBAC
      • zone boot env
        • ZFS boot env: ZBE
        • beadm inside zone
      • improved zones dataset layout
        • NGZ mimic GZ
        • NGZ support different ZFS dataset
      • immutable zones
        • read-only root for zones
        • Mandatory Write Access Control (MWAC)
      • cleanly shutdown zones
        • zoneadm -z <z> shutdown
      • zonestat 
        • observation of system resources
        • memory, CPU, resource control limit
        • exclusive-IP: network device utilization on data-links, vlink and zones
        • libzonestat
          • svc:/system/zones-monitoring.default
      • tecla CLI editing library for zonecfg
        • emacs mode (default)
        • vi mode
        • tecla(5)
  • Security
    • Role Authentication
      • root is a role by default
      • 1st user account is assigned root role
      • user assume root role
        • user  or role passwd
    • Trusted Platform Module (TPM)
      • TPM chip is a HW device on MB
      • protected storage
      • protected capabilities on an inexpensive component with restricted resources
      • s11 provides drivers 
        • TCG 1.2 spec
        • TSS SW to provide cryptographic operations on the secure device, plus admin tools for managing the TPM and a PKCS#11 provider
    • Labeled IPsec
      • Trusted Extensions
    • IPsec supports the AES GMAC Cryptographic Algorithm
      • data integrity of AES Galois/Counter Mode (AES GCM) but without actually encrypting the data
    • Kerberos Dtrace Providers
      • RFC4120
    • Trusted Extensions Enhancements
      • enables per-label and per-user credentials to require a unique passwd for each label
      • tncfg :
        • create, modify and display networking properties
        • label network packets received from remote hosts
      • set security labels on ZFS datasets
    • Support ssh X.509 Certificate Extension
    • Solaris Cryptographic Framework
      • NSA Suite B algorithms
      • T4 supports the AES CFB mode used by tablespace encryption in the Oracle DB Advanced Security option
      • support for Intel Advanced Encryption Standard New Instructions (AES-NI)
      • the Oracle Key Management System can now be used for AES key storage using the new pkcs11_kms plugin
    • In-kernel pfexec, Forced and Basic Privileges
  • Networking
    • re-architected to unify, simplify and enhance observability and interoperability of NICs
      • GLDv3 driver framework
        • VLAN
        • link aggregation
        • MAC layer for Ethernet, Wi-Fi and IB
        • dladm
    • Network v12n and resource mgmt
      • V12N
        • VNIC
        • vswitch
        • VLANs
        • routing
        • firewall
        • tight integration with zone exclusive-ip
      • Resource Mgmt
        • QoS
          • bandwidth limits
          • CPU limit
          • interrupt-driven to polling
    • Manual and Automatic Networking
      • network profile svc:/network/physical:default
        • switch between automatic and manual networking by enabling the Automatic or DefaultFixed profile through netadm and netcfg (see the example after this outline)
      • Live Media install (LiveCD) use Automatic networking, useful for laptop
    • Default Names for Datalinks
      • net0, net1 etc
      • can be reverted
    • Changing MAC Address with dladm
      • persistent across reboots
    • IB Enabled and Optimized
      • improved support for Sockets Direct Protocol (SDP)
        • support RDMA; zero-copy data transfer
        • netstat, truss, pfiles mdb kmdb
        • NGZ (for exclusive-IP and shared-IP)
      • RDSv3 for Oracle RAC
    • Registration of VLANs
      • ability for broadcasting VLAN ID
      • VNIC support
    • Link Layer Discovery Protocol Support (LLDP)
      • one-way link layer protocol that allows an IEEE 802 LAN station to advertise the capabilities and current status of the system 
      • lldpadm: enable/disable LLDP agent on physical datalink
    • New Sockets Architecture
      • no longer use STREAMS
      • significant performance improvements
      • simplified developer interface for new socket types
    • Load Balancing
      • Integrated L3/L4 LB
      • stateless DSR and NAT modes
      • CLI
      • configuration API
    • Link Protection
      • prevent guest VM sending harmful packets to network
      • basic threats: IP, DHCP, MAC, L2 frame spoofing
      • use ipf for inbound filtering and customizable filter rules
    • Bridging and Tunneling
      • Bridging
        • Spanning Tree Protocol (STP, IEEE 802.1D-1998)
        • TRILL protocol
      • Tunneling
        • iptun
        • wireshark
        • snoop
    • IP observability
      • wireshark: packet sniffing tool and snoop
      • dlstat: runtime statistics for data links
    • IP Multipathing(IPMP)
      • re-architecture
      • ipadm
      • Transitive probe: new failure detection mode
        • without additional test IP addresses
        • svccfg -s svc:/network/ipmp setprop config/transitive-probing=true
        • svcadm refresh svc:/network/ipmp:default
      • in.mpathd
        • managed by SMF service svc:/network/ipmp
    • I/O Enhancements to netcat
    • new FTP server
      • proftpd replaces WU-ftpd
    • Dtrace Networking Provider
      • tcp
      • udp
      • ipv4/IPv6
  • Storage
    • ZFS  is root FS
    • easy upgrade with IPS
    • ZFS data Encryption
    • ZFS deduplication (needs RAM, L2ARC with SSD)
    • ZFS Shadow Migration (local or NFS FS)
    • ZFS backup with NDMP with ZFS send/receive
    • Temporary ZFS mountpoint
    • ZFS snapshot Alias with zfs snap (snapshot)
    • Recursive ZFS send (dataset and descendents)
    • ZFS snapshot Diff
    • NFSv4 Client and Server Migration Support
    • SMB for Microsoft interoperability
    • Dtrace Storage Provider
      • SMB
      • iscsi
    • COMSTAR SCSI Target Framework
      • SCSI device type: disk, tape with FC
      • iSCSI Extensions for RDMA (iSER)
      • SCSI RDMA Protocol (SRP) for IB HCA
      • iSCSI
      • Fibre Channel over Ethernet (FCoE)
      • Dtrace Provider:
        • SCSI Target Mode Framework (STMF)
        • SCSI Block Device (SBD)
  • Kernel/Platform Support
    • SPARC T4
      • 2GB page size
      • ISA cryptographic HW optimization
      • CPU and DRAM performance counter support
      • L3 cache support
      • 20%-40% gain for various cipher and hash instructions
      • gain for SSL and direct cryptographic acceleration for DB 11.2.0.2
      • Critical Threads
        • dynamic allocation of HW resources to provide boosts in performance
        • matching a thread's HW requirements with the amount of exclusive access to specific HW resources
    • Single-root I/O v12n (SR-IOV)
      •  extension to PCIe to allow efficient sharing of PCIe devices among VMs both in HW and SW
    • NUMA I/O
      • allows kernel threads, interrupts and memory to be placed on physical resources according to the physical topology of the machine
      • specific high-level affinity requirements of I/O frameworks, actual load, resource control and power mgmt policies
    • Intel Advanced Vector Extensions (AVX)
      • new instructions for vector floating point operations
        • image, video, audio processing, 3D modeling, scientific simulation and financial analytics
      • Sandy Bridge and beyond
    • Dynamic Intimate Shared Memory (DISM) performance Improvements
      • for large-memory systems, 8x Oracle DB startup improvement for ISM and DISM creation, locking and destruction
    • Suspend and resume to RAM
    • Improved HW support
      • FMA
      • generic topology enumeration 
      • generic hotplug framework
      • latest Intel microprocessor
      • Intel's LatencyTOP and Dtrace to measure latency
    • Dtrace cpc Provider
      • cycles executed
      • instructions executed
      • cache misses
      • TLB misses
  • user Environment
    • 850 open source pkg in IPS
      • Java SE 6, 7
      • GCC 4.5.2
      • Python 2.7
      • Perl 5.12
      • Ruby 1.8.7
      • PHP 5.2.17
      • complete web stack
    • Desktop env
      • GNOME 2.30.3
      • Firefox 6
      • Thunderbird 6
    • GNU
      • in /usr/bin
      • in /usr/gnu/bin
    • Default shell:
      • user:  bash
      • system: ksh93
    • Removable Media
      • HAL
      • D-Bus messaging passing system
    • new sound system
    • search for content in MAN pages
      • man -K searchstring
    • Virtual Console Terminals
      • svc:/system/vtdaemon:default
      • svc:/system/console-login:vt*
      • Alt-Ctrl-F#
    • Time Slider Snapshot Mgmt
      • use home
      • Gui
    • Common UNIX Printing System (CUPS) printing
      • lp wraps CUPS functionality
    • libc  Familiarity
      • improve familiarity with linux and BSD
    • paths.h Path Name Definitions
      • /usr/include/paths.h
      • /usr/include/sys/paths.h
    • locale and languages (200+)
    • TrueType Fonts
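
A minimal sketch of the manual-networking items above (netadm/netcfg profiles, default datalink names, ipadm). The interface name net0 and the address are hypothetical; only the command forms come from the features listed above:

   root@s11# netadm list
   root@s11# netadm enable -p ncp DefaultFixed
   root@s11# dladm show-phys
   root@s11# ipadm create-ip net0
   root@s11# ipadm create-addr -T static -a 192.168.1.10/24 net0/v4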

solaris branded zone in solaris 11

Solaris branded zone is the default zone in Solaris 11
  • whole-root type only
  • immutable (read-only zone root) zone with file-mac-profile (mandatory access control); see the zonecfg sketch after this list
    • none: standard read-write
    • strict: read-only FS, no exceptions, only logged remotely
    • fixed-configuration: permits updates to /var except system configuration
    • flexible-configuration: permits changes to
        • /etc
        • root home directory
        • /var
    • zonecfg add dataset: read-only dataset
    • zonecfg add fs, can mount read-only FS
  • IPS packaging
  • install, detach, attach and P2V
  • NGZ root is a ZFS dataset
  • use boot env: beadm
  • All enabled IPS pkg repositories must be accessible while installing a zone
  • zone SW is minimized
  • default exclusive-IP with Automatic NET (anet) VNIC
  • support
    • ZFS encryption
    • Network V12n and QoS
    • SMB and NFS
    • can be NFS server
  • not supported
    • DHCP address assignment in a shared-IP zone
    • ndmpd
    • SMB server
    • SSL proxy server
    • ZFS pool administration through zpool cmd
  • zonestat: reports CPU, memory, resource control and network bandwidth for exclusive-IP zones
  • admin resource
    • user
    • auths
      • solaris.zone.login
      • solaris.zone.manage
      • solaris.zone.clonefrom
  • resources pool association
    • dedicated-cpu
      • ncpus
      • importance
    • capped-cpu
    • capped-memory
      • physical
      • swap
      • locked
  • zone network interface
    • shared-IP
      • shares a network interface with the GZ
      • use ipadm
      • net resource properties
        • address
        • physical
    • exclusive-IP
      • must have dedicated network interface
      • anet resource, a dedicated VNIC is automatically created and assigned to zone
      • can use pre-configured VNIC
      • default
      • support
        • DHCP v4 and v6
        • IP filter
        • IPMP
        • IP routing
        • ipadm for setting  TCP/UDP/SCTP and IP/ARP
        • IPsec and IKE
        • snoop
        • dladm
        • sysconfig
  • hostid
  • disk format: uscsi
  • devices: /dev in zone
  • zone-wide resource
    • zone.cpu-cap
    • zone.cpu-shares
    • zone.max-locked-memory
    • zone.max-lofi
    • zone.max-lwps
    • zone.max-msg-ids
    • zone.max-processes
    • zone.max-sem-ids
    • zone.max-shm-ids
    • zone.max-shm-memory
    • zone.max-swap
  • use attr for comment
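
A minimal zonecfg sketch tying several of the items above together (immutable root via file-mac-profile, capped memory, the default anet VNIC). The zone name, zonepath and memory values are hypothetical:

   root@global# zonecfg -z web1
   zonecfg:web1> create
   zonecfg:web1> set zonepath=/zones/web1
   zonecfg:web1> set file-mac-profile=fixed-configuration
   zonecfg:web1> add capped-memory
   zonecfg:web1:capped-memory> set physical=2g
   zonecfg:web1:capped-memory> set swap=4g
   zonecfg:web1:capped-memory> end
   zonecfg:web1> info anet
   zonecfg:web1> commit
   zonecfg:web1> exit
   root@global# zoneadm -z web1 install
   root@global# zoneadm -z web1 boot

   (zoneadm install needs the enabled IPS repositories to be reachable, as noted above)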

solaris10 branded zone in solaris 11

Due to the change in packaging system (SVR4 to IPS) there is no direct upgrade from S10 to S11; one can use
  • P2V: converting an s10 physical system to a solaris10 branded zone in s11
  • V2V: converting an s10 native full-root zone to a solaris10 branded zone in s11
The following lists some limitations of solaris10 zones in s11, from solaris10(5):
  • s10u9, or earlier with patch 142909-17 (SPARC) or 142910-17 (x86) or later
  • 32bit and 64-bit solaris 10 apps
  • zone must reside on its own zfs dataset
  • delegated ZFS dataset configuration is currently experimental and has not yet been tested
  • para-virtualized xVM domains are experimental and have known problems for 64-bit apps
  • the /dev/sound device cannot be configured
  • the file-mac-profile property used to create read-only zones is not available
  • quota(1M) to retrieve UFS FS info is not available
  • the following ndd parms are not available
    • ip_squeue_fanout
    • ip_soft_rings_cnt
    • ip_ire_pathmtu_interval
    • tcp_mdt_max_pbufs
  • Networking features that are different
    • Mobile IP is not available in s11
    • /dev/net VNICs are not supported by libdlpi in s11 but are supported by libdlpi(3LIB) in s10
    • IPMP outputs are not the same

  • mdb and dtrace are not fully functional when used in the global zone to examine processes executing within a solaris10 zone
  • zonep2vchk (can be copied from s11 to s10) is used to generate the info needed for P2V
  • solaris10 zones do not support statically linked binaries
  • to support SVR4 pkg and patching one needs to install 119254-75 (SPARC), 119534-24, and 140914-02, or 119255-75, 119535-24 and 140915-02 (x86/x64), or later versions in solaris 10 before creating the archive
  • in s11 pkg:/system/zones/brand/solaris10 must be installed
  • zonecfg: use create -t SYSsolaris10 or set brand=solaris10 (see the sketch after this list)
  • can set hostid
  • support sysidcfg
  • support migration between two s11 hosts
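
A minimal sketch of the V2V/P2V flow under the limitations above. The hostnames, zone name, archive path and zonepath are hypothetical; flarcreate and the -u/-a install options are used as described in the solaris10 brand documentation:

   (on the Solaris 10 system, after applying the patches listed above)
   root@s10box# zonep2vchk
   root@s10box# flarcreate -S -n s10box -L cpio /net/archives/s10box.flar

   (on the Solaris 11 host)
   root@global# zonecfg -z s10zone
   zonecfg:s10zone> create -t SYSsolaris10
   zonecfg:s10zone> set zonepath=/zones/s10zone
   zonecfg:s10zone> commit
   zonecfg:s10zone> exit
   root@global# zoneadm -z s10zone install -u -a /net/archives/s10box.flar
   root@global# zoneadm -z s10zone boot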

Tuesday, December 6, 2011

OSC4.0 what's new

the release notes; we just summarize the what's new part here
  • Support Solaris 11 (for s10 use OSC3.3.1)
  • IPS package format
  • Automated Installer Support
  • runs only in a GZ and in a zone cluster. A zone cluster is now configured with solaris brand NGZs with the cluster attribute. The solaris and solaris10 brands of NGZ are supported for configuration with the HA for Oracle Solaris Zones data service.
  • Support for Apache, Apache Tomcat, DHCP, DNS, NFS,  Oracle Database 11.2.0.3 (single instance and Oracle  RAC) and WebLogic Server. 
  • DR (geo-cluster) support for replication solutions such as StorageTek
    Availability Suite 4.0, Oracle Data Guard and a script-based plug-in.
  • New IPS pkg names for SPARC and x86 (see the pkg install sketch after this list)
    • Previous Cluster Package Name        New IPS Package Name
      SUNWscapc                          ha-cluster/data-service/apache
      SUNWscdhc                          ha-cluster/data-service/dhcp
      SUNWscdns                          ha-cluster/data-service/dns
      SUNWsczone                        ha-cluster/data-service/ha-zones
      SUNWscnfs                           ha-cluster/data-service/nfs
      SUNWscor                           ha-cluster/data-service/oracle-database
      SUNWsctomcat                     ha-cluster/data-service/tomcat
      SUNWscwls                         ha-cluster/data-service/weblogic
      SUNWscdsbuilder                ha-cluster/developer/agent-builder
      SUNWscdev                          ha-cluster/developer/api
      SUNWscderby                       ha-cluster/ha-service/derby
      SUNWscgds                           ha-cluster/ha-service/gds
      SUNWscrtlh                           ha-cluster/ha-service/logical-hostname
      SUNWscsmf                           ha-cluster/ha-service/smf-proxy
      SUNWsctelemetry                  ha-cluster/ha-service/telemetry
      SUNWsccacao                       ha-cluster/library/cacao
      SUNWscucm                          ha-cluster/library/ucmm
      SUNWesc, SUNWfsc, SUNWjsc,
      SUNWcsc
                                                        ha-cluster/locale
      SUNWscnmr, SUNWscnmu ha-cluster/release/name
      SUNWscmasar, SUNWscmasazu,
      SUNWscmautil, SUNWscmautilr
                                                       ha-cluster/service/management
      SUNWscmasasen                    ha-cluster/service/management/slm
      SUNWscqsr, SUNWscqsu       ha-cluster/service/quorum-server
      SUNWscqsman                          ha-cluster/service/quorum-server/manual
      SUNWjscqsu, SUNWcscqsu     ha-cluster/service/quorum-server/locale
      SUNWjscqsman                    ha-cluster/service/quorum-server/manual/locale
      SUNWmdmr, SUNWmdmu               ha-cluster/storage/svm-mediator
      SUNWscsckr, SUNWscscku             ha-cluster/system/cfgchk
      SUNWsc, SUNWscu, SUNWscr,
      SUNWsczr, SUNWsczu,
      SUNWsccomu, SUNWsccomzu
                                                          ha-cluster/system/core
      SUNWscmasa, SUNWscmasau   ha-cluster/system/dsconfig-wizard
      SUNWscman                               ha-cluster/system/manual
      SUNWscdsman                        ha-cluster/system/manual/data-services
      SUNWjscman                      ha-cluster/system/manual/locale
      (SPARC only) SUNWscxvm             ha-cluster/data-service/ha-ldom
    • Table 3: New IPS Package Names for Geographic Edition
      Previous Geographic Edition Package Name     New IPS Package Name
      SUNWscgctl, SUNWscgctlr,
      SUNWscghb, SUNWscghbr
                                                                ha-cluster/geo/framework
      SUNWscgrepavs, SUNWscgrepavsu   ha-cluster/geo/replication/availability-suite
      SUNWscgrepodg, SUNWscgrepodgu ha-cluster/geo/replication/data-guard
      SUNWscgrepsbpu                                 ha-cluster/geo/replication/sbp
      SUNWscgman                                        ha-cluster/geo/manual
  • What's Not Included in the Oracle Solaris Cluster 4.0 Software
    The following features were included in the Oracle Solaris Cluster 3.3 release but are not included in the Oracle Solaris Cluster 4.0 release:
    ■ Support for Veritas File System (VxFS) and Veritas Volume Manager (VxVM)
    ■ Support for the VxVM cluster feature for Oracle RAC in addition to VxVM with Oracle Solaris Cluster
    ■ GUI and GUI wizards
    ■ Support for Sun Management Center
    ■ Support for Sun QFS from Oracle
    ■ Support for non-global zones as resource-group node-list targets
    ■ Support for Oracle Solaris IP Security Architecture (IPsec)
    ■ Support for Oracle Solaris Trusted Extensions
    ■ The scsnapshot tool
    ■ The cconsole utility (the Oracle Solaris pconsole utility can be used instead)
    ■ Storage-based replication:
    ■ Support for EMC Symmetrix Remote Data Facility (SRDF)
    ■ Support for Hitachi True Copy and Hitachi Universal Replicator storage-based replication
    ■ Three-data-center (3DC) configuration
    The following HA data services are not initially available with the 4.0 release but might become available at a later time:
    ■ Agfa IMPAX
    ■ ASE
    ■ Informix
    ■ Kerberos
    ■ MySQL
    ■ Oracle Business Intelligence Enterprise Edition
    ■ Oracle eBusiness Suite
    ■ Oracle GlassFish Message Queue
    ■ Oracle iPlanet Web Server
    ■ PeopleSoft Enterprise
    ■ PostgreSQL
    ■ Samba
    ■ SAP
    ■ SAP liveCache
    ■ SAP Web Application Server
    ■ Siebel, SWIFTAlliance Access and Gateway
    ■ Sybase
    ■ TimesTen
    ■ WebSphere Message Broker
    ■ WebSphere Message Queue
    The Grid Engine and Sun Java System Application Server EE (formerly called HADB) data services have been removed from the Oracle Solaris Cluster software.
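
A minimal sketch of installing one of the renamed data services with IPS; it assumes the ha-cluster publisher has already been configured on the cluster node:

   root@node1# pkg publisher
   root@node1# pkg install ha-cluster/data-service/nfs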

oracle DB/RAC 11.2.0.3


Oracle DB/RAC 11.2.0.3
11/10/11: Patch Set 11.2.0.3 for Linux, Solaris, Windows, AIX and HP-UX Itanium is now available on support.oracle.com. Note: it is a full installation (you do not need to download 11.2.0.1 first). See the README for more info (login to My Oracle Support required).

For solaris 11 you need 11.2.0.3

osc4.0 docs and download

Dec 4, 2011: Oracle Solaris Cluster 4.0 is out

download
http://www.oracle.com/technetwork/server-storage/solaris-cluster/downloads/index.html

docs links
Oracle Solaris Cluster

How-To Guides
 How-To Install and Configure a Two-Node Cluster (Oracle Solaris 11)
This article provides a step-by-step process for quickly and easily installing and configuring Oracle Solaris Cluster software for two nodes, including the configuration of a quorum device.
 How-To Create a Failover Zone in a Cluster (Oracle Solaris 11)
This how-to-guide describes how to quickly and easily configure an Oracle Solaris Zone in failover mode using the Oracle Solaris Cluster High Availability (HA) agent for Oracle Solaris Zones, which supports both Oracle Solaris 10 and 11 Zones.

Oracle Solaris Cluster 4 Webcast Dec 06 2011 12 noon EST webcast


Oracle Solaris Cluster 4.0 Launch Webcast  - Join the webcast on Tuesday, 12/6/11 at 9am PT.

Register Today and learn about Oracle Solaris Cluster 4.0, the first release providing high availability (HA) and disaster recovery (DR) capabilities for Oracle Solaris 11, the first cloud OS. Bill Nesheim, VP, Oracle Solaris Platform Engineering, will present how Oracle Solaris Cluster extends Oracle Solaris to provide the HA and DR infrastructure required for deploying mission-critical workloads in private, public and hybrid cloud deployments as well as enterprise data centers.

Register Now!

oracle solaris Summit at LISA 2011 (Dec/06/2011)

Dec 06 2011 the live stream of Solaris Day @LISA2011
http://psav.mediasite.com/mediasite/Viewer/?peid=b31d4d7b75fe4fcc8b3798a42d3d6b711d


Agenda
8:00 a.m. Registration
9:00 a.m. Oracle Solaris 11 Strategy
Markus Flierl, VP Software Development, Oracle
9:30 a.m. Next Generation OS Lifecycle Management with Oracle Solaris 11
Dave Miner, Principal Software Engineer, Oracle
Bart Smaalders, Principal Software Engineer, Oracle
11:00 a.m. Data Management with ZFS
Mark Maybee, Principal Software Engineer, Oracle
12:00 noon Lunch
1:00 p.m. Oracle Solaris Virtualization and Oracle Solaris Networking
Mike Gerdts, Senior Software Engineer, Oracle
Sebastian Roy, Software Engineer, Oracle
2:30 p.m. Security in your Oracle Solaris Cloud Environment
Glen Faden, Sr. Principal Software Engineer, Oracle
3:15 p.m. Break
3:30 p.m. Oracle Solaris – The Best Platform to run your Oracle Applications
David Brean, Principal Software Engineer, Oracle
4:15 p.m. Oracle Solaris Cluster – HA in the Cloud
Gia-Khanh Nguyen, Principal Software Engineer, Oracle
5:00 p.m. Networking Reception sponsored by Oracle Solaris Cluster


oracle DB and RAC and v12N/partition and Solaris 11

this link lists the supported V12N and partitioning tech for Oracle DB and RAC
I just want to highlight the solaris part as of 12/06/2011
  • Solaris 11 will support Oracle 11.2.0.3 and above
  • Solaris zones will work with Oracle Clusterware only with DB and RAC



The certification table columns are: Platform, Virtualization Technology, Operating System, Certified Oracle Single Instance Database Releases, Certified Oracle RAC Database Releases.

Oracle Solaris SPARC - Dynamic Domains - Solaris 10
  • Single instance: 10gR2, 11gR1 (Note), 11gR2
  • RAC: 10gR2, 11gR1, 11gR2
Oracle Solaris SPARC - Dynamic Domains - Solaris 9
  • Single instance: 10gR2, 11gR1
  • RAC: 10gR2, 11gR1
Oracle Solaris SPARC - Oracle VM Server for SPARC - Solaris 10, Solaris 11
  • Single instance: 11gR2 (11.2.0.3 and above)
  • RAC: 11gR2 (11.2.0.3 and above)
Oracle Solaris SPARC - Oracle Solaris Containers - Solaris 10, Solaris 11
  • Single instance: 11gR2 (11.2.0.3 and above)
  • RAC: 11gR2 (11.2.0.3 and above)
Oracle Solaris SPARC - Oracle Solaris 8 Branded Zone / Oracle Solaris 9 Branded Zone - Solaris 10
  • N/A
Oracle Solaris x86-64 - Oracle Solaris Containers - Solaris 10, Solaris 11
  • Single instance: 11gR2 (11.2.0.3 and above)
  • RAC: 11gR2 (11.2.0.3 and above)
Oracle Solaris x86-64 - Oracle VM Server for x86-64 - Solaris 10, Solaris 11
  • Single instance: 11gR2 (11.2.0.3 and above)
  • RAC: 11gR2 (11.2.0.3 and above)

Oracle Solaris Sparc Notes


  • For 11gR1, please apply Oracle patch 8799617 and OS patch 138888-07 on Solaris 10 10/08 (Update 6) or later.
  • Oracle supports the single instance database in "Oracle Solaris 8 Containers" & "Oracle Solaris 9 Containers" (also known as Solaris 8 Branded Zones & Solaris 9 Branded Zones) on a host running Solaris 10. Supported versions are Solaris 8 Containers 1.0.1 and Solaris 9 Containers 1.0.1 running on Solaris 10 Update 10/8 and later. Please check the documentation for the appropriate Oracle Database version to ensure the corresponding Solaris versions are supported. See Documents for further information:
  • Using Oracle RAC on Oracle Solaris Containers within an Oracle VM Server for SPARC (LDoms) is not supported. Oracle Single Instance database on Oracle Solaris Containers within an Oracle VM Server for SPARC (LDoms) is supported
  • 11gR1 (11.1.0.7) Solaris 10 Logical Domains on Sparc 64-bit requires patch 7535429
  • Oracle Solaris Logical Domains are supported with Oracle RAC 10gR2, 11gR1 and 11gR2 with Solaris version 10 Update 8 or later with Oracle Solaris Cluster 3.2 1/09 and later versions of 3.2 and Oracle Solaris Cluster 3.3
  • Oracle Solaris Logical Domains are supported with Oracle RAC 11gR2 with Solaris version 10 Update 6 or later (patches 142900-12, 141870-03) with Oracle Clusterware 11.2
  • Please reference My Oracle Support note 317257.1 for best practices document for deploying Oracle Single Instance database in a Solaris Container.
  • Please reference the RAC/Container Best Practices document for deploying Oracle RAC in Solaris Containers.
  • Oracle Solaris Containers are supported with Oracle RAC 9iR2 (9.2.0.5 and above), 10gR2 and 11gR1 (with Oracle Solaris Cluster on SPARC). Solaris version 10 Update 7 or later (patches 141444-09, 143055-01, 142900-06, 143137-04 "md patch") with Oracle Solaris Cluster 3.3 and 3.2u2 patched to 126106-39 or later.
  • Oracle Solaris Containers are supported with Oracle RAC 11.2.0.2 with patch 12419331 (for Oracle Solaris Cluster on SPARC). Solaris 10 9/10 (Update 9) or later (patch 142909-17) with Oracle Solaris Cluster 3.3.
  • Oracle Solaris Containers are supported with Oracle RAC 10gR2, 11gR1 and 11gR2 with Oracle Clusterware on SPARC. Solaris version 10 Update 8 or later (patches 142900-14, 143055-01) with
    • Oracle Clusterware 10.2 (patch 9352164)
    • Oracle Clusterware 11.1 (patch 9207257, 9352179)
    • Oracle Clusterware 11.2.0.1 (patch 11840629)
    • Oracle Clusterware 11.2.0.2 (patch 12419353)


Oracle Solaris x86-64 Notes

  • Please reference My Oracle Support note 317257.1 for best practices document for deploying Oracle Single Instance database in a Solaris Container.
  • Please reference the RAC/Container Best Practices document for deploying Oracle RAC in Solaris Containers.
  • Oracle Solaris Containers are supported with Oracle RAC 10gR2 and 11gR2 with Oracle Clusterware on Solaris x86-64. Solaris 10 10/09 (Update 8) or later (patches 142901-15, 142934-02) with
    • Oracle Clusterware 10.2 (patch 7172531)
    • Oracle Clusterware 11.2.0.1 (patch 11840629)
    • Oracle Clusterware 11.2.0.2 (patch 12419353)
  • Oracle Solaris Containers are supported with Oracle RAC 10gR2 (patch 9654991) and 11gR2 (11.2.0.2 with patch 12419331) with Oracle Solaris Cluster on Solaris x86-64. Solaris 10 9/10 (Update 9) or later (patch 142910-17) with Oracle Solaris Cluster 3.3.
  • Please check the documentation for the appropriate Oracle Database version to ensure the corresponding Solaris versions are supported. See the following document for further information:

Monday, December 5, 2011

oracle solaris cluster 3.3 pricing

In light of the Dec 4, 2011 OSC 4.0 announcement, I just want to keep a record of the OSC 3.3.1 pricing.
There is only one edition, OSC Enterprise Edition, with per-processor pricing:
  • 2 years:    $1050
  • 3 years:    $1500
  • 4 years:    $1800
  • 5 years:    $2100
  • Perpetual:$3000
1st year support: $660


Thursday, December 1, 2011

Fujitsu/LSI 16 core SPARC64-IXfx

The Register details the new FJ SPARC64-IXfx chip and PrimeHPC FX10
and cpu-world provides some details

SPARC64-IXfx

16 core
  • each core : 
    • 32KB L1 D$, 32KB L1 I$
    • two INT IU
    • two address calculation units
    • four FP units with FMA; fat SIMD spans two FP units (8 flops/core)
    • a Storage Unit SU (Ld/Store)
  • 12MB L2$
  • integrated  memory controller/DDR3
    • 64GB
    • Bandwidth 86GB/sec
  • designed by FJ with LSI
  • Fabbed by TSMC @40nm
  • 21.9mm x22.1mm
  • 110W
  • 1.85GHz x 128 flops/cycle = 236 gflops
  • 4 Tofu Interconnect interface
    • handles collective operations
    • Tofu router: 10 Tofu links
    • 6D mesh/torus
    • PCI_E2 controller
    • 65nm@312.5Mhz
    • 10 bi-directional ports @ 5GB/sec, peak of 100GB/sec switching capacity
PrimeHPC FX10 system
  • 4 SPARC64-IXfx per blade
  • 4 Tofu interconnect chips per blade
  • cooled with water blocks attached to rear door water jackets
  • base PrimeHPC FX10 
    • 64 racks@$650k
    • 6144 processors
    • 385TB RAM
    • 384 I/O nodes
    • 1536 expansion slots
    • 1.5 pflops peak
    • 1.4 MW
  • Fujitsu Exabyte File System/variant of Lustre FS

Wednesday, November 23, 2011

top three china HPC systems in china hpc top100 and top500 2011

the top three sites from china hpc top100 2011 and top500 2011

Tianhe-1A:天河一号
  • top500 (2nd)
    • NUDT YH MPP, 7168 x2 xeon x5670 6C 2.93Ghz, 7168 NVIDIA M2050
    • Total core= 7168x(2x6+14)=186368 (agree with top500 list)
    • Rpeak=7168x(2x6x2.93x4+14x32x1.15)=4701061 (top500=4701000)
  • top100(1st)
    • NUDT YH MPP, 7168 x2 xeon x5670 6C 2.93Ghz, 7168 NVIDIA M2050 and 2048 FT-1000, 8C and 1GHz (it was listed as hex core, a misprint)
    • Total core= 7168x(2x6+14)+ 2048x8=202750 (agrees with top100)
    • Rpeak=4701000, meaning that it does not include FT-1000 in Rpeak and linpack
    • It seems that to highlight China's own FT-1000 chip it needed to be included in top100
    • we all know that it is not easy to run linpack in a mixed-CPU environment, so it makes sense not to include FT-1000 in linpack
Nebulae:曙光星云
  • top500 (4th)
    • Dawning TC6300 Blade, 4640 x2 xeon x5650 6C 2.66Ghz, 4640 NVIDIA C2050
    • Total core= 4640x(2x6+14)=120640 (agree with top500 list)
    • Rpeak=4640x(2x6x2.66x4+14x32x1.15)=2982964 (top500=2984300)
  • top100(4th)
    • Dawning TC6300 Blade, 2560 x2 xeon x5650 6C 2.66Ghz, 2560 NVIDIA C2050, but for the calculation to match the reported total core count the total blade count is only 2016
    • Total core= 2016x(2x6+14)=52416(agree with top100 list)
    • Rpeak=2016x(2x6x2.66x4+14x32x1.15)=1296046 (top100=1296320)
  • so between the top100 and top500 reports the Nebulae system almost doubled in size
    • if one looks at the Top500 site, the Nebulae system did not change between 06/2010 and 11/2011
    • one just wonders why Nebulae needed to report a smaller system for the china hpc top100

SUNWAY Blue light:神威蓝光
  • top500 (14th)
    • SUNWAY bluelight MPP shenwei SW1600 16 core 975Mhz
    • Total core= 8575x(16)=137200 (agree with top500 list)
    • Rpeak=8575x(16x.975x8)=1070160 (agree with top500 list)
  • top100(2nd)
    • SUNWAY bluelight MPP shenwei SW1600 16 core 975Mhz
    • Total core= 8575x(16)=137200 (agree with top100 list)
    • Rpeak=8575x(16x.975x8)=1070160 (agree with top100 list)
  • SW1600 is a 16-core CPU that uses a 5x5 crossbar to connect 4 (4-core) groups and an IO system that supports PCI-E, a GE NIC on chip and a mgmt channel
  • each 4-core group supports a DDR3/1333 memory channel/bank
  • PCI-E supports 8x 5Gbps
  • each system board has two CPUs; it is not known how these two CPUs are connected
  • a 1U system has 4 system boards, I ASSUME that each system board has its own QDR connection
  • SUNWAY uses 324 port and 256 port IB switches
    • the 324 port switch is a 3-layer Clos network of 3x9 of 36-port chips
    • the 256 port switch is a two-layer fat tree, I could not understand the network diagram
  • due to the high density of CPUs in 1U, the system uses water cooling
  • overall a very impressive system with only 9 racks
Maybe by next year all these systems will use China's own chips
  • godson
  • shenwei
To replace these Xeon and NVIDIA chips with GODSON or SHENWEI will cost money, but today China does not have a shortage of MONEY




Thursday, November 17, 2011

china hpc in top500 2011

some observations on the recent top500 2011 list from china

  • 74 entries in top500
  • most are xeon cpu, some amd and shenwei sw1600
  • FT-1000 is listed in china top100 but not in top500
  • there is no GODSON, loongson cpu
IMHO, these lists do not provide the whole picture of the systems
  • no detail list of the cpu spec
  • no detail of the server arch
    • hdd
    • ram
    • pci slots
    • io chip
  • it is about time for top500 to provide more detailed info, or to require the submitting orgs to provide more detailed info






Wednesday, November 9, 2011

Oracle Solaris 11 launch

Today Oracle announced Solaris 11, the 1st Cloud OS
It could/should also take the name Solaris 12c
the press release highlights some key features
the download page for s11 (11/11/2011)
the Oracle Solaris 11 page that includes how-tos and whitepapers
Solaris 11 documents

One can watch the webcast later
There will be a 100-city Solaris 11 tour
These are some interesting points from the launch:
  • There is no change in open source policy for Oracle Solaris
  • S10 will be supported for a long time; T5 and M4, coming out next year, will support s10
  • the S11 zone-as-NFS-server feature will not be backported to s10
  • Solaris is looking at Ksplice technology
  • Oracle Solaris does not have any plan for KVM
  • Ops-center for S11 will come out next year
  • Oracle Solaris Cluster will support s11 (I did not see any docs or announcement)
  • SPARC super cluster will be GA in Dec/2011, SPARC will run S11 and Cell will run OEL, Oracle Solaris Cluster is optional addon
  • Exalogic running Solaris 11 will support zones
  • There will be many white papers/how-tos on S11
  • There is no tool for in-place "defragment" of a zpool in s10; if one send/receives a zpool it will get "defragmented"; S11 handles this better
  • There is no in-place upgrade from s10 to s11, but one can move s10 into an s10 zone in s11: p2v or v2v
some pictures