Thursday, March 29, 2007

Windows as NFS Server

My colleague wanted to install something from a DVD on an old Sun server. Too bad: the server is far too old and only has a CD-ROM drive. His notebook has a DVD drive, but it runs Windows. So, how can Solaris 'mount' that drive?

Not many people know that Microsoft has a piece of software called Windows Services for UNIX (SFU). When you install it, make sure you do a custom installation and select all the modules, including the NFS server module. Once that is done, you need to reboot the machine.

Read How to: Set Up Server for NFS. It works exactly like any other NFS server, except for the command names and syntax.

Windows (x.x.x.x):
   nfsshare -o ro anon=yes cdrom=D:
Solaris:
   mount x.x.x.x:cdrom /mnt
Apart from the NFS server, SFU offers a platform (Korn shell, C shell) for you to compile code with gcc. It also comes with a number of services, such as a telnet daemon. IMHO, all systems engineers should install it. The only drawback is that it does not come with an X11 server; for that, I rely on Cygwin/X.


Setup for Remote KVM

My colleague was asking how I installed Solaris on the Sun Fire X4500 using my notebook's DVD drive. All new Galaxy servers from Sun support remote KVM. All you have to do is set the IP address of the network management port. Connect your serial cable to the server and execute the following:
cd SP/network
set pendingipaddress=x.x.x.x
set pendingipnetmask=y.y.y.y
set pendingipgateway=x.x.x.z
set commitpending=true

Configure your notebook to be on the same subnet. You can connect the Galaxy server to your notebook directly via a crossover cable or a Cat-5e (or Cat-6) cable. Launch your browser and point it to https://x.x.x.x/. Upon successful login, go to the Remote tab. Make sure you have a Java runtime on your notebook, because the console is launched via Java Web Start. Once the connection is established, you can map your mouse, CD-ROM drive, floppy drive, CD-ROM image, and floppy image. It is pretty cool.


Solaris Express (Build 59), iSCSI vs Samba

While I was exploring ZFS, I stumbled upon the fact that OpenSolaris (Solaris Express) supports iSCSI. So I got hold of another old Netra T1 200 for my testing.

I understand that you need to create a volume from the ZFS pool; see the zfs(1M) man page for details.

Here are the commands, with a 100MB volume created:

# cd /zdisk


# for i in z0{1,2,3,4,5,6,7,8,9} z1{0,1,2,3,4,5,6,7,8,9} z20
do
mkfile 100m $i
done


# zpool create zpool \
raidz2 /zdisk/z0{1,2,3,4,5,6} \
raidz2 /zdisk/z0{7,8,9} /zdisk/z1{0,1,2} \
raidz2 /zdisk/z1{3,4,5,6,7,8} \
spare /zdisk/z19 /zdisk/z20


:
: iSCSI setup
:
# zfs create zpool/zfs_iscsi
# zfs create -V 100m zpool/zfs_iscsi/vol100m
# zfs set shareiscsi=on zpool/zfs_iscsi/vol100m


:
: samba setup
:
# zfs create zpool/zfs_samba
# chmod 777 /zpool/zfs_samba/
# cat /etc/sfw/smb.conf
[global]
        netbios name = netra
        server string = Netra T1 200
        security = share
        workgroup = WORKGROUP
        load printers = No
        interfaces = eri0
        bind interfaces only = Yes
        guest account = nobody

[NAS]
        comment = NAS for Windows
        path = /zpool/zfs_samba
        writable = Yes
        printable = No
        browseable = No
        create mode = 0640
        directory mode = 0750
        guest only = Yes
# svcadm enable samba
# iscsitadm list target
Target: zpool/zfs_iscsi/vol100m
    iSCSI Name: iqn.1986-03.com.sun:02:e39429ef-4e77-486d-b48b-8bd8c8f05dfe
    Connections: 0 

In Windows you need to do the following:

  1. Install the iSCSI Initiator.
  2. After installation, you will see "iSCSI Initiator" in the Control Panel.
  3. Launch iSCSI Initiator, add "Target Portals" under the "Discovery" tab.
  4. You should be able to see an entry in the Volume/Mount Point/Device under the "Bound Volumes/Devices" tab.
  5. Go to "My Computer" and right-click to launch "Manage".
  6. In the "Computer Management", you can see your Disk under the "Disk Management".
  7. Go ahead to partition and format it.
  8. You now have a drive with 100MB.

It took 10.6 seconds to copy a 20MB file from my Dell Latitude D510 notebook to the server using Samba. As for iSCSI, it took only 2.6 seconds, 4 times better!!
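The "4 times" figure is simple arithmetic; a quick sanity check in the shell (timings hard-coded from the two runs above, in tenths of a second to keep the arithmetic integer-only):

```shell
#!/bin/sh
# Same 20MB file: 10.6s over Samba vs 2.6s over iSCSI
samba_ds=106
iscsi_ds=26
echo "speedup: `expr $samba_ds / $iscsi_ds`x"
```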


Saturday, March 24, 2007

Web Site Response Time

I was asked to monitor the response time of one of our managed hosting customers' sites. The reason for doing it is to cover somebody's backside in case they are asked "why the site is so slow one ha?" (in Singlish). I can tell you that I hate doing this type of thing, but what can I do....

Anyway, I used one of the Solaris zones and compiled Tcl (with tDOM) and httperf. These are the steps I used:

  1. Download the home page html using Tcl with http package
  2. Parse the html and convert that to a DOM tree with tDOM
  3. Retrieve all the dependencies (image, flash, javascript, css, ...)
  4. Write a temporary file of the HTML + dependencies; this will be used as httperf's session workload input file (see the -wsesslog option)
  5. Execute httperf with 2 concurrent connections (that's normally configured in web browsers)
  6. Append the data in RRD update format
  7. The RRD update data is then cut and pasted into my auto graph generator, S.T.A.R.

Here is the output graph

The home page consists of 17 thumbnail photos (in JPEG format) and other stuff. However, these JPEG files are 225x141 and they are all forced to display at 125x71. If we converted these thumbnails to the correct size (125x71), we could save 364,107 bytes. According to my calculation, the response time of the home page would be reduced by a maximum of 0.93 seconds on a 1.5Mbps internet connection.
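The 0.93-second figure can be ballparked from the byte savings alone; a sketch of the arithmetic, with my own assumption that the savings are spread across the 2 concurrent connections used in the httperf test:

```shell
#!/bin/sh
# 364,107 bytes saved on a 1.5Mbps link, 2 concurrent connections
bytes=364107
bits=`expr $bytes \* 8`      # 2912856 bits
ms=`expr $bits / 1500`       # 1.5Mbps = 1500 bits per millisecond
echo "`expr $ms / 2` ms saved over 2 connections"
```

This lands at roughly 970 ms of serial transfer time split over two connections, in the same ballpark as the quoted 0.93 seconds.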

The above was put forward to the developer, but I was told the 225x141 photos were being used in another location. Instead of creating another set of thumbnails for the home page, the thumbnails were forced to display smaller (125x71). Of course it works, at the cost of wasting a lot of bandwidth and making the home page load more slowly.

The developer may not know that this can easily be done in Solaris. FYI, ImageMagick is now a built-in utility in Solaris; it is located under the /usr/sfw directory and the package name is SUNWimagick. A script like this can create another set of thumbnails at 125x71:

PATH=/usr/sfw/bin:$PATH; export PATH
LD_LIBRARY_PATH=/usr/sfw/lib:$LD_LIBRARY_PATH; export LD_LIBRARY_PATH
for i in *.jpg
do
  new="`basename $i .jpg`_s.jpg"
  convert $i -resize 125x71 $new
done
It took 5 seconds on my X4500 to create another set of thumbnails for the home page. With a one-time effort of 5 seconds of CPU time, you are likely to reduce the home page response time by almost 1 second. The choice is yours. From the above graph, you can see the response time dropped by almost a second after 22 March. FYI, the above was proposed to the developer on 15 March.


Friday, March 23, 2007

ZFS on 48 Disks without X4500

For the past few months, I have had the opportunity to work with a number of Sun Fire X4500s (a.k.a. Thumper) running the latest Solaris 10 11/06, with raidz2 and spares implemented in ZFS. After the implementation for the customer, I did not have the opportunity to 'play' with it again. Even when I do have the opportunity to work on it, it would be unwise to try out all the cool Solaris 10 stuff on a customer's production server.

So, how do you simulate an environment with 48 disks using an old Sun Netra T1 105? My T1 configuration is:

  • Memory: 256 MB
  • Disk: 2x 18GB (all partitions are mirrored)
  • CPU: 1x 440MHz UltraSPARC-IIi
  • Patch: Recommended and Security Patches, Mar 12 2007, especially these patches:
    • 124204 - zfs memory leak for large file
    • 120068 - vulnerability in telnetd

Make 48 disks (files) with mkfile(1M):

# mkdir /zdisk

# cd /zdisk

# for i in c{0,1,2,3,4,5}t{0,1,2,3,4,5,6,7}d0
do
  mkfile 100m $i
done

# ls /zdisk
c0t0d0  c0t5d0  c1t2d0  c1t7d0  c2t4d0  c3t1d0  c3t6d0  c4t3d0  c5t0d0  c5t5d0
c0t1d0  c0t6d0  c1t3d0  c2t0d0  c2t5d0  c3t2d0  c3t7d0  c4t4d0  c5t1d0  c5t6d0
c0t2d0  c0t7d0  c1t4d0  c2t1d0  c2t6d0  c3t3d0  c4t0d0  c4t5d0  c5t2d0  c5t7d0
c0t3d0  c1t0d0  c1t5d0  c2t2d0  c2t7d0  c3t4d0  c4t1d0  c4t6d0  c5t3d0
c0t4d0  c1t1d0  c1t6d0  c2t3d0  c3t0d0  c3t5d0  c4t2d0  c4t7d0  c5t4d0

Create a RAIDZ2 (double parity) pool with 7 sets of 6-disk RAIDZ2 groups. You can see that every RAIDZ2 group cuts across all the controllers, thanks to the Joyeur blog.

# zpool create zpool \
raidz2 /zdisk/c{0,1,2,3,4,5}t0d0 \
raidz2 /zdisk/c{0,1,2,3,4,5}t1d0 \
raidz2 /zdisk/c{0,1,2,3,4,5}t2d0 \
raidz2 /zdisk/c{0,1,2,3,4,5}t3d0 \
raidz2 /zdisk/c{0,1,2,3,4,5}t4d0 \
raidz2 /zdisk/c{0,1,2,3,4,5}t5d0 \
raidz2 /zdisk/c{0,1,2,3,4,5}t6d0 \
spare  /zdisk/c{0,1,2,3,4,5}t7d0
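A quick count confirms the geometry above accounts for all 48 "disks":

```shell
#!/bin/sh
# 7 raidz2 groups x 6 disks + 6 hot spares = 48 backing files.
# The 42 non-spare disks x 100MB is also roughly the 3.91G raw size
# that zpool list reports below, minus labels and overhead.
echo `expr 7 \* 6 + 6`
```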

# zpool list
NAME                    SIZE    USED   AVAIL    CAP  HEALTH     ALTROOT
zpool                  3.91G    288K   3.91G     0%  ONLINE     -

# zpool status
  pool: zpool
 state: ONLINE
 scrub: none requested
config:

        NAME               STATE     READ WRITE CKSUM
        zpool              ONLINE       0     0     0
          raidz2           ONLINE       0     0     0
            /zdisk/c0t0d0  ONLINE       0     0     0
            /zdisk/c1t0d0  ONLINE       0     0     0
            /zdisk/c2t0d0  ONLINE       0     0     0
            /zdisk/c3t0d0  ONLINE       0     0     0
            /zdisk/c4t0d0  ONLINE       0     0     0
            /zdisk/c5t0d0  ONLINE       0     0     0
          raidz2           ONLINE       0     0     0
            /zdisk/c0t1d0  ONLINE       0     0     0
            /zdisk/c1t1d0  ONLINE       0     0     0
            /zdisk/c2t1d0  ONLINE       0     0     0
            /zdisk/c3t1d0  ONLINE       0     0     0
            /zdisk/c4t1d0  ONLINE       0     0     0
            /zdisk/c5t1d0  ONLINE       0     0     0
          raidz2           ONLINE       0     0     0
            /zdisk/c0t2d0  ONLINE       0     0     0
            /zdisk/c1t2d0  ONLINE       0     0     0
            /zdisk/c2t2d0  ONLINE       0     0     0
            /zdisk/c3t2d0  ONLINE       0     0     0
            /zdisk/c4t2d0  ONLINE       0     0     0
            /zdisk/c5t2d0  ONLINE       0     0     0
          raidz2           ONLINE       0     0     0
            /zdisk/c0t3d0  ONLINE       0     0     0
            /zdisk/c1t3d0  ONLINE       0     0     0
            /zdisk/c2t3d0  ONLINE       0     0     0
            /zdisk/c3t3d0  ONLINE       0     0     0
            /zdisk/c4t3d0  ONLINE       0     0     0
            /zdisk/c5t3d0  ONLINE       0     0     0
          raidz2           ONLINE       0     0     0
            /zdisk/c0t4d0  ONLINE       0     0     0
            /zdisk/c1t4d0  ONLINE       0     0     0
            /zdisk/c2t4d0  ONLINE       0     0     0
            /zdisk/c3t4d0  ONLINE       0     0     0
            /zdisk/c4t4d0  ONLINE       0     0     0
            /zdisk/c5t4d0  ONLINE       0     0     0
          raidz2           ONLINE       0     0     0
            /zdisk/c0t5d0  ONLINE       0     0     0
            /zdisk/c1t5d0  ONLINE       0     0     0
            /zdisk/c2t5d0  ONLINE       0     0     0
            /zdisk/c3t5d0  ONLINE       0     0     0
            /zdisk/c4t5d0  ONLINE       0     0     0
            /zdisk/c5t5d0  ONLINE       0     0     0
          raidz2           ONLINE       0     0     0
            /zdisk/c0t6d0  ONLINE       0     0     0
            /zdisk/c1t6d0  ONLINE       0     0     0
            /zdisk/c2t6d0  ONLINE       0     0     0
            /zdisk/c3t6d0  ONLINE       0     0     0
            /zdisk/c4t6d0  ONLINE       0     0     0
            /zdisk/c5t6d0  ONLINE       0     0     0
        spares
          /zdisk/c0t7d0    AVAIL
          /zdisk/c1t7d0    AVAIL
          /zdisk/c2t7d0    AVAIL
          /zdisk/c3t7d0    AVAIL
          /zdisk/c4t7d0    AVAIL
          /zdisk/c5t7d0    AVAIL

errors: No known data errors

Let's go for a test drive with ZFS. First, I will create a ZFS file system (zfs1) without compression (the default) and simulate a corrupted disk. We then 'scrub' the pool and 'replace' the corrupted disk with a new one. You can see that the MD5 hash of the file created before the corruption stays the same throughout the whole process (before corruption, after corruption, and after replacing the faulty disk).

# zfs create zpool/zfs1

# dd if=/dev/urandom of=/zpool/zfs1/somefile.bin bs=1024 count=1000
1000+0 records in
1000+0 records out

# digest -a md5 /zpool/zfs1/somefile.bin
c61163bc590222cfbc0576b933b9ba53

# dd if=/dev/zero of=/zdisk/c5t6d0 bs=1024 count=10
10+0 records in
10+0 records out

# zpool scrub zpool

# zpool status
  pool: zpool
 state: DEGRADED
status: One or more devices could not be used because the label is missing or
        invalid.  Sufficient replicas exist for the pool to continue
        functioning in a degraded state.
action: Replace the device using 'zpool replace'.
   see: http://www.sun.com/msg/ZFS-8000-4J
 scrub: resilver stopped with 0 errors on Fri Mar 23 08:54:20 2007
config:

        NAME                 STATE     READ WRITE CKSUM
        zpool                DEGRADED     0     0     0
          raidz2             ONLINE       0     0     0
            /zdisk/c0t0d0    ONLINE       0     0     0
            /zdisk/c1t0d0    ONLINE       0     0     0
            /zdisk/c2t0d0    ONLINE       0     0     0
            /zdisk/c3t0d0    ONLINE       0     0     0
            /zdisk/c4t0d0    ONLINE       0     0     0
            /zdisk/c5t0d0    ONLINE       0     0     0
          raidz2             ONLINE       0     0     0
            /zdisk/c0t1d0    ONLINE       0     0     0
            /zdisk/c1t1d0    ONLINE       0     0     0
            /zdisk/c2t1d0    ONLINE       0     0     0
            /zdisk/c3t1d0    ONLINE       0     0     0
            /zdisk/c4t1d0    ONLINE       0     0     0
            /zdisk/c5t1d0    ONLINE       0     0     0
          raidz2             ONLINE       0     0     0
            /zdisk/c0t2d0    ONLINE       0     0     0
            /zdisk/c1t2d0    ONLINE       0     0     0
            /zdisk/c2t2d0    ONLINE       0     0     0
            /zdisk/c3t2d0    ONLINE       0     0     0
            /zdisk/c4t2d0    ONLINE       0     0     0
            /zdisk/c5t2d0    ONLINE       0     0     0
          raidz2             ONLINE       0     0     0
            /zdisk/c0t3d0    ONLINE       0     0     0
            /zdisk/c1t3d0    ONLINE       0     0     0
            /zdisk/c2t3d0    ONLINE       0     0     0
            /zdisk/c3t3d0    ONLINE       0     0     0
            /zdisk/c4t3d0    ONLINE       0     0     0
            /zdisk/c5t3d0    ONLINE       0     0     0
          raidz2             ONLINE       0     0     0
            /zdisk/c0t4d0    ONLINE       0     0     0
            /zdisk/c1t4d0    ONLINE       0     0     0
            /zdisk/c2t4d0    ONLINE       0     0     0
            /zdisk/c3t4d0    ONLINE       0     0     0
            /zdisk/c4t4d0    ONLINE       0     0     0
            /zdisk/c5t4d0    ONLINE       0     0     0
          raidz2             ONLINE       0     0     0
            /zdisk/c0t5d0    ONLINE       0     0     0
            /zdisk/c1t5d0    ONLINE       0     0     0
            /zdisk/c2t5d0    ONLINE       0     0     0
            /zdisk/c3t5d0    ONLINE       0     0     0
            /zdisk/c4t5d0    ONLINE       0     0     0
            /zdisk/c5t5d0    ONLINE       0     0     0
          raidz2             DEGRADED     0     0     0
            /zdisk/c0t6d0    ONLINE       0     0     0
            /zdisk/c1t6d0    ONLINE       0     0     0
            /zdisk/c2t6d0    ONLINE       0     0     0
            /zdisk/c3t6d0    ONLINE       0     0     0
            /zdisk/c4t6d0    ONLINE       0     0     0
            spare            DEGRADED     0     0     0
              /zdisk/c5t6d0  UNAVAIL      0     0     0  corrupted data
              /zdisk/c0t7d0  ONLINE       0     0     0
        spares
          /zdisk/c0t7d0      INUSE     currently in use
          /zdisk/c1t7d0      AVAIL
          /zdisk/c2t7d0      AVAIL
          /zdisk/c3t7d0      AVAIL
          /zdisk/c4t7d0      AVAIL
          /zdisk/c5t7d0      AVAIL

errors: No known data errors

# digest -a md5 /zpool/zfs1/somefile.bin
c61163bc590222cfbc0576b933b9ba53

# mkfile 100m /zdisk/newdisk

# zpool replace zpool /zdisk/c5t6d0  /zdisk/newdisk

# zpool status
  pool: zpool
 state: DEGRADED
 scrub: resilver completed with 0 errors on Fri Mar 23 08:57:26 2007
config:

        NAME                    STATE     READ WRITE CKSUM
        zpool                   DEGRADED     0     0     0
          raidz2                ONLINE       0     0     0
            /zdisk/c0t0d0       ONLINE       0     0     0
            /zdisk/c1t0d0       ONLINE       0     0     0
            /zdisk/c2t0d0       ONLINE       0     0     0
            /zdisk/c3t0d0       ONLINE       0     0     0
            /zdisk/c4t0d0       ONLINE       0     0     0
            /zdisk/c5t0d0       ONLINE       0     0     0
          raidz2                ONLINE       0     0     0
            /zdisk/c0t1d0       ONLINE       0     0     0
            /zdisk/c1t1d0       ONLINE       0     0     0
            /zdisk/c2t1d0       ONLINE       0     0     0
            /zdisk/c3t1d0       ONLINE       0     0     0
            /zdisk/c4t1d0       ONLINE       0     0     0
            /zdisk/c5t1d0       ONLINE       0     0     0
          raidz2                ONLINE       0     0     0
            /zdisk/c0t2d0       ONLINE       0     0     0
            /zdisk/c1t2d0       ONLINE       0     0     0
            /zdisk/c2t2d0       ONLINE       0     0     0
            /zdisk/c3t2d0       ONLINE       0     0     0
            /zdisk/c4t2d0       ONLINE       0     0     0
            /zdisk/c5t2d0       ONLINE       0     0     0
          raidz2                ONLINE       0     0     0
            /zdisk/c0t3d0       ONLINE       0     0     0
            /zdisk/c1t3d0       ONLINE       0     0     0
            /zdisk/c2t3d0       ONLINE       0     0     0
            /zdisk/c3t3d0       ONLINE       0     0     0
            /zdisk/c4t3d0       ONLINE       0     0     0
            /zdisk/c5t3d0       ONLINE       0     0     0
          raidz2                ONLINE       0     0     0
            /zdisk/c0t4d0       ONLINE       0     0     0
            /zdisk/c1t4d0       ONLINE       0     0     0
            /zdisk/c2t4d0       ONLINE       0     0     0
            /zdisk/c3t4d0       ONLINE       0     0     0
            /zdisk/c4t4d0       ONLINE       0     0     0
            /zdisk/c5t4d0       ONLINE       0     0     0
          raidz2                ONLINE       0     0     0
            /zdisk/c0t5d0       ONLINE       0     0     0
            /zdisk/c1t5d0       ONLINE       0     0     0
            /zdisk/c2t5d0       ONLINE       0     0     0
            /zdisk/c3t5d0       ONLINE       0     0     0
            /zdisk/c4t5d0       ONLINE       0     0     0
            /zdisk/c5t5d0       ONLINE       0     0     0
          raidz2                DEGRADED     0     0     0
            /zdisk/c0t6d0       ONLINE       0     0     0
            /zdisk/c1t6d0       ONLINE       0     0     0
            /zdisk/c2t6d0       ONLINE       0     0     0
            /zdisk/c3t6d0       ONLINE       0     0     0
            /zdisk/c4t6d0       ONLINE       0     0     0
            spare               DEGRADED     0     0     0
              replacing         DEGRADED     0     0     0
                /zdisk/c5t6d0   UNAVAIL      0     0     0  corrupted data
                /zdisk/newdisk  ONLINE       0     0     0
              /zdisk/c0t7d0     ONLINE       0     0     0
        spares
          /zdisk/c0t7d0         INUSE     currently in use
          /zdisk/c1t7d0         AVAIL
          /zdisk/c2t7d0         AVAIL
          /zdisk/c3t7d0         AVAIL
          /zdisk/c4t7d0         AVAIL
          /zdisk/c5t7d0         AVAIL

errors: No known data errors

# zpool status
  pool: zpool
 state: ONLINE
 scrub: resilver completed with 0 errors on Fri Mar 23 08:57:26 2007
config:

        NAME                STATE     READ WRITE CKSUM
        zpool               ONLINE       0     0     0
          raidz2            ONLINE       0     0     0
            /zdisk/c0t0d0   ONLINE       0     0     0
            /zdisk/c1t0d0   ONLINE       0     0     0
            /zdisk/c2t0d0   ONLINE       0     0     0
            /zdisk/c3t0d0   ONLINE       0     0     0
            /zdisk/c4t0d0   ONLINE       0     0     0
            /zdisk/c5t0d0   ONLINE       0     0     0
          raidz2            ONLINE       0     0     0
            /zdisk/c0t1d0   ONLINE       0     0     0
            /zdisk/c1t1d0   ONLINE       0     0     0
            /zdisk/c2t1d0   ONLINE       0     0     0
            /zdisk/c3t1d0   ONLINE       0     0     0
            /zdisk/c4t1d0   ONLINE       0     0     0
            /zdisk/c5t1d0   ONLINE       0     0     0
          raidz2            ONLINE       0     0     0
            /zdisk/c0t2d0   ONLINE       0     0     0
            /zdisk/c1t2d0   ONLINE       0     0     0
            /zdisk/c2t2d0   ONLINE       0     0     0
            /zdisk/c3t2d0   ONLINE       0     0     0
            /zdisk/c4t2d0   ONLINE       0     0     0
            /zdisk/c5t2d0   ONLINE       0     0     0
          raidz2            ONLINE       0     0     0
            /zdisk/c0t3d0   ONLINE       0     0     0
            /zdisk/c1t3d0   ONLINE       0     0     0
            /zdisk/c2t3d0   ONLINE       0     0     0
            /zdisk/c3t3d0   ONLINE       0     0     0
            /zdisk/c4t3d0   ONLINE       0     0     0
            /zdisk/c5t3d0   ONLINE       0     0     0
          raidz2            ONLINE       0     0     0
            /zdisk/c0t4d0   ONLINE       0     0     0
            /zdisk/c1t4d0   ONLINE       0     0     0
            /zdisk/c2t4d0   ONLINE       0     0     0
            /zdisk/c3t4d0   ONLINE       0     0     0
            /zdisk/c4t4d0   ONLINE       0     0     0
            /zdisk/c5t4d0   ONLINE       0     0     0
          raidz2            ONLINE       0     0     0
            /zdisk/c0t5d0   ONLINE       0     0     0
            /zdisk/c1t5d0   ONLINE       0     0     0
            /zdisk/c2t5d0   ONLINE       0     0     0
            /zdisk/c3t5d0   ONLINE       0     0     0
            /zdisk/c4t5d0   ONLINE       0     0     0
            /zdisk/c5t5d0   ONLINE       0     0     0
          raidz2            ONLINE       0     0     0
            /zdisk/c0t6d0   ONLINE       0     0     0
            /zdisk/c1t6d0   ONLINE       0     0     0
            /zdisk/c2t6d0   ONLINE       0     0     0
            /zdisk/c3t6d0   ONLINE       0     0     0
            /zdisk/c4t6d0   ONLINE       0     0     0
            /zdisk/newdisk  ONLINE       0     0     0
        spares
          /zdisk/c0t7d0     AVAIL
          /zdisk/c1t7d0     AVAIL
          /zdisk/c2t7d0     AVAIL
          /zdisk/c3t7d0     AVAIL
          /zdisk/c4t7d0     AVAIL
          /zdisk/c5t7d0     AVAIL

errors: No known data errors

# digest -a md5 /zpool/zfs1/somefile.bin
c61163bc590222cfbc0576b933b9ba53

Now I am going to create another ZFS file system, this time with compression on. You can see the time taken to create a big file (100MB) in the compressed file system is 52.112 seconds vs 49.419 seconds without compression. Also, the MD5 hashes of the same file under the 2 file systems are the same.

# time dd if=/dev/urandom of=/zpool/zfs1/bifile.bin bs=1024 count=100000
100000+0 records in
100000+0 records out

real    0m49.419s
user    0m0.701s
sys     0m41.169s

# zfs get compression zpool/zfs1
NAME             PROPERTY       VALUE                      SOURCE
zpool/zfs1       compression    off                        local

# zfs create zpool/zfs2

# zfs set compression=on zpool/zfs2

# time dd if=/dev/urandom of=/zpool/zfs2/bifile.bin bs=1024 count=100000
100000+0 records in
100000+0 records out

real    0m52.112s
user    0m0.697s
sys     0m40.897s

# cp /zpool/zfs1/bifile.bin /zpool/zfs2/bifile.bin-copy-zfs1

# digest -a md5 /zpool/zfs1/bifile.bin
b15a3f71dd6ffb937c9cbf508cb442ff

# digest -a md5 /zpool/zfs2/bifile.bin-copy-zfs1
b15a3f71dd6ffb937c9cbf508cb442ff
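One caveat about this comparison (my observation, not part of the original test): /dev/urandom output is essentially incompressible, so the compressed file system does extra work for no space savings. gzip makes a reasonable stand-in to illustrate the difference:

```shell
#!/bin/sh
# Compress 100KB of zeros vs 100KB of random data: zeros shrink to
# almost nothing, random data does not shrink at all.
dd if=/dev/zero    bs=1024 count=100 2>/dev/null | gzip -c | wc -c
dd if=/dev/urandom bs=1024 count=100 2>/dev/null | gzip -c | wc -c
```

A file of zeros would have shown compression in a much better light.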

Solaris 10 rocks, ZFS on Solaris 10 rocks++.

PS. I also explored IP Filter on the T1 so that I can implement a host-based firewall for my customer. The article Using Solaris IP Filters is a very good starting point. It is pretty easy to implement, and I tried it out with Samba.


Thursday, March 08, 2007

One-Liner

My colleague was asking me whether it is possible to run "tail -2000 bigfile.txt > bigfile.txt" without ending up with an empty file. As you may already know, if you literally run the above command, you will get an empty file. So the question is: is it possible to do this as a one-liner? Also, is it possible to do it safely without using an intermediate temporary file?

IMO, you may be able to copy some bytes over and overwrite the original file instead of getting an empty file; it depends on how many intermediate bytes the stream can hold in the pipe buffer. This example uses a subshell to prove the point, but the final line count will definitely be different from what you expect: (cat bigfile.txt;sleep 1) | dd of=bigfile.txt. You cannot achieve this with file redirection, because the shell opens (and truncates) the output file before the tail command even executes.
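The truncation behaviour is easy to demonstrate with a throw-away file (the /tmp path is my own):

```shell
#!/bin/sh
# The shell opens and truncates the redirection target *before* running
# tail, so tail reads an already-empty file and writes nothing back.
f=/tmp/demo.$$
printf 'one\ntwo\nthree\n' > $f
tail -2 $f > $f            # same-file redirection
wc -c < $f                 # 0 bytes: f was truncated before tail ran
rm -f $f
```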

If we really want a failsafe one-liner, I will have to use a temp file:

n=2000;f=bigfile.txt;t=.$f.$$;tail -$n $f > $t && mv -f $t $f

If you need to do this very often, it is wise to put in the extra effort to crank out a shell script or a function. Below is a shell function that you can run like this: keep 2000 bigfile.txt. As you can see, I validate all the arguments to stop the shell function from throwing exceptions. Also, if the line count of the file is less than the number of lines to be kept, I simply do not proceed. I used a lot of short-circuit syntax to avoid if-then statements.

keep()
{
 [ $# -eq 2 -a $1 -eq $1 -a -r $2 ] >/dev/null 2>&1 && \
 ( n=$1; f=$2; t=.$f.$$; [ $n -le `wc -l $f | awk '{print $1}'` ] && tail -$n $f > $t && mv -f $t $f )
}
Just run "keep 2000 bigfile.txt"

Happy one-liner!


Wednesday, March 07, 2007

Solaris + Tcl + Apache + RRDtool = ??

It is S.T.A.R and it shines!

My colleague wants to visualise the trend of some sampling data. In the back of my mind, a voice kept telling me to crank out a demo to prove what all these wonderful open-source tools can do. So, I came up with a web form that takes in raw sampling data and plots it dynamically. Here is the stuff I used for this little toy:

  • Solaris 10, with everything run from a sparse root zone
  • Tcl (Tool Command Language), my favourite scripting language. My friends used to call it "Talk Cock Language".
  • Apache 1.3.x (comes with Solaris 10)
  • RRDtool, a very powerful data logging and graphing toolkit. Bindings for Perl, Python, Ruby and Tcl are included in the source tarball.

Enclosed are some screen dumps:

The Tcl CGI script:

package require cgi
package require Rrd

cgi_input
set ds     [cgi_import ds]
set step   [cgi_import step]
set type   [cgi_import type]
set title  [cgi_import title]
set ylabel [cgi_import ylabel]
set data   [string trim [cgi_import data]]

# find out the start time (t1) and end time (t2)
set t1 [lindex [split [lindex $data 0] :] 0]
set t2 [lindex [split [lindex $data end] :] 0]
set ndata [llength [split $data "\n"]]

set start [expr $t1 - 10]
set end [expr $t2 + 10]
set rrdfile "/tmp/[file tail $argv0]-[pid].rrd"
set heartbeat [expr $step * 2]

set DS {}
foreach i [split $ds { ,}] {
    set var [string trim $i]
    append DS "DS:$var:$type:$heartbeat:U:U "
}
eval Rrd::create $rrdfile --step $step --start $start \
    $DS RRA:AVERAGE:0.5:1:$ndata
foreach d [split $data "\n"] {
    Rrd::update $rrdfile $d
}

set DEFS {}
foreach i [split $ds { ,}] {
    set var [string trim $i]
    append DEFS "DEF:$i=$rrdfile:$i:AVERAGE "
}

set LINES {}
set color {
    ff0000 00ff00 0000ff ffff00 ff00ff 00ffff
    880000 008800 000088 888800 880088 008888
}
set ncolor [llength $color]
set cnt 0
foreach i [split $ds { ,}] {
    set c [lindex $color [expr $cnt % $ncolor]]
    set var [string trim $i]
    append LINES "LINE:$i#$c:$i "
    incr cnt
}

puts "Content-type: image/png\n"
eval Rrd::graph - --imgformat PNG --start $t1 --end $t2 \
    --title \"$title\" --vertical-label \"$ylabel\" \
    $DEFS $LINES
file delete -force $rrdfile


Monday, March 05, 2007

Which process listens to this port (in Solaris)

If you have 'lsof' installed, it is very easy to find out which process is listening on a certain port. In Solaris, however, it may not be so obvious. This is the question my friend asked me, and I wrote a short script (listen.sh) for him. Basically it runs 'pfiles' over all the processes listed in /proc (except process 0, which is sched).

#! /bin/sh

if [ $# -ne 1 ]; then
  echo "Usage: $0 <port>"
  exit 1
fi
listen=$1


PATH=/usr/bin:/bin
export PATH


#
# skip process 0
#
cd /proc
for i in [1-9][0-9]*
do
  pfiles $i | nawk -v listen=$listen '
    BEGIN {
      found=0
    }
    NR==1 {
      process=$0
    }
    /sockname/ && $NF == listen {
      getline
      if ( ! /peername/ ) {
        found=1
        exit
      }
    }
    END {
      if ( found == 1 ) {
        printf("%s\n",process)
      }
    }'
done

# ./listen.sh 22
29626: /usr/lib/ssh/sshd
#
