Thursday, December 27, 2012

Prepending line numbers to standard output using nl

nl copies files to standard output, with line numbers added. It is very flexible: it can number only non-blank lines and justify the numbers left or right.

Usage 1: To number all lines
$ nl -ba /etc/hosts.allow 

     1  #
     2  # hosts.allow   This file describes the names of the hosts which are
     3  #               allowed to use the local INET services, as decided
     4  #               by the '/usr/sbin/tcpd' server.
     5  #
     6
where -ba => prepend numbers to all lines

Usage 2: To number only non-blank lines
$ nl -bt /etc/hosts.allow
     
     1  #
     2  # hosts.allow   This file describes the names of the hosts which are
     3  #               allowed to use the local INET services, as decided
     4  #               by the '/usr/sbin/tcpd' server.
     5  #
where -bt => prepend numbers to non-blank lines

Usage 3: Format the numbering to Left Justify
$ nl -bt -nln /etc/hosts.allow

1       #
2       # hosts.allow   This file describes the names of the hosts which are
3       #               allowed to use the local INET services, as decided
4       #               by the '/usr/sbin/tcpd' server.
5       #
where -bt => prepend numbers to non-blank lines
-nln => left-justify the numbers

Usage 4: Format the numbering to Right Justify
$ nl -bt -nrn /etc/hosts.allow

     1  #
     2  # hosts.allow   This file describes the names of the hosts which are
     3  #               allowed to use the local INET services, as decided
     4  #               by the '/usr/sbin/tcpd' server.
     5  #
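Beyond justification, nl's separator and field width can also be tuned. A small sketch (not from the original post) using inline sample input, so it can be tried anywhere:

```shell
# Number only non-blank lines (-bt), field width 3 (-w3),
# and use ') ' instead of a tab as the separator (-s)
printf 'alpha\n\nbeta\n' | nl -bt -w3 -s') '
```

The separator string follows the number, which is handy when generating numbered listings for documentation.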

Wednesday, December 26, 2012

Switching between Ethernet and Infiniband using Virtual Protocol Interconnect (VPI)

This short write-up is a summary of the article Switching between Ethernet and Infiniband using Virtual Protocol Interconnect (VPI). Note that you will need the QSA Adapter (QSFP+ to SFP+ adapter), the first solution to the QSFP-to-SFP+ conversion challenge, to connect 40Gb/Infiniband ports to 10G/1G Ethernet. For more information on the hardware, see Quad to Serial Small Form Factor Pluggable (QSA) Adapter.


For the full article, see Switching between Ethernet and Infiniband using Virtual Protocol Interconnect (VPI)

Overview
mlx4 is the low level driver implementation for the ConnectX adapters designed by Mellanox Technologies. The ConnectX can operate as an InfiniBand adapter, as an Ethernet NIC, or as a Fibre Channel HBA. The driver in OFED 1.4 supports Infiniband and Ethernet NIC configurations. To accommodate the supported configurations, the driver is split into three modules:
  1. mlx4_core
    Handles low-level functions like device initialization and firmware commands processing. Also controls resource allocation so that the InfiniBand and Ethernet functions can share the device without interfering with each other.
  2. mlx4_ib
    Handles InfiniBand-specific functions and plugs into the InfiniBand midlayer
  3. mlx4_en
    A new 10G driver named mlx4_en was added to drivers/net/mlx4. It handles Ethernet specific functions and plugs into the netdev mid-layer.
Using Virtual Protocol Interconnect (VPI) to switch between Ethernet and Infiniband
Loading Drivers
  1. The VPI driver is a combination of the Mellanox ConnectX HCA Ethernet and Infiniband drivers. It supplies the user with the ability to run Infiniband and Ethernet protocols on the same HCA.
  2. Check that the MLX4 Ethernet driver is configured to load, by ensuring the following is set:
    # vim /etc/infiniband/openib.conf
    # Load MLX4_EN module
    MLX4_EN_LOAD=yes
  3. If MLX4_EN_LOAD=no, the Ethernet driver can be loaded manually by running
    # /sbin/modprobe mlx4_en
Port Management / Driver Switching
  1. Show Port Configuration
    # /sbin/connectx_port_config -s
    --------------------------------
    Port configuration for PCI device: 0000:16:00.0 is:
    eth
    eth
    --------------------------------
  2. Looking at saved configuration
    # vim /etc/infiniband/connectx.conf
  3. Switching between Ethernet and Infiniband
    # /sbin/connectx_port_config
  4. Configuration supported by VPI
    - The following configurations are supported by VPI:
     Port1 = eth   Port2 = eth
     Port1 = ib    Port2 = ib
     Port1 = auto  Port2 = auto
     Port1 = ib    Port2 = eth
     Port1 = ib    Port2 = auto
     Port1 = auto  Port2 = eth
    
      Note: the following options are not supported:
     Port1 = eth   Port2 = ib
     Port1 = eth   Port2 = auto
     Port1 = auto  Port2 = ib
For more information, see
  1. ConnectX-3 VPI Single and Dual QSFP+ Port Adapter Card User Manual (pdf)
  2. Open Fabrics Enterprise Distribution (OFED) ConnectX driver (mlx4) in OFED 1.4 Release Notes

Friday, December 21, 2012

Quad to Serial Small Form Factor Pluggable (QSA) Adapter


Quad to Serial Small Form Factor Pluggable (QSA) Adapter designed by Mellanox Technologies is the world’s first solution for the QSFP to SFP+ conversion challenge.

The QSA enables smooth, cost-effective, connections between Virtual Protocol Interconnect® (VPI) or 40 Gigabit Ethernet adapters using contemporary QSFP ports and 1 or 10 Gigabit Ethernet networks using existing SFP or SFP+ based cabling. Similarly Ethernet switches with 40Gb/s QSFP ports can connect to servers with 10Gb/s Ethernet NIC ports using QSA.

For more information, see Quad to Serial Small Form Factor Pluggable (QSA) Adapter from Mellanox

Thursday, December 20, 2012

Using getent to query /etc/nsswitch.conf

The getent program is useful for querying the databases configured in /etc/nsswitch.conf. Some typical usages include:

Example 1: To get the user1 entry at /etc/passwd, you will do something like 
# getent passwd user1
user1:x:604:100:User 1:/home/user1:/bin/bash

Example 2: To get the hosts entry at /etc/hosts, you will do something like
# getent hosts node1
192.168.1.5     node1.private.mycluster.com node1

Example 3: To get the groups at /etc/group, you will do something like
# getent group gaussian
gaussian:x:501:user1,user2
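Example 4 (a quick sanity check, not in the original list): querying an account that exists on practically every Linux system confirms that getent is following the lookup order on the "passwd:" line of /etc/nsswitch.conf.

```shell
# Query the passwd database for root via the configured name services;
# print just the login name and UID fields
getent passwd root | cut -d: -f1,3
```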

For more information, see getent Linux manual pages

Wednesday, December 19, 2012

Information for configuring Microsoft HPC Server

If you are looking to configure Microsoft HPC Server 2008 R2, you may want to take a look at the resources here:

  1. Windows HPC Server 2008 R2  - Step by Step (pdf) (Resource Kit)
  2. Windows HPC Server 2008 R2

Tuesday, December 18, 2012

Commercial Solution for Check-pointing by Smart Suspend

If you are looking for a commercial checkpointing solution where executing jobs can be reliably suspended and resumed at will, you may want to take a look at Smart Suspend by Jaryba. According to the SmartSuspend website:

Jaryba SmartSuspend (SSR) is a grid workload management solution that enables executing jobs to be reliably suspended and resumed at will. While suspended, the job's hardware (CPU and memory) and license resources are released, making those resources available to other jobs. As a user space technology, SSR achieves this without any modification to the underlying operating system (OS) or the applications under management. Licenses, memory and CPU are cleanly reacquired when a job is resumed.

For more information on how SmartSuspend Works see
  1. How SmartSuspend Works
  2. Suspension Examples
  3. Using SmartSuspend

Monday, December 17, 2012

QUEST 1.3.0 and forrtl severe (173) error

If you are running code that uses QUEST 1.3.0 compiled with Intel XE, you may encounter the error

forrtl: severe (173): 
A pointer passed to DEALLOCATE points to an array that cannot be deallocated

Do note that QUEST used to work with Intel's ifort, but Intel has tightened its memory allocation/deallocation checking, hence the error you see. It is recommended that you use gfortran for the compilation.

Wednesday, December 12, 2012

Good Redbook read - IBM Platform Computing Solutions



A good Redbook read: IBM Platform Computing Solutions. The abstract, which is taken from the site:

This IBM® Platform Computing Solutions Redbooks® publication is the first book to describe each of the available offerings that are part of the IBM portfolio of Cloud, analytics, and High Performance Computing (HPC) solutions for our clients. This IBM Redbooks publication delivers descriptions of the available offerings from IBM Platform Computing that address challenges for our clients in each industry. We include a few implementation and testing scenarios with selected solutions............

The chapters are as follows:

Chapter 1. Introduction to IBM Platform Computing
Chapter 2. Technical computing software portfolio
Chapter 3. Planning
Chapter 4. IBM Platform Load Sharing Facility (LSF) product family
Chapter 5. IBM Platform Symphony
Chapter 6. IBM Platform High Performance Computing
Chapter 7. IBM Platform Cluster Manager Advanced Edition
Appendix A. IBM Platform Computing Message Passing Interface
Appendix B. Troubleshooting examples
Appendix C. IBM Platform Load Sharing Facility add-ons and examples
Appendix D. Getting started with KVM provisioning

Monday, December 10, 2012

Modifying default template for user settings in Linux

If you wish to put or modify a standard template used when creating new users, place the files in /etc/skel. The /etc/skel directory acts as a container for the typical .bashrc, .bash_profile, or other scripts that you want all new users to have by default. In CentOS, you would typically see

drwxr-xr-x   3 root root  4096 Oct 31 13:12 .
drwxr-xr-x 126 root root 12288 Dec 11 22:48 ..
-rw-r--r--   1 root root    33 Jan 22  2009 .bash_logout
-rw-r--r--   1 root root   290 Oct 31 13:12 .bash_profile
-rw-r--r--   1 root root   176 Jan 22  2009 .bash_profile.old
-rw-r--r--   1 root root   461 Oct 31 13:12 .bashrc
-rw-r--r--   1 root root   124 Jan 22  2009 .bashrc.old
-rw-r--r--   1 root root   515 Jun 15  2008 .emacs
drwxr-xr-x   4 root root  4096 Sep  9  2010 .mozilla
-rw-r--r--   1 root root   658 Sep 22  2009 .zshrc

So when you do a useradd, you invoke the following workflow: the default values used by the useradd command, plus inclusion of the template found in /etc/skel.
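To see the mechanism without creating a real account, you can copy a skeleton directory by hand the same way useradd does. This sketch uses a throwaway directory (the paths are made up for illustration) rather than the real /etc/skel:

```shell
# Build a miniature skeleton directory (a stand-in for /etc/skel)
mkdir -p /tmp/skel-demo/skel /tmp/skel-demo/home/user1
printf 'export EDITOR=vim\n' > /tmp/skel-demo/skel/.bashrc

# Copy it into the new home directory, dotfiles included,
# which is what useradd -m does with /etc/skel
cp -a /tmp/skel-demo/skel/. /tmp/skel-demo/home/user1/
ls -A /tmp/skel-demo/home/user1
```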

Sunday, December 9, 2012

Black Screen when reconnecting back to old VNC Server when hostname was changed

When I was reconnecting to an old VNC session, I got a black, unresponsive screen, with no way to get back to its contents. Prior to reconnecting, the hostname on the VNC server had been changed.

VNC uses the hostname and the session ID to identify a session. You can take a look at the contents of ~/.vnc/

$ ls ~/.vnc/

headnode-h00.mycluster.sg:33.pid
headnode-h00.mycluster.sg:33.log
headnode-h00.mycluster.sg:40.pid
headnode-h00.mycluster.sg:40.log
headnode-h00.mycluster.sg:42.log

To get back to a session, assuming the VNC server is running and the network connection is fine, check that the hostname of the server has not been accidentally changed.

$ hostname

headnode-h00.mycluster.sg
If the hostname has changed, do read the blog entries
  1. Changing the hostname on CentOS 
  2. Another look at Changing hostname for CentOS 

Thursday, December 6, 2012

libstdc++.so.5()(64bit) is needed

If you have an error, for example something like this

libstdc++.so.5()(64bit) is needed by gpfs.base-3.4.0-0.x86_64
libstdc++.so.5(CXXABI_1.2)(64bit) is needed by gpfs.base-3.4.0-0.x86_64
libstdc++.so.5(GLIBCPP_3.2)(64bit) is needed by gpfs.base-3.4.0-0.x86_64
libstdc++.so.5(GLIBCPP_3.2.2)(64bit) is needed by gpfs.base-3.4.0-0.x86_64

The error is due to missing legacy libraries compat-libstdc++. For CentOS, just do a

 yum install compat-libstdc++*
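Before installing, it can be worth checking whether a legacy libstdc++ is already visible to the dynamic linker. A quick, non-destructive check (runs without installing anything):

```shell
# List the cached shared libraries and look for any libstdc++ version;
# fall back to a message if none is cached
ldconfig -p 2>/dev/null | grep libstdc++ || echo "no libstdc++ in linker cache"
```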

Tuesday, December 4, 2012

Debugging gmond issue quickly

If you need to debug gmond issues quickly, use the command

# /usr/sbin/gmond --debug=9

loaded module: core_metrics
loaded module: cpu_module
loaded module: disk_module
loaded module: load_module
loaded module: mem_module
loaded module: net_module
loaded module: proc_module
loaded module: sys_module
loaded module: multicpu_module
udp_recv_channel mcast_join=NULL mcast_if=NULL port=8649 bind=NULL
tcp_accept_channel bind=NULL port=8649
Unable to create tcp_accept_channel. Exiting.

For more information and example see: 
  1. Ganglia Node unable to update Gmetad Node 
  2. Gmond dead but subsys locked for ganglia monitoring daemon

Monday, December 3, 2012

Default values used by useradd command

When a user issues the useradd command, useradd reads /etc/default/useradd and /etc/login.defs to determine the default values. To display the values in /etc/default/useradd, see Displaying defaults for useradd

Do read also Modifying default template for user settings in Linux, which covers the template files given to new users.

To read the /etc/login.defs,
# vim /etc/login.defs

# Password aging controls:
#
#       PASS_MAX_DAYS   Maximum number of days a password may be used.
#       PASS_MIN_DAYS   Minimum number of days allowed between password changes.
#       PASS_MIN_LEN    Minimum acceptable password length.
#       PASS_WARN_AGE   Number of days warning given before a password expires.
#
PASS_MAX_DAYS   99999
PASS_MIN_DAYS   0
PASS_MIN_LEN    5
PASS_WARN_AGE   7

#
# Min/max values for automatic uid selection in useradd
#
UID_MIN                   500
UID_MAX                 60000

#
# Min/max values for automatic gid selection in groupadd
#
GID_MIN                   500
GID_MAX                 60000

#
# If defined, this command is run when removing a user.
# It should remove any at/cron/print jobs etc. owned by
# the user to be removed (passed as the first argument).
#
#USERDEL_CMD    /usr/sbin/userdel_local

#
# If useradd should create home directories for users by default
# On RH systems, we do. This option is overridden with the -m flag on
# useradd command line.
#
CREATE_HOME     yes

# The permission mask is initialized to this value. If not specified,
# the permission mask will be initialized to 022.
UMASK           077

# This enables userdel to remove user groups if no members exist.
#
USERGROUPS_ENAB yes

# Use MD5 or DES to encrypt password? Red Hat use MD5 by default.
MD5_CRYPT_ENAB yes

ENCRYPT_METHOD MD5
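To list only the active settings (skipping comments and blank lines), a simple grep on lines that start with an uppercase key works. Shown here against a small inline sample rather than the live file:

```shell
# Active login.defs settings start at column 1 with an uppercase key;
# comment lines start with '#' and are filtered out
printf '# comment\nPASS_MAX_DAYS 99999\n\nUMASK 077\n' | grep -E '^[A-Z]'
```

Run the same grep against /etc/login.defs to see the effective settings on your system.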

Friday, November 30, 2012

Quick Listing of users who have current login session

If you wish to have a quick listing of users who are currently logged on to your server, you can use the command "users". If a user is running multiple sessions, they will appear multiple times.

# users
root root user1 user2 user3 user3
There are more comprehensive tools like who and finger; I will cover them in future blog entries.
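Since each session is listed separately, you can collapse the output of users into unique user names with tr and sort. Illustrated here with the sample output above piped in, so the result is deterministic:

```shell
# Collapse duplicate sessions into a unique, sorted user list
printf 'root root user1 user2 user3 user3\n' | tr ' ' '\n' | sort -u
```

On a live system, simply run `users | tr ' ' '\n' | sort -u`.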

Thursday, November 29, 2012

Using host command as an alternative to nslookup

host is a simple utility for performing DNS lookups. It is normally used to convert names to IP addresses and vice versa. When no arguments or options are given, host prints a short summary of its command line arguments and options.

Common basic Usages

Using host command to check resolving DNS Servers
# host www.google.com.sg

www.google.com.sg has address 173.194.38.159
www.google.com.sg has address 173.194.38.151
www.google.com.sg has address 173.194.38.152
www.google.com.sg has IPv6 address 2404:6800:4003:802::1018

Using host command with "-a" to display a query of type ANY

# host -a google.com.sg

Trying "google.com.sg"
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 51686
;; flags: qr rd ra; QUERY: 1, ANSWER: 14, AUTHORITY: 4, ADDITIONAL: 3

;; QUESTION SECTION:
;google.com.sg.                 IN      ANY

;; ANSWER SECTION:
google.com.sg.          177     IN      TXT     "v=spf1 -all"
google.com.sg.          86277   IN      SOA     ns1.google.com. dns-admin.google.com. 2012032600 21600 3600 1209600 300
google.com.sg.          177     IN      AAAA    2404:6800:4003:801::101f
google.com.sg.          177     IN      A       74.125.235.63
google.com.sg.          177     IN      A       74.125.235.55
google.com.sg.          177     IN      A       74.125.235.56
google.com.sg.          10677   IN      MX      10 google.com.s9b1.psmtp.com.
google.com.sg.          10677   IN      MX      10 google.com.s9b2.psmtp.com.
google.com.sg.          10677   IN      MX      10 google.com.s9a1.psmtp.com.
google.com.sg.          10677   IN      MX      10 google.com.s9a2.psmtp.com.
google.com.sg.          345477  IN      NS      ns2.google.com.
google.com.sg.          345477  IN      NS      ns3.google.com.
google.com.sg.          345477  IN      NS      ns1.google.com.
google.com.sg.          345477  IN      NS      ns4.google.com.

;; AUTHORITY SECTION:
google.com.sg.          345477  IN      NS      ns2.google.com.
google.com.sg.          345477  IN      NS      ns3.google.com.
google.com.sg.          345477  IN      NS      ns1.google.com.
google.com.sg.          345477  IN      NS      ns4.google.com.

;; ADDITIONAL SECTION:
ns1.google.com.         188413  IN      A       216.239.32.10
ns2.google.com.         188413  IN      A       216.239.34.10
ns3.google.com.         188413  IN      A       216.239.36.10

Using host -t parameter to select the query type
# host -t MX google.com.sg

google.com.sg mail is handled by 10 google.com.s9b2.psmtp.com.
google.com.sg mail is handled by 10 google.com.s9a1.psmtp.com.
google.com.sg mail is handled by 10 google.com.s9a2.psmtp.com.
google.com.sg mail is handled by 10 google.com.s9b1.psmtp.com.

Monday, November 26, 2012

Using free to see memory and cache usage

The utility free is a useful tool on Linux to help analyse memory usage.

To get the output below, I issued the command
# free -mt
where
-m => Display amounts in megabytes
-t => Add a Total row at the bottom
-s N => Run continuously and update the display every N seconds
             total       used       free     shared    buffers     cached
Mem:          7871       5995       1875          0        310       3729
-/+ buffers/cache:       1956       5914
Swap:        10047        283       9764
Total:       17919       6279      11640

How do we interpret the results? I would like to credit the article Determining free memory on Linux for a simple but very good explanation of memory. Here is what I have gathered from the read-up.

Basically, Linux caches blocks from the disk in memory to make data reading as efficient as possible. The buffer cache will shrink to accommodate the increased memory needs.

The actual free memory is the free column (1875) + buffers (310) + cached (3729) = 5914 MB, which matches the "-/+ buffers/cache" free figure. So there is more free memory than it first appears.
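The same arithmetic can be read straight from /proc/meminfo, which is where free gets its numbers. A rough sketch (kB values summed and printed in MB):

```shell
# Approximate "really free" memory = MemFree + Buffers + Cached,
# summed from /proc/meminfo (values there are in kB)
awk '/^(MemFree|Buffers|Cached):/ {sum += $2}
     END {printf "approx available: %d MB\n", sum/1024}' /proc/meminfo
```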

Friday, November 23, 2012

Compiling and Installing Boost C++ Libraries on CentOS

The Boost C++ libraries provide free peer-reviewed portable C++ source libraries. Boost libraries are intended to be widely useful, and usable across a broad spectrum of applications.

Easy Build and Install (Taken from Getting Started on Unix Variants)

$ cd path/to/boost_1_52_0
$ ./bootstrap.sh --help

Select your configuration options and invoke ./bootstrap.sh again without the --help option. Unless you have write permission in your system's /usr/local/ directory, you'll probably want to at least use
$ ./bootstrap.sh --prefix=/usr/local/boost
$ ./b2 install
This will leave Boost binaries in the lib/ subdirectory of your installation prefix. You will also find a copy of the Boost headers in the include/ subdirectory of the installation prefix, so you can henceforth use that directory as an #include path in place of the Boost root directory.

Thursday, November 22, 2012

JELLYFISH - Fast, Parallel k-mer Counting for DNA


What is Jellyfish - Fast, Parallel k-mer Counting for DNA?
(Taken from Jellyfish Site)
JELLYFISH is a tool for fast, memory-efficient counting of k-mers in DNA. A k-mer is a substring of length k, and counting the occurrences of all such substrings is a central step in many analyses of DNA sequence. JELLYFISH can count k-mers using an order of magnitude less memory and an order of magnitude faster than other k-mer counting packages by using an efficient encoding of a hash table and by exploiting the "compare-and-swap" CPU instruction to increase parallelism.

JELLYFISH is a command-line program that reads FASTA and multi-FASTA files containing DNA sequences. It outputs its k-mer counts in a binary format, which can be translated into a human-readable text format using the "jellyfish dump" command. See the documentation below for more details.



Requirements:

JELLYFISH runs on 64-bit Intel-compatible processors running Linux or FreeBSD (including Intel Macs). It requires GNU GCC to compile.


Download (current version 1.1.6.):
http://www.cbcb.umd.edu/software/jellyfish/jellyfish-1.1.6.tar.gz


Installation:
# ./configure --prefix=/usr/local/jellyfish
# make
# make install

Testing- Test 1
# make check

... 
...
====================
All 19 tests passed
(1 test was not run)
====================
...
...
All tests should pass and 1 test should be skipped (big.sh). Running
'make check' will use about 50MB of disk space and will use every CPU
found on the machine. On our test machine with 32 cores, it takes a
few minutes to run.

Testing -Test 2
# make check BIG=1

....
....
PASS: tests/generate_sequence.sh
PASS: tests/serial_hashing.sh
PASS: tests/parallel_hashing.sh
PASS: tests/serial_direct_indexing.sh
PASS: tests/parallel_direct_indexing.sh
....
....

Wednesday, November 21, 2012

Basic Installation of Quake - Package to correct substitution sequencing errors in experiments with deep coverage

What is Quake?
(Taken from Quake Site)

Quake is a package to correct substitution sequencing errors in experiments with deep coverage (e.g. >15X), specifically intended for Illumina sequencing reads. Quake adopts the k-mer error correction framework, first introduced by the EULER genome assembly package. Unlike EULER and similar programs, Quake utilizes a robust mixture model of erroneous and genuine k-mer distributions to determine where errors are located. Then Quake uses read quality values and learns the nucleotide to nucleotide error rates to determine what types of errors are most likely. This leads to more corrections and greater accuracy, especially with respect to avoiding mis-corrections, which create false sequence dissimilar to anything in the original genome sequence from which the read was taken.

Setting up is quite straight-forward, just untar in an appropriate directory.
# tar -zxvf quake-0.3.4.tar.gz
# cd Quake/src

Edit the Makefile if you are using Linux (point CFLAGS to the Boost directory). The Boost software can be downloaded at Boost C++ Libraries
CC=g++
CFLAGS=-O3 -fopenmp -I/usr/local/boost/include/boost -I.
LDFLAGS=-L. -lgzstream -lz
.....
.....

To complete the installation, do a make in the src directory
# make

You should see the executables in the src directory.

Tuesday, November 20, 2012

IBM Mobile Solution

IBM Mobile Solution, based on the Worklight and Mobile Foundation platforms, is making waves in enterprise mobile solutioning. Here are some news excerpts:

  1. IBM Interactive has already been named a leader in building engaging mobile apps. The IBM Mobile press release, in conjunction with the communications and assets below, continues to build out our mobile story and reinforce IBM as a leader in the mobile space:
  2. Forbes article (IBM Takes Mobile To The Road With Tools & Strategy) illustrates how the IBM mobile portfolio offers a complete solution set. Positive press and analyst reactions serve to further validate the strength of our offerings.
  3. The Wall Street Journal published an IBM Op-Ad entitled, "The Mobile World is Open for Business." The piece established IBM's mobile leadership and highlighted examples of companies using our products and services.


Monday, November 19, 2012

Timestamp for BASH history

If you want to record timestamps in your BASH history so that you can better trace what was run at a certain time, it is quite easy to implement.

$ echo 'export HISTTIMEFORMAT="%h/%d - %H:%M:%S "' >> ~/.bashrc

$ source .bashrc

$ history

  ....
  ....
  997  Nov/20 - 13:46:11 vim .bashrc
  998  Nov/20 - 13:46:17 source .bashrc
  999  Nov/20 - 13:46:26 ls
 1000  Nov/20 - 13:46:36 exit
 1001  Nov/20 - 13:46:50 ls
 1002  Nov/20 - 13:46:54 ssh server-c00
 1003  Nov/20 - 13:47:15 ls
 1004  Nov/20 - 13:47:19 ls -al
 1005  Nov/20 - 13:47:27 history
 1006  Nov/20 - 13:56:58 history
 1007  Nov/20 - 13:57:02 history|more
 1008  Nov/20 - 13:58:47 history|more
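The HISTTIMEFORMAT string is made of strftime(3) specifiers, so you can preview what any format will look like with date before committing it to ~/.bashrc:

```shell
# Preview the %h/%d - %H:%M:%S format used above
# (%h = abbreviated month, %d = day, then time)
date +"%h/%d - %H:%M:%S"
```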

Friday, November 16, 2012

Graphical Interface to manage runlevels - ntsysv


If you would like a GUI front-end for chkconfig, you may want to use the utility ntsysv to manage runlevel services. Installation could not be easier.

# yum install ntsysv


================================================================================
 Package            Arch            Version                 Repository     Size
================================================================================
Updating:
 ntsysv             x86_64          1.3.49.3-2.el6          base           29 k
Updating for dependencies:
 chkconfig          x86_64          1.3.49.3-2.el6          base          159 k

Transaction Summary
================================================================================
Upgrade       2 Package(s)

Total download size: 188 k
Is this ok [y/N]:y 

Wednesday, November 14, 2012

Compiling and Installing GAP System for Computational Discrete Algebra

GAP is a system for computational discrete algebra, with particular emphasis on Computational Group Theory. GAP provides a programming language, a library of thousands of functions implementing algebraic algorithms written in the GAP language, as well as large data libraries of algebraic objects. The information below is taken from the GAP installation instructions.

Step 1: Download the GAP software at http://www.gap-system.org/Releases/index.html. The current version at the point of writing is 4.5.6
# tar -zxvf gap4r5p6_2012_11_04-18_46.tar.gz
# cd gap4r5p6
Step 2: Configure and install (default installation)
# ./configure
# make
Step 3: Optional installation - GMP package. If you wish to use GAP's internal GMP, then the version of GMP bundled with GAP will be used. This is the default.
# ./configure --with-gmp=yes|no|system|"path"
Step 4: Optional installation - Readline support for better command-line editing. If the argument you supply is yes, then GAP will look in standard locations for a Readline installed on your system. Or you can specify a path to a Readline installation.
# ./configure --with-readline=yes|no|"path"
For more information, see the INSTALL file in the unpacked GAP directory.

Monday, November 12, 2012

Apache Server Setting Mistakes Can Aid Hackers

This article from Michael J. Schwartz on InformationWeek, titled Apache Server Setting Mistakes Can Aid Hackers, is worth a read:

According to a study of 10 million websites released last week, more than 2,000 sites -- including big-name businesses such as Cisco, Ford and Staples -- have left the status pages for their Apache servers visible, which could give attackers information that would help them penetrate corporate networks.
.......
.......
According to Apache documentation, the Apache mod_status module "allows a server administrator to find out how well their server is performing," via an HTML page that delivers up-to-date server statistics. "It is basically an HTML page that displays the number of [processes] working, status of each request, IP addresses that are visiting the site, pages that are being queried and things like that. All good," said Cid in a related blog post. 

"However, this feature can also have security implications if you leave it wide open to the world. Anyone would be able to see who is visiting the site, the URLs and sometimes even find hidden -- obscure -- admin panels or files that should not be visible to the outside," he said. "That can help attackers easily find more information about these environments and use them for more complex attacks." 

......
......
For more information, I encourage you to read the full article, Apache Server Setting Mistakes Can Aid Hackers.

Friday, November 9, 2012

Installing and Configuring Environment Modules on CentOS 6

This tutorial is very similar to Installing and Configuring Environment Modules on CentOS 5, and the steps are much the same for CentOS 6, except that the tcl/tk 8.5.x in the CentOS repository does not include tclConfig.sh, which is needed when you compile the Modules package. I used 8.4.x, which is similar to the version used in the CentOS 5 repository. You can also use a more updated version of tcl.

See Installing and Configuring Environment Modules on CentOS 6.

Further Information
  1. Installing and Configuring Environment Modules on CentOS 5
  2. Usage of Environment Modules on CentOS and in Cluster

Wednesday, November 7, 2012

Usage of Environment Modules on CentOS and in Cluster


This is the 2nd Part continuation of Installing and Configuring Environment Modules on CentOS 5


The write-up, Usage of Environment Modules on CentOS and in Cluster, covers basic usage of module commands like module avail, module load, module unload and module switch.

Tuesday, November 6, 2012

Installing and Configuring Environment Modules on CentOS 5

Here is a short tutorial on how to install and configure Environment Modules on CentOS. See
Installing and Configuring Environment Modules on CentOS 5 for more details.
In the short tutorial, I have documented the steps to:
  1. Tools of managing User Environment in Linux
  2. Unpacking, Installing and Configure Environment Modules. The Environment Modules Package can be taken from Environment Modules Project
  3. Creating a sample Module File for Intel Compilers

Sunday, November 4, 2012

Tools of managing User Environment in Linux

As far as I know there are 2 tools used to manage the environment.

1. One tool is SoftEnv Manual
  • Softenv is a system used to build the user's environment. Each user has a ".soft" file in which they specify groups of applications that they're interested in. Softenv reads a central database when necessary to update the user's PATH, MANPATH and other variables. This version of SoftEnv is currently being used at MCS. 
  • Current version 1.6.2. The last time it was updated is 12 March 2007
  • Download Site - SoftEnv

2. The other tool is Modules - Software Environment Management
  • The Environment Modules package provides for the dynamic modification of a user's environment via modulefiles.

    Each modulefile contains the information needed to configure the shell for an application. Once the Modules package is initialized, the environment can be modified on a per-module basis using the module command which interprets modulefiles. Typically modulefiles instruct the module command to alter or set shell environment variables such as PATH, MANPATH, etc. modulefiles may be shared by many users on a system and users may have their own collection to supplement or replace the shared modulefiles. 
  • Current Version modules-3.2.9c. The last update was 19th Nov 2011
  • Download Site - Environment Modules


Wednesday, October 31, 2012

Installing NFS4 on CentOS 5 and 6

Taken from Installing NFS4 on CentOS 5 and 6 (my alternative Linux Cluster Blog). This tutorial is a guide on how to install NFSv4 on CentOS 5 and 6.

Step 1: Installing the packages
# yum install nfs-utils nfs4-acl-tools portmap
Some facts about the tools above as given from yum info.
nfs-utils -  The nfs-utils package provides a daemon for the kernel NFS server and related tools, which provides a much higher level of performance than the traditional Linux NFS server used by most users.
This package also contains the showmount program.  Showmount queries the mount daemon on a remote host for information about the NFS (Network File System) server on the remote host. For example, showmount can display the clients which are mounted on that host. This package also contains the mount.nfs and umount.nfs program.
nfs4-acl-tools - This package contains command-line and GUI ACL utilities for the Linux NFSv4 client.
portmap - The portmapper program is a security tool which prevents theft of NIS (YP), NFS and other sensitive information via the portmapper. A portmapper manages RPC connections, which are used by protocols like NFS and NIS.
The portmap package should be installed on any machine which acts as a server for protocols using RPC.


Step 2: Export the file system from the NFS server (similar to NFSv3, except with the inclusion of fsid=0)
/home           192.168.1.0/24(rw,no_root_squash,sync,no_subtree_check,fsid=0)
/install        192.168.1.0/24(rw,no_root_squash,sync,no_subtree_check,fsid=1)
The fsid=0 and fsid=1 options provide a number to use in identifying the filesystem. This number must be different for all the filesystems in /etc/exports that use the fsid option. The option is only necessary for exporting filesystems that reside on a block device with a minor number above 255. Only one directory can be exported with each fsid option.

Export the file system
# exportfs -av

Start the NFS service
# service nfs start
If you are also supporting NFSv3, you have to restart portmap, as NFSv3 requires it. NFSv4, by contrast, does not need to interact with the rpcbind, rpc.lockd, and rpc.statd daemons. For a more in-depth understanding, see Fedora's Chapter 9, Network File System (NFS) – How It Works.
# service portmap restart


Step 3: Client Mounting
# mount -t nfs4 192.168.1.1:/ /home

Tuesday, October 30, 2012

Updating the udev configuration on CentOS

This is an add-on to the blog entries
  1. "Device eth0 does not seem to be present" on cloned CentOS VM
  2. Cannot get device settings No such device.
After modifying and updating the udev configuration as described in the two blog entries above, you can reload the new udev configuration into memory using the command start_udev.
# start_udev

Update the network configuration.
# service network restart


Further information:
  1. Look at Changing the ethX to Ethernet Device Mapping in EL6


Monday, October 29, 2012

Tools for OpenFlow


What is OpenFlow?

Taken from http://www.openflow.org/

OpenFlow is an open standard that enables researchers to run experimental protocols in the campus networks we use every day. OpenFlow is added as a feature to commercial Ethernet switches, routers and wireless access points – and provides a standardized hook to allow researchers to run experiments, without requiring vendors to expose the internal workings of their network devices. OpenFlow is currently being implemented by major vendors, with OpenFlow-enabled switches now commercially available.

Tools for Software Defined Network Controller
  1. Floodlight
  2. Nox


Wednesday, October 24, 2012

Go Parallel - Wonderful Resource for Parallel Software Development

The Go Parallel website, which is sponsored by Intel in partnership with Geeknet, provides a portal for parallel software development, with videos, tutorials, news and more.

Do check out the site.

Tuesday, October 23, 2012

NFS4 Information from the University of Michigan

NFSv4 information can be found at the University of Michigan Center for Information Technology Integration's project, NFS Version 4 Open Source Reference Implementation.

I like the RFC 3530 definition of NFSv4 as written on the site:

The Network File System (NFS) version 4 is a distributed filesystem protocol which owes heritage to NFS protocol version 2, RFC 1094, and version 3, RFC 1813. Unlike earlier versions, the NFS version 4 protocol supports traditional file access while integrating support for file locking and the mount protocol. In addition, support for strong security (and its negotiation), compound operations, client caching, and internationalization have been added. Of course, attention has been applied to making NFS version 4 operate well in an Internet environment. 

Interesting and relevant information

  1. NFSv4 wiki
    (Includes information on 4.1, pNFS prototype)
  2. Connectathon Test Suite
  3. General troubleshooting recommendations
  4. Performance and Stress tests for NFS

Monday, October 22, 2012

NFS4 Client unable to mount Server NFS4 file

When mounting NFSv4 on a CentOS 5 client against a CentOS 6 server, I received the error below.
# mount -t nfs4 192.168.1.1:/tmp /home
mount.nfs4: 192.168.1.1:/tmp failed, reason given by server: 
No such file or directory.

My NFSv4 server's exports
/tmp   192.168.1.0/255.255.255.0(rw,no_root_squash,sync,no_subtree_check,fsid=0)

On the NFSv4 server, I re-exported the file system and started the NFS service
# exportfs -av
# service nfs start

I was having the issue because I failed to understand a characteristic of NFSv4: it uses a virtual file system to present the server’s exports and associated root filehandles to the client. The key idea is to understand what fsid=0 means on the NFS server. For more information, do look at
A brief look at the difference between NFSv3 and NFSv4

The solution is to mount the root of the server's pseudo file system:
# mount -t nfs4 192.168.1.1:/ /home



Friday, October 19, 2012

A brief look at the difference between NFSv3 and NFSv4

There are a few interesting differences between NFSv3 and NFSv4. A side-by-side comparison of NFSv3 and NFSv4 is quite hard to obtain; the information here is referenced from the NFS Version 4 Open Source Project.
From a file system perspective, the differences are:
Export Management
  1. In NFSv3, the client must rely on an auxiliary protocol, the mount protocol, to request the list of the server’s exports and to obtain the root filehandle of a given export. Once the root filehandle is obtained, it is fed into the NFS protocol proper.
  2. NFSv4 uses a virtual file system to present the server’s exports and associated root filehandles to the client.
  3. NFSv4 defines a special operation to retrieve the root filehandle, and the NFS server presents the appearance to the client that each export is just a directory in the pseudofs.
  4. The NFSv4 pseudo file system is designed to provide maximum flexibility: export pathnames on servers can be changed transparently to clients.
State
  1. NFSv3 is stateless. In other words, if the server reboots, the clients can pick up where they left off; no state is lost.
  2. NFSv3 is typically used with NLM, an auxiliary protocol for file locking. NLM is stateful: the server’s lockd keeps track of locks.
  3. In NFSv4, locking operations are part of the protocol.
  4. NFSv4 servers keep track of open files and delegations.
Blocking Locks
  1. NFSv3 relies on NLM. Basically, the client process is put to “sleep”; when a callback is received from the server, the client process is granted the lock.
  2. In NFSv4, the client is also put to sleep, but it polls the server periodically for the lock.
  3. The benefit of this mechanism is that only one-way reachability from client to server is required, but it may be less efficient.

Saturday, October 13, 2012

IBM Interconnect 2012 Live Stream

An IBM InterConnect 2012 event in Singapore

View replay in Livestream

  1. A source of global innovation, the art of the possible in growth markets
    John Dunderdale, vice president of Software, IBM Growth Markets
  2. Turning opportunities into outcomes
    Steve Mills, senior vice president and Group Executive, IBM Software & Systems
  3. Unleashing innovation: The new economics of IT
    Rod Adkins, senior vice president, IBM Systems & Technology Group
  4. Managing the velocity of Change
    Robert LeBlanc, senior vice president, IBM Middleware Software
  5. Reinventing relationships and uncovering new markets
    Mike Rhodin, senior vice president, IBM Software Solutions Group
    Jim Bramante, senior vice president, IBM Growth Markets

Friday, October 12, 2012

PBS (Portable Batch System) Commands on Torque

There are some PBS directives that you can use in your customised PBS templates and scripts. Note: a line beginning with # is a comment; a line beginning with #PBS is a PBS directive; directives are case sensitive.

Job Name (Default)
#PBS -N jobname
Specifies the number of nodes (nodes=N) and the number of processors per node (ppn=M) that the job should use
#PBS -l nodes=2:ppn=8
Specifies the maximum amount of physical memory used by any process in the job.
#PBS -l pmem=4gb
Specifies maximum walltime (real time, not CPU time)
#PBS -l walltime=24:00:00
Queue Name (If default is used, there is no need to specify)
#PBS -q fastqueue
Group account (for example, g12345) to be charged
#PBS -W group_list=g12345
Put both normal output and error output into the same output file.
#PBS -j oe
Send me an email when the job begins, ends, or aborts
#PBS -m bea
#PBS -M mymail@mydomain.com
Export all my environment variables to the job
#PBS -V
Rerun this job if it fails
#PBS -r y
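
Putting the directives above together, a minimal job script might look like the following sketch (the job name and payload are placeholders; #PBS lines must appear before the first executable line):

```shell
#!/bin/bash
# Directives (each explained individually above)
#PBS -N myjob
#PBS -l nodes=2:ppn=8
#PBS -l pmem=4gb
#PBS -l walltime=24:00:00
#PBS -j oe
#PBS -m bea
#PBS -M mymail@mydomain.com
#PBS -V
#PBS -r y

# The actual work goes here; a placeholder payload for illustration:
cd "${PBS_O_WORKDIR:-.}"
echo "Job running on $(hostname)"
```

Submit it with qsub myjob.pbs and check its status with qstat.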

Wednesday, October 10, 2012

Predefined Environmental Variables for OpenPBS qsub

The following environment variables reflect the environment where the user ran qsub
  1. PBS_O_HOST - The host where you ran the qsub command
  2. PBS_O_LOGNAME - Your user ID where you ran qsub
  3. PBS_O_HOME - Your home directory where you ran qsub
  4. PBS_O_WORKDIR - The working directory where you ran qsub

The following reflect the environment where the job is executing
  1. PBS_ENVIRONMENT - Set to PBS_BATCH to indicate the job is a batch job, or to PBS_INTERACTIVE to indicate the job is a PBS interactive job
  2. PBS_O_QUEUE - The original queue you submitted to
  3. PBS_QUEUE - The queue the job is executing from
  4. PBS_JOBNAME - The job’s name
  5. PBS_NODEFILE - The name of the file containing the list of nodes assigned to the job
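
A quick way to inspect these variables is to echo them from inside a job script. The sketch below is hypothetical; the ${VAR:-fallback} defaults only take effect when it is run outside of a PBS job:

```shell
#!/bin/bash
# Print the PBS-provided environment; the :- fallbacks only matter
# when the script is run outside of PBS for testing.
echo "Submitted from host: ${PBS_O_HOST:-$(hostname)}"
echo "Submitting user:     ${PBS_O_LOGNAME:-$USER}"
echo "Working directory:   ${PBS_O_WORKDIR:-$PWD}"
echo "Queue:               ${PBS_QUEUE:-none}"
echo "Job name:            ${PBS_JOBNAME:-interactive}"

# PBS_NODEFILE lists one line per allocated processor slot
if [ -n "$PBS_NODEFILE" ] && [ -f "$PBS_NODEFILE" ]; then
    echo "Allocated slots: $(wc -l < "$PBS_NODEFILE")"
fi
```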

Tuesday, October 9, 2012

iWARP, RDMA and TOE

I have captured some basic information on iWARP, RDMA, TOE and RDMA communication....

Remote Direct Memory Access (RDMA) allows data to be transferred over a network from the memory of one computer to the memory of another computer without CPU intervention. There are 2 types of RDMA hardware: Infiniband and RDMA over IP (iWARP). The OpenFabrics Enterprise Distribution (OFED) stack provides a common interface to both types of RDMA hardware.

For more information: iWARP, RDMA and TOE by Linux Cluster

Saturday, October 6, 2012

I/O and filled disk error when running Molpro 2010

I encountered this error when running molpro 2010 binary on a compute node.

ERROR WRITING        32768 WORDS AT OFFSET   20630927. TO FILE 1  IMPLEMENTATION=d
f   FILE HANDLE=  1018  IERR=******
 ? Error 
 ? I/O error
 ? The problem occurs in writew
Write error in iow_direct_write; fd=12, l=32768, p=20630927; write returns -1
This may indicate a filled disk, or that a disk quota has been exceeded
1:1:fehler:: 21556614
(rank:1 hostname:node-c00.cluster.spms.ntu.edu.sg pid:3742):ARMCI DASSERT fail. 
src/armci.c:ARMCI_Error():276 cond:0  1: ARMCI aborting 21556614 (0x148ed86).
Write error in iow_direct_write; fd=12, l=32768, p=20630927; write returns -1
This may indicate a filled disk, or that a disk quota has been exceeded
3:3:fehler:: 21556614
(rank:3 hostname:node-c00.cluster.spms.ntu.edu.sg pid:3745):ARMCI DASSERT fail. 
src/armci.c:ARMCI_Error():276 cond:0  3: ARMCI aborting 21556614 (0x148ed86).
0:0:fehler:: 21556614
(rank:0 hostname:node-c00.cluster.spms.ntu.edu.sg pid:3741):ARMCI DASSERT fail. src/armci.c:ARMCI_Error():276 cond:0
  0: ARMCI aborting 21556614 (0x148ed86).
Write error in iow_direct_write; fd=12, l=32768, p=20630927; write returns -1
This may indicate a filled disk, or that a disk quota has been exceeded
2:2:fehler:: 21556614(rank:2 hostname:node-c00.cluster.spms.ntu.edu.sg pid:3744):ARMCI DASSERT fail. src/armci.c:ARMCI_Error():276 cond:0
  2: ARMCI aborting 21556614 (0x148ed86).
Write error in iow_direct_write; fd=12, l=32768, p=20590741; write returns -1
This may indicate a filled disk, or that a disk quota has been exceeded
5:5:fehler:: 21556614
(rank:5 hostname:node-c00.cluster.spms.ntu.edu.sg pid:3747):ARMCI DASSERT fail. src/armci.c:ARMCI_Error():276 cond:0
  5: ARMCI aborting 21556614 (0x148ed86).
As the error message suggests, a disk or partition used by Molpro 2010 is full, or a disk quota has been exceeded. Check your Molpro
  • Scratch file directories,
  • /tmp
  • quotas set by administrators.
Any of these can cause the errors above.

Friday, October 5, 2012

Goodbye to VSphere vRAM licensing


With the upcoming release of vSphere 5.1, VMware is removing the vRAM licensing requirements and returning to the previous CPU-based licensing model. You may want to read this interesting article on what is coming up besides the removal of the "vTax", as Microsoft coined it.

Information:
  1. For a good summary of the new features in VSphere 5.1, do look at VMware releases vSphere 5.1
  2.  Wave good-bye to VMware's unloved vSphere vRAM 'vTax'


Thursday, October 4, 2012

Encountering PBS chdir /home/user1 failed. No such file or directory

If you encounter an error like the following after you qsub on Torque:

PBS: chdir to /home/user1 failed: No such file or directory.

If you set OpenPBS to send mail, you may see the issue more clearly
Post job file processing error; job 17676.headnode-h00 on host node-c05/7+node-c05/6+node-c05/5+node-c05/4+node-c05/3+node-c05/2+node-c05/1+node-c05/0Unknown resource type  REJHOST=node-c05 MSG=invalid home directory '/home/user1' specified, errno=2 (No such file or directory)

Apparently, node-c05 had the /home directory unmounted, and the jobs sent to it were lost. The solution is very simple: remount the /home directory.

Friday, September 28, 2012

Flexible Power Solutions from Starline


I came across this interesting flexible power solution from Starline - Busline Power Distribution. The track busway allows different types of customised power supplies and components to be planted on the track and it can be tapped instantly at any location along the track. It is interesting to know that we can put 3-phase and single-phase along the same track.

Do look at
  1. Starline Videos 
  2. Literature
Very neat solution for flexible power deployment in the DC.



Thursday, September 27, 2012

Qsub and Interactive X Windows on CentOS

If you are using Torque as a submission agent and wish to use interactive X windows on your allocated node, you can do the following.

Step 1:
Launch vncserver on the head node. For more information, see Using VNC Server on CentOS with Windows VNC Viewer.

Step 2: Make sure you have configured the server to allow X11 forwarding over SSH. In /etc/ssh/sshd_config:
...
X11Forwarding yes
X11UseLocalhost yes
...
Restart the sshd services.
# service sshd restart


Step 3: Use the qsub command to launch an interactive session. The "-X" flag enables X11 forwarding.
$ qsub -q myqueue -l nodes=1:ppn=8 -I -X


Wednesday, September 26, 2012

Total Reconfiguration of GPFS from scratch again

If you have messed things up in the configuration and wish to redo the entire setup again, you have to do the following. From our training at GPFS, there are 2 advisable ways. The first one is the recommended way. The latter one is the “nuclear” option.

For more information, see  Total Reconfiguration of GPFS from scratch again

Sunday, September 23, 2012

Using Chage to manage password expiration and aging

As administrators, we can use tools like chage to help manage the /etc/shadow information. /etc/shadow contains entries such as

myuseid:$xxxxxxxxxeeerrrrr:15607:0:900:10:0::

Column 1 - User ID
Column 2 - Encrypted password
Column 3 - Number of days since January 1st, 1970 when the password was last changed
Column 4 - Minimum number of days between password changes
Column 5 - Maximum number of days during which a password is valid
Column 6 - Number of days of warning before a password change is required
Column 7 - Number of days of inactivity after a password has expired before the account is locked
Column 8 - Date on which the account will no longer be accessible, expressed as the number of days since January 1, 1970, or in the format YYYY-MM-DD
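
The colon-separated fields are easy to pick apart with awk. The sketch below uses the made-up sample entry from above (not a real password hash):

```shell
# Split a sample /etc/shadow entry into its named fields
entry='myuseid:$xxxxxxxxxeeerrrrr:15607:0:900:10:0::'
echo "$entry" | awk -F: '{
    print "user      =", $1
    print "lastchg   =", $3, "(days since 1970-01-01)"
    print "min days  =", $4
    print "max days  =", $5
    print "warn days =", $6
    print "inactive  =", $7
}'
```

Running the same one-liner against a real line from /etc/shadow (as root) shows the values that chage manipulates.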

It is best if you can use the chage command. For example, you can use the command for the following.

1. List the password details
# chage --list  username

Last password change                                    : Feb 03, 2012
Password expires                                        : Jul 22, 2013
Password inactive                                       : Jul 22, 2013
Account expires                                         : never
Minimum number of days between password change          : 0
Maximum number of days between password change          : 60
Number of days of warning before password expires       : 10

2. Disable password aging for a user account
# chage -m 0 -M 99999 -I -1 -E -1 username

-m 0 (Min number of  days between password change to 0)
-M 99999 (Max Number of days between password change to 99999)
-I -1 (Set "Password Inactive" to never)
-E -1 (Set "Account expires" to never)
Last password change                                    : Feb 03, 2012
Password expires                                        : never
Password inactive                                       : never
Account expires                                         : never
Minimum number of days between password change          : 0
Maximum number of days between password change          : 99999
Number of days of warning before password expires       : 10

For more complete information, see
  1.  6.6. Linux Password & Shadow File Formats
  2. 7 Examples to Manage Linux Password Expiration and Aging Using chage

Monday, September 17, 2012

Singapore Infocomm Resource Marketplace

Some information on Singapore Cloud Initiatives

Cloud Computing initiatives and policy directions in Asia during CloudAsia 2012 by Assistant CEO of iDA

Sharing on Cloud Computing experience by Mr Loo Kian Wai of Diners World Travel during the Singapore Cloud Forum seminar on 27 July 2012.

To view the videos,

Thursday, September 13, 2012

Resolving error libgthread-2.0.so.0 on Schrodinger

While executing maestro Schrodinger on CentOS 6, I encountered the following error

/usr/local/schrodinger/maestro-v90211/bin/Linux-x86/maestro: error while loading shared libraries: libgthread-2.0.so.0: cannot open shared object file: No such file or directory
Maestro: Could not load shared library


You have to install glib2 libraries first
# yum install glib2

Dependencies Resolved

===============================================================================================
 Package                 Arch               Version                     Repository        Size
===============================================================================================

Updating:
 glib2                   x86_64             2.22.5-7.el6                base             1.1 M

Updating for dependencies:
 glib2-devel             x86_64             2.22.5-7.el6                base             1.3 M

Transaction Summary
===============================================================================================
Upgrade       2 Package(s)

Total download size: 2.4 M
Is this ok [y/N]: Y

Now install libgthread-2.0.so.0 and its dependencies

# yum install libgthread-2.0.so.0

Dependencies Resolved

===============================================================================================
 Package             Arch               Version                       Repository          Size
===============================================================================================

Installing:
 glib2               i686               2.22.5-7.el6                  base               1.1 M
Installing for dependencies:
 gamin               i686               0.1.10-9.el6                  base               120 k

Transaction Summary
===============================================================================================
Install       2 Package(s)

Total download size: 1.2 M
Installed size: 5.4 M
Is this ok [y/N]: N
Exiting on user Command 

Wednesday, September 12, 2012

Udev Rules Documentation

Taken from the documentation Writing udev rules

Udev is targeted to provide a userspace solution for a dynamic /dev directory with persistent device naming. If you need to lock down your device naming, do take a look at this good writeup.
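
For example, a persistent-naming rule of the kind discussed in the udev posts above might look like this (the MAC address is a placeholder):

```
# /etc/udev/rules.d/70-persistent-net.rules (sketch)
# Always name the NIC with this MAC address "eth0"
SUBSYSTEM=="net", ACTION=="add", ATTR{address}=="00:11:22:33:44:55", NAME="eth0"
```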

Tuesday, September 11, 2012

Unable to open /dev/sdb with fdisk

Fdisk is a menu-driven program for the creation and manipulation of partition tables. The device is usually something like /dev/sda or /dev/sdb; a device name refers to an entire disk, while a name with a trailing number refers to a partition of that device. For example, /dev/sda1 refers to the first partition of the first device.

If you issue the command and receive the message "Unable to open /dev/sdb":
# fdisk /dev/sdb

Unable to open /dev/sdb

Linux is unable to locate or find the device. One way to verify this is to list the devices fdisk can see. In the example below, the partition has already been created.

# fdisk -l

Disk /dev/sdb: 2997.4 GB, 2997426536960 bytes
255 heads, 63 sectors/track, 364416 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1               1      267349  2147480811   83  Linux

WARNING: The size of this disk is 3.0 TB (2997400633344 bytes).
DOS partition table format can not be used on drives for volumes
larger than 2.2 TB (2199023255040 bytes). Use parted(1) and GUID
partition table format (GPT).

Once you have verified the presence of the device, run fdisk /dev/sdb again.

Friday, September 7, 2012

Adding time for dd to test and analyse read and write performance

This is an extension of a previous blog entry, Using dd to test and analyse read and write performance. If you add time to dd:

# time dd if=/dev/zero of=/home/myaccount/outfile bs=4M count=4096

4096+0 records in
4096+0 records out
17179869184 bytes (17 GB) copied, 136.832 seconds, 126 MB/s
real    2m16.834s
user    0m0.017s
sys     0m12.670s
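
To experiment without writing 17 GB, the same measurement works at a smaller scale. The sketch below writes 16 MB of zeros to a temporary file and cleans up afterwards (the file path is arbitrary):

```shell
# Write 16 MB of zeros, timing the run; then remove the test file
tmpfile=$(mktemp)
time dd if=/dev/zero of="$tmpfile" bs=1M count=16
rm -f "$tmpfile"
```

Note that writing to a file in cache can inflate the apparent speed for small sizes; the original 17 GB run is large enough to defeat most caches.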

Thursday, September 6, 2012

Patch Release by Oracle for Zero day Vulnerability Alert

Oracle has posted the update release information at the links below:
  1. Update release notes  - http://www.oracle.com/technetwork/java/javase/7u7-relnotes-1835816.html
  2. Description of alert (Oracle Security Alert for CVE-2012-4681) and patch information - http://www.oracle.com/technetwork/topics/security/alert-cve-2012-4681-1835715.html
For more information:
  1. The Advisory for Security Alert CVE-2012-4681 is located at http://www.oracle.com/technetwork/topics/security/alert-cve-2012-4681-1835715.html
  2. Users can verify that they’re running the most recent version of Java by visiting: http://java.com/en/download/installed.jsp 
  3. Instructions on removing older (and less secure) versions of Java can be found at http://java.com/en/download/faq/remove_olderversions.xml 

Tuesday, September 4, 2012

Adding new LUN dynamically in CentOS

After adding the new LUN(s) from SAN to CentOS Linux,

Step 1: Run the command "rescan-scsi-bus.sh" to dynamically detect and activate the new LUN. To understand the "rescan-scsi-bus.sh" utility, see Scanning for SCSI new devices dynamically on CentOS

# /usr/bin/rescan-scsi-bus.sh -l

Host adapter 0 (aacraid) found.
Host adapter 1 (ata_piix) found.
Host adapter 2 (ata_piix) found.
Host adapter 3 (qla2xxx) found.
Host adapter 4 (qla2xxx) found.
Scanning SCSI subsystem for new devices
Scanning host 0 for  SCSI target IDs  0 1 2 3 4 5 6 7, LUNs Scanning 
0 1 2 3 4 5 6 7
Scanning for device 0 0 0 0 ...
OLD: Host: scsi0 Channel: 00 Id: 00 Lun: 00
      Vendor: ServeRA  Model: A                Rev: V1.0 
      Type:   Direct-Access                    ANSI SCSI revision: 02
Scanning for device 0 1 0 0 ...
OLD: Host: scsi0 Channel: 01 Id: 00 Lun: 00
      Vendor: IBM-ESXS Model: VPA146C3-ETS10 N Rev: A650
      Type:   Direct-Access                    ANSI SCSI revision: 05
.....
.....
..... 
0 new device(s) found.
0 device(s) removed.
Since LUNs are not physical disks, there may not be any "new devices" detected; instead, look at the middle of the output: "Scanning host 0 for SCSI target IDs 0 1 2 3 4 5 6 7, LUNs 0 1 2 3 4 5 6 7"


Step 2: Verify the LUN has been added using the command lsscsi. If you do not have the utility, do a yum install

# yum install lsscsi

# lsscsi

..... 
.....
[3:0:0:0]    disk    IBM      1814      FAStT  0916  /dev/sdb
[3:0:0:1]    disk    IBM      1814      FAStT  0916  /dev/sdc
[3:0:0:2]    disk    IBM      1814      FAStT  0916  /dev/sdd
[3:0:0:3]    disk    IBM      1814      FAStT  0916  /dev/sde
[3:0:0:4]    disk    IBM      1814      FAStT  0916  /dev/sdf
[3:0:0:5]    disk    IBM      1814      FAStT  0916  /dev/sdg
[3:0:0:6]    disk    IBM      1814      FAStT  0916  /dev/sdn
.....
.....

Alternatively, you can use the command fdisk -l to check whether your LUN is represented in the /dev directory like /dev/sd*
# fdisk -l

.....
.....
Disk /dev/sdr: 536.8 GB, 536870912000 bytes
255 heads, 63 sectors/track, 65270 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/sdr1               1       65270   524281243+  83  Linux

Disk /dev/sds: 536.8 GB, 536870912000 bytes
255 heads, 63 sectors/track, 65270 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/sds1               1       65270   524281243+  83  Linux