Saturday, May 31, 2014

Installing and Compiling Mankai Common Lisp on CentOS 5

ManKai Common Lisp (MKCL) aims to be a full implementation of the Common Lisp language in compliance with the ANSI X3J13 Common Lisp standard.

MKCL supports the operating systems Linux and Microsoft Windows, running on top of Intel x86 or AMD64 compatible processors.

For more information on the installation of MKCL, do look at http://common-lisp.net/project/mkcl/

Do note that starting with MKCL 1.1.0, MKCL requires the platform-supplied GMP library, and the GMP development package when compiling MKCL from source.

The steps for installing GMP are taken from the section Compiling GNU 4.7.2 on CentOS 5.

GMP
  Download the following prerequisite libraries from ftp://gcc.gnu.org/pub/gcc/infrastructure/
1. Install gmp-4.3.2

# bunzip2 gmp-4.3.2.tar.bz2
# tar -xvf gmp-4.3.2.tar
# cd gmp-4.3.2
# ./configure --prefix=/usr/local/gmp-4.3.2
# make
# make install
Update your .bashrc
.....
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/local/gmp-4.3.2/lib
.....

Compiling MKCL
# tar xvf mkcl-1.1.8.tar.gz
# cd mkcl-1.1.8
# ./configure
# make -j 8 install
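If configure cannot find the GMP installed under the non-default prefix above, you can point it there explicitly. This is only a sketch, assuming the /usr/local/gmp-4.3.2 prefix from the earlier step; CPPFLAGS and LDFLAGS are standard autoconf variables, not MKCL-specific options.

```shell
# Build MKCL against GMP installed under a non-default prefix.
# Assumption: GMP was installed to /usr/local/gmp-4.3.2 as shown above.
GMP_PREFIX=/usr/local/gmp-4.3.2
export LD_LIBRARY_PATH=${GMP_PREFIX}/lib:${LD_LIBRARY_PATH}
./configure CPPFLAGS="-I${GMP_PREFIX}/include" LDFLAGS="-L${GMP_PREFIX}/lib"
make -j 8
make install
```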

Thursday, May 29, 2014

Install Adobe Reader 9 on CentOS 6.5

1. Download Adobe Reader 9 (acroread) from the Adobe site
# wget http://ardownload.adobe.com/pub/adobe/reader/unix/9.x/9.5.5/enu/AdbeRdr9.5.5-1_i486linux_enu.rpm

2. Install related packages used by Adobe Reader (acroread)
# yum install nspluginwrapper.i686 libcanberra-gtk2.i686 gtk2-engines.i686 PackageKit-gtk-module.i686

3. Finally install Adobe Reader (acroread)
# yum localinstall AdbeRdr9.5.5-1_i486linux_enu.rpm

4. Test
# acroread

Monday, May 26, 2014

MVAPICH - Announcing the release of MVAPICH2 2.0rc2 and MVAPICH2-X 2.0rc2

The MVAPICH team is pleased to announce the release of MVAPICH2 2.0rc2 and MVAPICH2-X 2.0rc2 (Hybrid MPI+PGAS (OpenSHMEM) with Unified Communication Runtime).

Features, Enhancements, and Bug Fixes for MVAPICH2 2.0rc2 (since
MVAPICH2 2.0rc1 release) are listed here.

* Features and Enhancements (since 2.0rc1):
    - CMA support is now enabled by default
    - Optimization of collectives with CMA support
    - RMA optimizations for shared memory and atomic operations
    - Tuning RGET and Atomics operations
    - Tuning RDMA FP-based communication 
    - MPI-T support for additional performance and control variables
    - The --enable-mpit-pvars=yes configuration option will now
      enable only MVAPICH2 specific variables
    - Large message transfer support for PSM interface
    - Optimization of collectives for PSM interface
    - Updated to hwloc v1.9

* Bug-Fixes (since 2.0rc1):
    - Fix multicast hang when there is a single process on one node
      and more than one process on other nodes
    - Fix non-power-of-two usage of scatter-doubling-allgather algorithm
    - Fix for bcastzero type hang during finalize
    - Enhanced handling of failures in RDMA_CM based
      connection establishment
    - Fix for a hang in finalize when using RDMA_CM
    - Finish receive request when RDMA READ completes in RGET protocol
    - Always use direct RDMA when flush is used
    - Fix compilation error with --enable-g=all in PSM interface
    - Fix warnings and memory leaks

MVAPICH2-X 2.0rc2 software package provides support for hybrid MPI+PGAS (UPC and OpenSHMEM) programming models with unified communication runtime for emerging exascale systems. This software package provides flexibility for users to write applications using the following programming models with a unified communication runtime: MPI, MPI+OpenMP, pure UPC, and pure OpenSHMEM programs as well as hybrid MPI(+OpenMP) + PGAS (UPC and OpenSHMEM) programs.

Features and enhancements for MVAPICH2-X 2.0rc2 (since MVAPICH2-X 2.0rc1) are as follows:

* Features and Enhancements (since 2.0rc1):
    - MPI Features
        - Based on MVAPICH2 2.0rc2 (OFA-IB-CH3 interface)

    - Unified Runtime Features
        - Based on MVAPICH2 2.0rc2 (OFA-IB-CH3 interface). All the
          runtime features enabled by default in OFA-IB-CH3 interface
          of MVAPICH2 2.0rc2 are available in MVAPICH2-X 2.0rc2

For downloading MVAPICH2 2.0rc2 and MVAPICH2-X 2.0rc2, associated user guides, quick start guide, and accessing the SVN, please visit the following URL: http://mvapich.cse.ohio-state.edu

Sunday, May 25, 2014

Quick Check to identify broken disks for NetApp ONTAP 8.2P2 Cluster-Mode

1. Identify the Cluster Nodes
My-NetApp-Cluster::> cluster show

Node                  Health  Eligibility
--------------------- ------- ------------
cluster1-01      true    true
cluster1-02      true    true
cluster1-03      true    true
cluster1-04      true    true
4 entries were displayed.

2. Check for Broken Disk
My-NetApp-Cluster::> run -node cluster1-01 vol status -f
RAID Disk Device   HA  SHELF BAY CHAN Pool Type  RPM  Used (MB/blks)    Phys (MB/blks)
--------- ------   ------------- ---- ---- ---- ----- --------------    --------------
failed   0a.00.12 0a    0   12  SA:A   -  BSAS  7200 1695466/3472315904 1695759/3472914816 
failed   0b.00.9  0b    0   9   SA:B   -  BSAS  7200 1695466/3472315904 1695759/3472914816 
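The clustershell can also list failed disks directly, without dropping into the nodeshell. A sketch only, assuming the broken container type is available on your ONTAP release:

```
My-NetApp-Cluster::> storage disk show -container-type broken
```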

3. Get System Information

My-NetApp-Cluster::> run -node cluster1-01 sysconfig -a
NetApp Release 8.2P2 Cluster-Mode: Sat Jul 20 20:31:47 PDT 2013
.....
.....
.....

Saturday, May 24, 2014

Assigning ownership to disks for NetApp Storage

After replacing broken disks on a NetApp storage system, if you want to manually assign ownership to the newly replaced but unassigned disks, you can use the following commands

1. Show Storage Ownership Information

My-NetApp-Cluster::> storage disk show -spare
Original Owner: cluster1-01
  Checksum Compatibility: block
                                                            Usable Physical
    Disk            HA Shelf Bay Chan   Pool  Type    RPM     Size     Size Owner
    --------------- ------------ ---- ------ ----- ------ -------- -------- --------
    cluster1-01:0b.00.9
                    0b     0   9    B  Pool0  BSAS   7200   1.62TB   1.62TB cluster1-01
Original Owner: cluster1-02
  Checksum Compatibility: block
                                                            Usable Physical
    Disk            HA Shelf Bay Chan   Pool  Type    RPM     Size     Size Owner
    --------------- ------------ ---- ------ ----- ------ -------- -------- --------
    cluster1-02:0a.00.7
                    0a     0   7    B  Pool0  BSAS   7200   1.62TB   1.62TB cluster1-02
.....
.....

2. Display all unowned disks by entering the following command:
My-NetApp-Cluster::> storage disk show -container-type unassigned


                     Usable           Container
Disk                   Size Shelf Bay Type        Position   Aggregate Owner
---------------- ---------- ----- --- ----------- ---------- --------- --------
cluster1-01:0b.00.9
                     1.62TB     0   9 spare       present    -         cluster1-01
cluster1-02:0a.00.7
                     1.62TB     0   7 spare       present    -         cluster1-02

3. Assign each disk by entering the following command
My-NetApp-Cluster::> storage disk assign -disk cluster1-01:0b.00.9 -owner cluster1-01
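Repeating step 2 afterwards is a quick way to confirm nothing is left unowned. A sketch using the second disk from the listing above; note that some ONTAP releases also offer storage disk assign -all true -node <node> to claim every unowned disk at once, so check your release's documentation:

```
My-NetApp-Cluster::> storage disk assign -disk cluster1-02:0a.00.7 -owner cluster1-02
My-NetApp-Cluster::> storage disk show -container-type unassigned
```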

Friday, May 23, 2014

Scalapack libraries for Intel MKL 11 not installed by default

Since MKL 11.0, the Intel ScaLAPACK libraries are not installed by default. To install them, do the following

Step 1: Select "Change components to modify"
Step 2: Select "Intel(R) Math Kernel Library 11.1 Update 2"
Step 3: Select "Cluster Support" and the ScaLAPACK libraries will be installed



Thursday, May 22, 2014

Outputting the Remote Screen and Saving It on the Windows Desktop with PuTTY

PuTTY is one of the most popular SSH clients, used especially on Windows machines to access Linux. One of its most overlooked features is that PuTTY can log screen output and save it to your Windows desktop.

Once you have selected and loaded the remote host, click Session > Logging.
  • At Session logging, select All session output
  • Under Log file name:, browse and select where you want the output.log file to be placed.


Friday, May 16, 2014

Summary of Job Scheduler Command Comparison Table

If you are using different schedulers and need to find the equivalent command for each one, do take a look at the Rosetta Stone of Workload Managers.

The table lists the most common commands, environment variables, and job specification options used by the major workload management systems: PBS/Torque, Slurm, LSF, SGE and LoadLeveler. Each of these workload managers has unique features, but the most commonly used functionality is available in all of these environments, as listed in the table.

Thursday, May 15, 2014

Nanyang Technological University bridges innovation gap with cloud computing

Taken from Networks Asia



Nanyang Technological University needed to support academic activities with scalable computing resources. It also faced the challenge of moving workloads from a private university cloud to AWS during peak times while keeping data secure. The school also had to monitor resources in both on-premise and public cloud environments.
.......
NTU, based in Singapore, also deployed hybrid cloud infrastructure to allow one contiguous network from NTU to Amazon using AWS Direct Connect. The school also attained secure data storage (NetApp Data Storage Systems) using a storage box mapped to NTU’s private storage.

By deploying the Red Hat Cloud Infrastructure, NTU increased scalability of resources without compromising security. It also achieved greater efficiency through automatic resource provisioning. The new infrastructure also allowed better use of existing resources and saved costs.

......

For more information, see Nanyang Technological University bridges innovation gap with cloud computing

Tuesday, May 13, 2014

A relook at libibverbs: Warning: RLIMIT_MEMLOCK is 32768 bytes. This will severely limit memory registrations.

There was a prior blog entry written in Oct 2009: libibverbs: Warning: RLIMIT_MEMLOCK is 32768 bytes. This will severely limit memory registrations.

I would like to add on to this entry. In Open MPI FAQ item 17, "I'm still getting errors about 'error registering openib memory'; what do I do?", the FAQ mentions the scheduler:

 Make sure that the resource manager daemons are started with unlimited memlock limits (which may involve editing the resource manager daemon startup script, or some other system-wide location that allows the resource manager daemon to get an unlimited limit of locked memory). Otherwise, jobs that are started under that resource manager will get the default locked memory limits, which are far too small for Open MPI.

The files in limits.d (or the limits.conf file) do not usually apply to resource daemons! The limits.d files usually apply only to rsh- or ssh-based logins. Hence, daemons usually inherit the system default of a maximum of 32 KB of locked memory (which then gets passed down to the MPI processes that they start). To increase this limit, you typically need to modify the daemons' startup scripts to increase the limit before they drop root privileges.

Some resource managers can limit the amount of locked memory that is made available to jobs. For example, SLURM has some fine-grained controls that allow locked memory for only SLURM jobs (i.e., the system's default is low memory lock limits, but SLURM jobs can get high memory lock limits). See these FAQ items on the SLURM web site for more details: propagating limits and using PAM.


Other related Issues

1. For Torque, you may want to tweak /etc/init.d/pbs_mom. See the blog entry Default ulimit setting in torque overrides ulimit setting

# service pbs_mom restart
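The tweak to /etc/init.d/pbs_mom amounts to raising the locked-memory limit before the daemon is launched, so the limit is inherited by every job it starts. A fragment only, as a sketch; the exact layout of the init script varies between Torque packages:

```
# In /etc/init.d/pbs_mom, inside the start section, before pbs_mom is launched:
ulimit -l unlimited    # runs as root, so the hard limit can be raised
```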

2. See also Encountering Segmentation Fault, Bus Error or No output. In that blog, you have to edit /etc/security/limits.conf

* soft memlock unlimited
* hard memlock unlimited

3. If you still have memory issues and are using Mellanox IB cards, do take a look at Registering sufficient memory for OpenIB when using Mellanox HCA

Monday, May 12, 2014

NTU wins Silver Award at the 2014 Asia Student Supercomputer Challenge (ASC14)

Taken from the SPMS Website



Congratulations to Team NTU for winning the Silver Award at the 2014 Asia Student Supercomputer Challenge (ASC14) held in Sun Yat-sen University, Guangzhou (China).

In the finals, the finalists had to build their own supercomputer system architecture within a 3000W power budget and execute a series of applications including HPL (Linpack), Quantum Espresso, LICOM, SU2, and mpiBLAST (secret application). In the code optimisation category, the 3D-EW code was executed on the world’s fastest supercomputer, Tianhe-2.

During the finals, a new Linpack record was established at 9.272TFlops by Sun Yat-sen University. It was Team NTU's maiden attempt and they were competing amongst veteran teams from China (National University of Defense Technology, Huazhong University of Science and Technology).
Team NTU was described as the 'dark horse' of this challenge during the preliminary and the final rounds, and they overcame all odds to emerge runners-up in this multinational challenge.

Sunday, May 11, 2014

Understanding the Parameters of ulimit at a Glance on CentOS

If you need to look at and understand the parameters of ulimit, just use the command

# ulimit -a
core file size          (blocks, -c) 0
data seg size           (kbytes, -d) unlimited
scheduling priority             (-e) 0
file size               (blocks, -f) unlimited
pending signals                 (-i) 311296
max locked memory       (kbytes, -l) unlimited
max memory size         (kbytes, -m) unlimited
open files                      (-n) 1024
pipe size            (512 bytes, -p) 8
POSIX message queues     (bytes, -q) 819200
real-time priority              (-r) 0
stack size              (kbytes, -s) 10240
cpu time               (seconds, -t) unlimited
max user processes              (-u) 311296
virtual memory          (kbytes, -v) unlimited
file locks                      (-x) unlimited

Look at the output and you will see that the parameters are shown within the parentheses ().

For example, if you wish to tweak the max locked memory to be unlimited, you can use -l
# ulimit -l unlimited 

Or

For example, if you wish to tweak the core file size to be unlimited, you can use -c
# ulimit -c unlimited
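Do note that changes made with ulimit apply only to the current shell and its children. A quick sketch: lower a soft limit inside a subshell so your login shell keeps its original limits (lowering is always permitted; raising a hard limit requires root):

```shell
# Lower the soft core-file-size limit in a subshell, then read it back.
(
  ulimit -S -c 0   # the soft limit can always be lowered
  ulimit -S -c     # prints: 0
)
```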

Wednesday, May 7, 2014

FireEye's unique approach to combating malware

I thought we should take a look at FireEye's approach to combating malware. NetworkWorld's Review: FireEye fights off multi-stage malware is an interesting article on FireEye. Here are some excerpts.....

FireEye takes a new approach to malware detection with its NX appliances. As this Clear Choice test shows, the FireEye device allows advanced malware to proceed – but only onto virtual machines running inside the appliance.

Conventional approaches to fighting malware have limitations in combating multi-stage malware threats. A signature-based system might detect the existence of a malware binary file, but only once it’s been reassembled on the target – and by then the target is already compromised. Newer sandbox systems stop traffic before it reaches target machines, but they may not be able to assemble and analyze all the constituent parts of a multi-stage attack. Indeed, a key step in some exploit kits is to “fingerprint” versions of the hypervisor, OS, browser, and plug-ins before deciding whether to proceed.
......
......
Virtualization is FireEye’s key differentiator. Its appliances run multiple versions of Windows OSs, browsers, and plug-ins, each in its own virtual machine. Malware actually compromises a target (virtual) machine – and then and only then does the FireEye software record a successful attack. Network managers can configure the FireEye appliance to block such attacks, preventing their spread into the enterprise.
.......
......
FireEye’s technology complements rather than replaces an intrusion detection system (IDS). Unlike an IDS or IPS, it doesn’t have a library of thousands of attack signatures. Instead, it looks for actual compromises on its virtual machines......
......
.....
The appliance’s virtual machines represent various service pack levels of Windows 7 and Windows XP, along with many combinations of browser and Adobe Flash and Microsoft Silverlight versions. FireEye wrote its own hypervisor that makes virtual machines appear to run on bare metal. That’s useful to thwart exploit kits that skip execution on machines if they detect VMware hypervisors.......

E297: Write error in swap file

If you open a file using an editor and see a quick error "E297: Write error in swap file", it is likely that your partition is full or your quota has been reached. Do the necessary checks and clean-up.
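A few quick checks, as a sketch, assuming the file you were editing lives in the current directory (the quota command is only useful where disk quotas are enabled):

```shell
df -h .                      # is the partition full?
quota -s 2>/dev/null         # has the user quota been reached?
ls -a | grep '\.sw[op]$'     # leftover vim swap files that can be removed
```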

Monday, May 5, 2014

mothur Software

The Mothur Project seeks to develop a single piece of open-source, expandable software to fill the bioinformatics needs of the microbial ecology community. mothur is the most cited bioinformatics tool for analyzing 16S rRNA gene sequences and can be easily used to analyze data generated by Sanger, PacBio, IonTorrent, 454 and Illumina (HiSeq/MiSeq).

Installation is a total breeze. First, take a look at the Download Mothur page. There is a CentOS executable there, including the GUI.

Make a new directory and unzip Mothur.cen_64.zip
# mkdir mothur-1.33.3
# cd mothur-1.33.3
# unzip Mothur.cen_64.zip

Make a new directory and unzip MothurGUI.cen_64.zip
# mkdir mothur-GUI
# cd mothur-GUI
# unzip MothurGUI.cen_64.zip