The Switcher Android Trojan uses infected Android devices to attack wireless routers by performing brute-force attacks on the routers' admin web interfaces. If the attacks succeed, Switcher hijacks the Domain Name System (DNS) by changing the IP addresses of the DNS servers in the router settings, and then reroutes all DNS queries to the attackers' servers. As a result, Switcher is able to redirect all connected users to malicious IP addresses when they enter legitimate domain names, thereby exposing them to a broad range of attacks including phishing and malware infection.
There is currently no indication of Switcher infection in Singapore. However, Singapore users should nevertheless adopt the necessary preventive measures to avoid potential infection.
Saturday, December 31, 2016
Tuesday, December 20, 2016
Compiling glibc-2.14 on CentOS 6
Step 1: Download glibc-2.14 from GNU Site
# wget http://ftp.gnu.org/gnu/glibc/glibc-2.14.tar.gz
Step 2: Untar and Preparation
# tar zxvf glibc-2.14.tar.gz
# cd glibc-2.14
# mkdir build
# cd build
Step 3: Compile and install
# ../configure --prefix=/usr/local/glibc-2.14
# make -j8
# make install
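To verify the new glibc without touching the system one, you can execute the installed libc directly (it prints its version banner), and run a single program against it by invoking the new dynamic loader explicitly. This is a minimal sketch; the program path below is just a placeholder.
# /usr/local/glibc-2.14/lib/libc.so.6
# /usr/local/glibc-2.14/lib/ld-2.14.so --library-path /usr/local/glibc-2.14/lib:/lib64:/usr/lib64 /path/to/your_program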
Friday, December 2, 2016
Error polling HP CQ with status WORK REQUEST FLUSHED ERROR status on LSF Platform
I was encountering "Error polling HP CQ with status WORK REQUEST FLUSHED ERROR status" during OpenMPI runs, and it was occurring randomly.
I suspected it was due to a node issue. I checked the LSF log /opt/lsf/log/sbatchd.log.comp001, and it is definitely an authentication issue with AD (I'm using Centrify):
acctMapTo: No valid user name found for job 149044, userName(mr_x) failed:Success
runEexec: getOSUid_() failed. Bad user ID
I did a
$ badmin hclose comp001
and then restarted the Centrify services. Alternatively, you can reboot the node if you want a clean start.
OpenMPI jobs could then run again.
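Since the host was closed with badmin hclose above, remember to reopen it once authentication is healthy again so that LSF resumes dispatching jobs to it:
$ badmin hopen comp001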
Thursday, November 3, 2016
IBM Platform Cluster Manager Community Edition
IBM Platform Cluster Manager Community Edition has been released at no charge.
To download, go here.
Platform Cluster Manager Community Edition is easy-to-use, powerful cluster management software for technical computing users. It delivers a comprehensive set of functions to help manage hardware and software from the infrastructure level. It automates the deployment of the operating system and software components, and complex activities, such as application cluster creation and maintenance of a system.
The community edition offering of Platform Cluster Manager uses a centralized user interface from which system administrators can manage a complex cluster as a single system. It offers the flexibility for users to add customized features based on the specific requirements of their environment. It also provides a kit framework for easy software deployment and the ability to set up a multitenant, multi-cluster environment.
Supported Platform
Management Node | Compute Node
CentOS 6.6 | CentOS 6.6, CentOS 6.5
RHEL 6.7 | RHEL 6.7, RHEL 6.6, RHEL 6.5, RHEL 5.11; CentOS 6.6, CentOS 6.5, CentOS 5.11; RHELSC 6.6, RHELSC 6.5, RHELSC 5.11
RHEL 7.1 | RHEL 7.1, RHEL 7.0, RHEL 6.6, RHEL 6.5, RHEL 5.11; CentOS 7.0, CentOS 6.6, CentOS 6.5, CentOS 5.11; RHELSC 7.0, RHELSC 6.6, RHELSC 6.5
For more information, see IBM Platform Cluster Manager Community Edition
Tuesday, October 25, 2016
Kernel Local Privilege Escalation - CVE-2016-5195
Taken from RedHat (https://access.redhat.com/security/vulnerabilities/2706661)
Background Information
A race condition was found in the way the Linux kernel's memory subsystem handled the copy-on-write (COW) breakage of private read-only memory mappings. An unprivileged local user could use this flaw to gain write access to otherwise read-only memory mappings and thus increase their privileges on the system.
This could be abused by an attacker to modify existing setuid files with instructions to elevate privileges. An exploit using this technique has been found in the wild. This flaw affects most modern Linux distributions.
Red Hat Product Security has rated this update as having a security impact of Important.
Impacted Products:
The following Red Hat Product versions are impacted:
• Red Hat Enterprise Linux 5
• Red Hat Enterprise Linux 6
• Red Hat Enterprise Linux 7
• Red Hat Enterprise MRG 2
• Red Hat Openshift Online v2
Attack Description and Impact: This flaw allows an attacker with a local system account to modify on-disk binaries, bypassing the standard permission mechanisms that would prevent modification without an appropriate permission set. This is achieved by racing the madvise(MADV_DONTNEED) system call while having the page of the executable mmapped in memory.
Take Action: All Red Hat customers running the affected versions of the kernel are strongly recommended to update the kernel as soon as patches are available. Details about impacted packages as well as recommended mitigation are noted below. A system reboot is required in order for the kernel update to be applied.
Mitigation: Please reference bug 1384344 - https://bugzilla.redhat.com/show_bug.cgi?id=1384344#c13 for detailed mitigation steps.
Updates for Affected Products:
A kpatch for customers running Red Hat Enterprise Linux 7.2 or greater will be available. Please open a support case to gain access to the kpatch.
For more details about what a kpatch is: Is live kernel patching (kpatch) supported in RHEL 7? - please refer to https://access.redhat.com/solutions/2206511
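A quick way to check whether the running kernel already contains the fix (a general rpm check, not part of the Red Hat advisory) is to search the installed kernel package's changelog for the CVE; if grep prints nothing, the running kernel predates the fix.
# uname -r
# rpm -q --changelog kernel-$(uname -r) | grep -i 'CVE-2016-5195'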
Monday, October 17, 2016
Offline Nodes in MOAB
Change the State of MOAB Client Nodes
To offline the nodes
# mnodectl -m state=drained node1
To flush the nodes
# mnodectl -m state=flush node1
To reserve the nodes
# mnodectl -m state=reserved node1
To delete nodes
# mnodectl -d node1
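To verify that the node state actually changed after one of the commands above, the usual Moab diagnostics can be used; a minimal sketch, using the same node name as in the examples:
# checknode node1
# mdiag -n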
Friday, October 14, 2016
LAMMPS Tools and Packmol with Intel Fortran
PACKMOL information can be obtained from http://www.ime.unicamp.br/~martinez/packmol/userguide.shtml#conv
Installation instructions can be found at http://www.ime.unicamp.br/~martinez/packmol/userguide.shtml#comp
1. Compile Packmol with Intel Fortran
# tar -zxvf packmol.tar.gz
# cd packmol
# ./configure ifort
# make
2. LAMMPS Tools
# git clone https://github.com/jdevemy/lammps-tools.git
# cd lammps-tools
# python setup.py build
# sudo python setup.py install
3. Make sure your Python has the modules used by create_conf (sys, os, logging, argparse, math, random). 4. Make sure Python can find the lammps-tools libraries (if you installed lammps-tools to a custom location):
# export PYTHONPATH=/home/user1/Downloads/lammps-tools-master/lib
# ./create_conf
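A quick sanity check for step 3, confirming that the interpreter can import all the modules create_conf needs:
# python -c "import sys, os, logging, argparse, math, random; print('modules OK')"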
Wednesday, October 12, 2016
Blank Screen with VNCServers on CentOS 6
I was getting a blank screen when I launched a vncserver session, and my /home/user1/.vnc/xstartup
is like this:
#!/bin/sh
[ -r /etc/sysconfig/i18n ] && . /etc/sysconfig/i18n
export LANG
export SYSFONT
vncconfig -iconic &
unset SESSION_MANAGER
unset DBUS_SESSION_BUS_ADDRESS
OS=`uname -s`
if [ $OS = 'Linux' ]; then
  case "$WINDOWMANAGER" in
    *gnome*)
      if [ -e /etc/SuSE-release ]; then
        PATH=$PATH:/opt/gnome/bin
        export PATH
      fi
      ;;
  esac
fi
if [ -x /etc/X11/xinit/xinitrc ]; then
  exec /etc/X11/xinit/xinitrc
fi
if [ -f /etc/X11/xinit/xinitrc ]; then
  exec sh /etc/X11/xinit/xinitrc
fi
[ -r $HOME/.Xresources ] && xrdb $HOME/.Xresources
xsetroot -solid grey
xterm -geometry 80x24+10+10 -ls -title "$VNCDESKTOP Desktop" &
gnome-session &
After I modified the xstartup as follows, it worked!
#!/bin/sh
# Uncomment the following two lines for normal desktop:
# unset SESSION_MANAGER
# exec /etc/X11/xinit/xinitrc
[ -x /etc/vnc/xstartup ] && exec /etc/vnc/xstartup
[ -r $HOME/.Xresources ] && xrdb $HOME/.Xresources
xsetroot -solid grey
vncconfig -iconic &
xterm -geometry 80x24+10+10 -ls -title "$VNCDESKTOP Desktop" &
gnome-session &
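For the new xstartup to take effect, kill and restart the VNC session; a minimal sketch, assuming the session runs on display :1:
$ vncserver -kill :1
$ vncserver :1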
Tuesday, October 11, 2016
Resolve Leap Second Issues in Red Hat Enterprise Linux
Taken from Resolve Leap Second Issues in Red Hat Enterprise Linux
Leap seconds are a periodic one-second adjustment of Coordinated Universal Time (UTC) in order to keep a system's time of day close to the mean solar time. However, the Earth's rotation speed varies in response to climatic and geological events, and due to this, UTC leap seconds are irregularly spaced and unpredictable.
Upcoming Leap Second Events:
The next leap second will occur on 2016 December 31, 23h 59m 60s UTC.
Environment:
Red Hat Enterprise Linux versions 4
Red Hat Enterprise Linux versions 5
Red Hat Enterprise Linux versions 6
Red Hat Enterprise Linux versions 7
Scope:
Customers running highly time-sensitive or un-patched RHEL servers.
Severity:
The severity depends on how far behind the customer is on updating RHEL and how sensitive their operations are to time adjustments. Some customers will just appreciate the news. Others running un-patched servers may experience kernel hangs.
Description:
Another leap second will be added on December 31, 2016.
Customers running RHEL servers that are completely patched and running NTP should not be concerned. (Applications should be fine, too, but it is always best to check with one's vendors.)
Customers running completely patched RHEL servers but not NTP will find their systems' times off by 1 second. Customers will need to manually correct that.
Customers running un-patched servers that cannot update their kernel, ntp and tzdata packages to at least the latest versions listed in the below document's "Known Issues" section's links should contact our Support Center for further assistance.
Resource:
Resolve Leap Second Issues in Red Hat Enterprise Linux: https://access.redhat.com/articles/15145#event
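As a quick local check (not part of the Red Hat article), you can list the versions of the relevant packages and confirm whether ntpd is running on a RHEL/CentOS 6-style system:
# rpm -q kernel ntp tzdata
# service ntpd status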
Monday, October 3, 2016
Compiling MEEP with Intel-15.0.6, Intel-MPI 5.0.3 and HDF5-1.8.17
Meep (or MEEP) is a free finite-difference time-domain (FDTD) simulation software package developed at MIT to model electromagnetic systems, along with the MPB eigenmode package. The latest official version is 1.3 and can be found at the Download Page for Meep.
Before you compile Meep, you need to first compile the libctl library. Compiling the libctl library is quite straightforward. After downloading:
Step 1: Compiling libctl-3.2.1
# tar -zxvf libctl-3.2.1.tar.gz
# cd libctl-3.2.1
# ./configure --prefix=/usr/local/libctl-3.2.1
# make -j8
# make install
Step 2: Other prerequisites include guile and guile-devel. Do make sure you install these two packages, which can be done with:
# yum install guile guile-devel
Step 3: Prepare Intel Compilers and Intel MPI environment
$ vim .bashrc
source /usr/local/intel_2015/bin/compilervars.sh intel64
source /usr/local/intel_2015/impi/5.0.3.049/bin64/mpivars.sh intel64
source /usr/local/intel_2015/mkl/bin/mklvars.sh intel64
export CC=icc
export CXX=icpc
export F77=ifort
export MPICC=mpicc
export MPICXX=mpiicpc
export CFLAGS="-O3 -xHost -fno-alias -align"
export FFLAGS="-O3 -xHost -fno-alias -align"
export CXXFLAGS="-O3 -xHost -fno-alias -align"
export FFlags="-I/usr/local/intel_2015/impi/5.0.3.049/include64 -L/usr/local/intel_2015/impi/5.0.3.049/lib64"
Step 4: Compiling hdf5-1.8.17 See Compiling HDF5-1.8.17 with Intel-15.0.6 and Intel-MPI-5.0.6
Step 5: Compiling MEEP-1.3
$ ./configure --prefix=/usr/local/meep-1.3.1_impi-5.0.3 --with-mpi \
  --with-libctl="/usr/local/libctl-3.2.1/share/libctl" \
  LDFLAGS="-L/usr/local/libctl-3.2.1/lib -L/usr/local/hdf5-1.8.17/lib" \
  CPPFLAGS="-I/usr/local/libctl-3.2.1/include -I/usr/local/hdf5-1.8.17/include"
$ make -j 12
$ make install
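As a sanity check, you can confirm that the installed binary resolves the HDF5 and Intel MPI libraries sourced above; the binary path here is an assumption based on the prefix used in the configure line:
$ ldd /usr/local/meep-1.3.1_impi-5.0.3/bin/meep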
Friday, September 30, 2016
Compiling HDF5-1.8.17 with Intel-15.0.6 and Intel-MPI-5.0.6
Step 1: Preparing the prerequisites
$ vim .bashrc
source /usr/local/intel_2015/bin/compilervars.sh intel64
source /usr/local/intel_2015/impi/5.0.3.049/bin64/mpivars.sh intel64
source /usr/local/intel_2015/mkl/bin/mklvars.sh intel64
export CC=icc
export CXX=icpc
export F77=ifort
export MPICC=mpicc
export MPICXX=mpiicpc
export CFLAGS="-O3 -xHost -fno-alias -align"
export FFLAGS="-O3 -xHost -fno-alias -align"
export CXXFLAGS="-O3 -xHost -fno-alias -align"
export FFlags="-I/usr/local/intel_2015/impi/5.0.3.049/include64 -L/usr/local/intel_2015/impi/5.0.3.049/lib64"
Step 2: Compile zlib-1.2.8. See Compile zlib-1.2.8 with Intel-15.0.6.
Step 3: Configure the HDF5:
$ tar -zxvf hdf5-1.8.17.tar.gz
$ cd hdf5-1.8.17
$ ./configure --prefix=/usr/local/hdf5-1.8.17 --enable-fortran --enable-cxx
$ make
$ make check
$ make install
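To confirm how the installed HDF5 was configured (compiler, Fortran and C++ support), the h5cc wrapper can dump its build configuration; the path is assumed from the prefix above:
$ /usr/local/hdf5-1.8.17/bin/h5cc -showconfig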
Compile zlib-1.2.8 with Intel-15.0.6
Step 1: Download the zlib from HDF5 (https://support.hdfgroup.org/HDF5/release/obtain5.html)
In your .bashrc, you can
source /usr/local/intel_2015/bin/compilervars.sh intel64
source /usr/local/intel_2015/impi/5.0.3.049/bin64/mpivars.sh intel64
source /usr/local/intel_2015/mkl/bin/mklvars.sh intel64
export CC=icc
export CFLAGS='-O3 -xHost -ip'
At your command prompt,
$ tar -zxvf zlib-1.2.8.tar.gz
$ cd zlib-1.2.8
$ ./configure --prefix=/usr/local/zlib-1.2.8
$ make
$ make check
$ make install
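If you want the HDF5 build from the previous post to use this zlib rather than the system one, HDF5's configure accepts the install directory via --with-zlib; a sketch under that assumption (the HDF5 post above does not pass this flag):
$ ./configure --prefix=/usr/local/hdf5-1.8.17 --enable-fortran --enable-cxx --with-zlib=/usr/local/zlib-1.2.8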
Monday, September 26, 2016
Python-yaml Libraries for Python 3
If you are searching for python-yaml for Python 3 and are not able to locate the libraries, you might want to try:
# ./pip search yaml
You should see something like pyyaml
# ./pip install --upgrade pyyaml
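To confirm the install worked, PyYAML exposes its version string; adjust the interpreter name to whichever Python 3 you installed it for:
# python3 -c "import yaml; print(yaml.__version__)"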
Friday, September 23, 2016
X Unable to launch
I was trying to launch X Windows after doing yum groupinstall "Desktop" "Desktop Platform" "General Purpose Desktop".
But I was not able to launch X. Instead I got
Fatal server error:
[ 9491.484] could not open default font 'fixed'
[ 9491.484] (EE) Please consult the CentOS support at http://wiki.centos.org/Documentation for help. (done)
[ 9491.484] (EE) Please also check the log file at "/var/log/Xorg.0.log" for additional information.
To solve this,
# yum -y install libXfont
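If the error persists after installing libXfont, the base X font packages (standard CentOS packages, not part of the original fix) may also be needed, since they provide the 'fixed' font:
# yum -y install xorg-x11-fonts-misc xorg-x11-fonts-Type1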
Friday, September 16, 2016
Ctrl-C caught... cleaning up processes
We were submitting a NAMD job and it failed with "Ctrl-C caught... cleaning up processes", together with the mpiexec errors shown below. To solve the issue, first check whether there are Windows-format (CRLF) traces in the input files and strip them:
$ dos2unix inputfile
[mpiexec@comp175] HYD_pmcd_pmiserv_send_signal (./pm/pmiserv/pmiserv_cb.c:239): assert (!closed) failed
[mpiexec@comp175] ui_cmd_cb (./pm/pmiserv/pmiserv_pmci.c:127): unable to send SIGUSR1 downstream
[mpiexec@comp175] HYDT_dmxu_poll_wait_for_event (./tools/demux/demux_poll.c:77): callback returned error status
[mpiexec@comp175] HYD_pmci_wait_for_completion (./pm/pmiserv/pmiserv_pmci.c:435): error waiting for event
[mpiexec@comp175] main (./ui/mpich/mpiexec.c:901): process manager error waiting for completion
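A quick way to spot Windows line endings before running dos2unix: file reports affected files as having "CRLF line terminators" (inputfile is the same placeholder as above).
$ file inputfile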
Monday, September 5, 2016
NetApp and OpenStack
https://netapp.github.io/openstack/
2. Continuous Operations with OpenStack’s File-Share Service (Manila)
https://www.youtube.com/watch?v=Mvhx1gRgUUI
https://www.youtube.com/watch?v=5YZPHc2y7XY
Thursday, September 1, 2016
Tuesday, August 30, 2016
Compiling NAMD-2.11 with Intel 2013-SP1 and iMPI 4.1.3 and FFTW-2.1.5
Step 1: Prepare for Environment Setup
source /usr/local/intel_2013sp1/composerxe/mkl/bin/mklvars.sh intel64
source /usr/local/intel_2013sp1/composerxe/bin/compilervars.sh intel64
source /usr/local/intel_2013sp1/impi/4.1.3.048/intel64/bin/mpivars.sh intel64
source /usr/local/intel_2013sp1/composerxe/tbb/bin/tbbvars.sh intel64
source /usr/local/intel_2013sp1/itac/8.1.4.045/intel64/bin/itacvars.sh
export CC=icc
export CXX=icpc
export F77=ifort
export F90=ifort
Step 2: Building FFTW-2.1.5 with Intel
$ wget http://www.fftw.org/fftw-2.1.5.tar.gz
$ tar -zxvf fftw-2.1.5.tar.gz
$ cd fftw-2.1.5
$ ./configure F77=ifort CC=icc CFLAGS=-O3 FFLAGS=-O3 --enable-threads --enable-float --enable-type-prefix --prefix=/usr/local/fftw-2.1.5_intel-4.1.3
$ make -j 16
$ make install
Step 3: Building CHARM-6.7.0
$ tar -zxvf NAMD_2.11_Source.tar.gz
$ export NAMD_SRC=$PWD/NAMD_2.11_Source
$ cd $NAMD_SRC
$ tar -xvf charm-6.7.0.tar
$ MPICXX=mpiicpc CXX=icpc ./build charm++ mpi-linux-x86_64 mpicxx ifort --with-production --no-shared -O3 -DCMK_OPTIMIZE=1
$ cd $NAMD_SRC
Step 4: Building TCL-8.5.9
$ wget http://www.ks.uiuc.edu/Research/namd/libraries/tcl8.5.9-linux-x86_64.tar.gz
$ wget http://www.ks.uiuc.edu/Research/namd/libraries/tcl8.5.9-linux-x86_64-threaded.tar.gz
$ tar xzf tcl8.5.9-linux-x86_64.tar.gz
$ tar xzf tcl8.5.9-linux-x86_64-threaded.tar.gz
$ mv tcl8.5.9-linux-x86_64 tcl-8.5.9
$ mv tcl8.5.9-linux-x86_64-threaded tcl-8.5.9-threaded
Step 5: Setup the CHARMBASE in $NAMD_SRC for Make.charm
# Set CHARMBASE to the top level charm directory.
# The config script will override this setting if there is a directory
# called charm-6.7.0 or charm in the NAMD base directory.
CHARMBASE = /home/user1/NAMD/NAMD_2.11_Source/charm-6.7.0
Step 6: Setup the FFTW architecture files in $NAMD_SRC/arch
$ vim $NAMD_SRC/arch/Linux-x86_64.fftw
FFTDIR=/usr/local/fftw-2.1.5_intel-4.1.3
FFTINCL=-I$(FFTDIR)/include
FFTLIB=-L$(FFTDIR)/lib -lsrfftw -lsfftw
FFTFLAGS=-DNAMD_FFTW
FFT=$(FFTINCL) $(FFTFLAGS)
Step 7: Setup the TCL architecture files in $NAMD_SRC/arch
$ vim $NAMD_SRC/arch/Linux-x86_64.tcl
TCLDIR=/usr/local/tcl8.5.9-threaded
TCLINCL=-I$(TCLDIR)/include
TCLLIB=-L$(TCLDIR)/lib -ltcl8.5 -ldl -lpthread
TCLFLAGS=-DNAMD_TCL
TCL=$(TCLINCL) $(TCLFLAGS)
Step 8: Setup the $NAMD_SRC/Linux-x86_64-ics-2013.arch
$ vim Linux-x86_64-ics-2013.arch
NAMD_ARCH = Linux-x86_64
CHARMARCH = mpi-linux-x86_64-ifort-mpicxx
FLOATOPTS = -O2
CXX = icpc -std=c++11
CXXOPTS = -static-intel -O2 $(FLOATOPTS)
CXXNOALIASOPTS = -O3 -fno-alias $(FLOATOPTS)
CC = icc
COPTS = -static-intel -O2 $(FLOATOPTS)
Step 9: Compile the Code.
$ ./config Linux-x86_64-ics-2013 --charm-base ./charm-6.7.0 --charm-arch mpi-linux-x86_64-ifort-mpicxx
$ cd Linux-x86_64-ics-2013
$ make -j 16
You should see the namd2 executable
Step 10: MPIRUN
$ mpirun -np 32 -machinefile $MACHINEFILE namd2 something.conf > job$LSB_JOBID.log
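How $MACHINEFILE gets generated depends on your scheduler. Under LSF (used elsewhere on this blog), one possible sketch is to build it from $LSB_HOSTS inside the job script; this is an assumption, not part of the original recipe:
$ export MACHINEFILE=machinefile.$LSB_JOBID
$ echo $LSB_HOSTS | tr ' ' '\n' > $MACHINEFILE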
Tuesday, August 23, 2016
Installing NAMD 2.9 with Intel Cluster Studio 2013 with IB support
There is a very good article on installing NAMD 2.9 with Intel Cluster Studio 2013 with IB support. You may want to take a look at:
How to install NAMD 2.9 with Intel Cluster Studio 2013 on Intel Sandy Bridge architecture and IB support
Saturday, August 20, 2016
List of mkl_solver* libraries are deprecated libraries since version 10.2 Update 2
Taken from mkl_solver* libraries are deprecated libraries since version 10.2 Update 2
Since version 10.2 update 2 of Intel® MKL, all components of the Direct Solver (PARDISO and DSS), the Trust-Region (TR) Solver, the Iterative Sparse Solver (ISS) and the GNU Multiple Precision (GMP) functions were moved into the standard MKL libraries. The solver libraries (e.g. mkl_solver.lib and mkl_solver_sequential.lib for IA-32) are now empty and kept only for backward compatibility.
The list of deprecated libraries is the following:
Intel® MKL for Linux:
lib/32/libmkl_solver.a
lib/32/libmkl_solver_sequential.a
lib/em64t/libmkl_solver_ilp64.a
lib/em64t/libmkl_solver_ilp64_sequential.a
lib/em64t/libmkl_solver_lp64.a
lib/em64t/libmkl_solver_lp64_sequential.a
lib/ia64/libmkl_solver_ilp64.a
lib/ia64/libmkl_solver_ilp64_sequential.a
lib/ia64/libmkl_solver_lp64.a
lib/ia64/libmkl_solver_lp64_sequential.a
Therefore, the updated linking line will look like:
Linking on Intel®64:
static linking:
ifort pardiso.f -L$MKLPATH -I$MKLINCLUDE \
-Wl,--start-group \
$MKLPATH/libmkl_intel_lp64.a $MKLPATH/libmkl_intel_thread.a $MKLPATH/libmkl_core.a \
-Wl,--end-group -liomp5 -lpthread
dynamic linking:
ifort pardiso.f -L$MKLPATH -I$MKLINCLUDE \
-lmkl_intel_lp64 -lmkl_intel_thread -lmkl_core -liomp5 -lpthread
where, in these examples:
MKLPATH=$MKLROOT/lib/em64t
MKLINCLUDE=$MKLROOT/include
Thursday, August 18, 2016
Compiling CPMD-3.17.1 with Intel-13.0.1.117 and OpenMPI-1.8.3
I'm assuming you have compiled OpenMPI with the Intel Compiler. If you are not sure, you can look at the blog entry:
Compiling OpenMPI 1.6.5 with Intel 12.1.5 on CentOS 6
To get the source code from CPMD, please go to http://www.cpmd.org/
Step 1: From the CPMD Directory
cd ~/CPMD-3.13.2/SOURCE
./mkconfig.sh IFORT-AMD64-MPI > Makefile
Step 2: I'm using the CentOS 6 stock BLAS, LAPACK and ATLAS libraries. Make sure you configure the Makefile as shown below.
#--------------- Default Configuration for IFORT-AMD64-MPI ---------------
SRC = .
DEST = .
BIN = .
FFLAGS = -pc64 -tpp6 -O2 -unroll
#LFLAGS = -L. -latlas_x86_64
LFLAGS = -L/usr/lib64/atlas -llapack -lblas
CFLAGS = -O2 -Wall -m64
CPP = /lib/cpp -P -C -traditional
CPPFLAGS = -D__Linux -D__PGI -DFFT_DEFAULT -DPOINTER8 -DLINUX_IFC \
           -DPARALLEL
NOOPT_FLAG =
CC = mpicc
FC = mpif77 -c
LD = mpif77 -i-static
AR = ar
#----------------------------------------------------------------------------
Step 3: Compile CPMD
# make
If the compilation succeeds, it should generate a cpmd.x executable.
Step 4: Pathing
Make sure your $PATH reflects the location of the cpmd.x executable. It is also important to check that the libraries are properly linked to the executable:
# ldd cpmd.x
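A minimal sketch for the $PATH part, assuming cpmd.x was built in the SOURCE directory used in Step 1 (adjust to your actual build location):
$ export PATH=$HOME/CPMD-3.13.2/SOURCE:$PATH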
Step 5: Test your executable. You have to go to CPMD Consortium to download the cpmd-test.tar.gz for testing.
I/O Redirection for Bash
This is a good article on I/O Redirection. I have always been interested in redirecting stderr to stdout. So here it is: 2>&1
2>&1
# Redirects stderr to stdout.
# Error messages get sent to same place as standard output.
>>filename 2>&1
bad_command >>filename 2>&1
# Appends both stdout and stderr to the file "filename" ...
2>&1 | [command(s)]
bad_command 2>&1 | awk '{print $5}' # found
# Sends stderr through a pipe.
# |& was added to Bash 4 as an abbreviation for 2>&1 |.
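A quick worked example (out.log is just a placeholder name): run a failing command, send both stdout and stderr to the same file, then inspect it. The cat shows the ls error message, which would normally have gone to the terminal's stderr.
$ ls /nonexistent >> out.log 2>&1
$ cat out.log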
Tuesday, August 16, 2016
Enable Centrify Agent to read UID and GID from Centrify DirectManage Access Manager
We purchased Centrify Standard and set up the DirectManage Access Manager. Next we proceeded to install the client agent on the compute node.
After unpacking and installing the agent, when we do a
# getent passwd |grep kittycool
kittycool:x:1304567321211:1304567321211:kittycool:/home/kittycool:/bin/bash
kittycool:x:10001:10001:kittycool:/home/kittycool:/bin/bash
Apparently, getent passwd | grep kittycool is pulling both the Active Directory UID and the DirectManage Access Manager UID, and the two UIDs differ.
To resolve this issue, you need to specify the zone used by DirectManage Access Manager when joining the domain, so that the user's UID is picked up from the DirectManage Access Manager:
# adjoin -z cluster -u OU_Administrator staff.mycompany.com.sg -c "staff.mycompany.com.sg/HPC/Computers"
To check that it is displaying the correct UID and GID:
# getent passwd |grep kittycool
kittycool:x:10001:10001:kittycool:/home/kittycool:/bin/bash
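To double-check which zone the agent joined, the Centrify adinfo utility reports the joined domain and zone (output fields vary with the agent version):
# adinfo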
LSF retained the original Max Locked Memory and not the updated one
The value of “max locked memory” has been modified at the operating system level, but LSF still returns the original value.
Symptoms before updating max locked memory
[user1@cluster-h00 ~]$ bsub -m compute-node1 -I ulimit -a
Job <32400> is submitted to default queue.
core file size (blocks, -c) unlimited
data seg size (kbytes, -d) unlimited
scheduling priority (-e) 0
file size (blocks, -f) unlimited
pending signals (-i) 1027790
max locked memory (kbytes, -l) 64
max memory size (kbytes, -m) unlimited
open files (-n) 4096
pipe size (512 bytes, -p) 8
POSIX message queues (bytes, -q) 819200
real-time priority (-r) 0
stack size (kbytes, -s) unlimited
cpu time (seconds, -t) unlimited
max user processes (-u) 1027790
virtual memory (kbytes, -v) unlimited
file locks (-x) unlimited
To resolve this issue,
# badmin hshutdown
# badmin hstartup
[user1@cluster-h00 ~]$ bsub -q gpgpu -m compute-node1 -I ulimit -a
Job <32490> is submitted to queue.
core file size (blocks, -c) unlimited
data seg size (kbytes, -d) unlimited
scheduling priority (-e) 0
file size (blocks, -f) unlimited
pending signals (-i) 515133
max locked memory (kbytes, -l) unlimited
max memory size (kbytes, -m) unlimited
open files (-n) 4096
pipe size (512 bytes, -p) 8
POSIX message queues (bytes, -q) 819200
real-time priority (-r) 0
stack size (kbytes, -s) unlimited
cpu time (seconds, -t) unlimited
max user processes (-u) 515133
virtual memory (kbytes, -v) unlimited
file locks (-x) unlimited
Algorithm negotiation failed for SSH Secure Shell Client
If you are using the dated SSH Secure Shell Client 3.2.9, you may have issues connecting to more up-to-date OpenSSH servers, failing with "Algorithm negotiation failed".
If you cannot change the client (which is the recommended fix), you will have to update the OpenSSH server configuration on Linux. Add this in:
# vim /etc/ssh/sshd_config
# Ciphers
Ciphers aes128-cbc,aes192-cbc,aes256-cbc,blowfish-cbc,arcfour
KexAlgorithms diffie-hellman-group1-sha1

If you are using Centrify-OpenSSH, you have to modify /etc/centrifydc/ssh/sshd_config and do the same.
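After editing sshd_config, restart the SSH daemon for the new Ciphers and KexAlgorithms to take effect; on a CentOS 6-style init that would be the following (restart the corresponding Centrify sshd service instead if you are on Centrify-OpenSSH):
# service sshd restart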
Wednesday, August 10, 2016
Booting using UEFI for Lenovo Server Hardware
To boot CentOS 7.2 using UEFI on Lenovo server hardware, do the following:
Step 1: Choose Boot Manager.
Step 2: Scroll down and look for Boot Mode.
Step 3: In Boot Mode, set the System Boot Mode to "UEFI and Legacy Only".
Step 4: Go back to Boot Manager.
Step 5: Choose Boot From File.
Step 6: Choose No Volume Label.
Step 7: Choose EFI.
Step 8: Choose Boot.
Step 9: Choose bootx64.