Tuesday, December 30, 2014

Forcibly kill or purge the Job in the Torque Scheduler

When a job is stuck and cannot be removed by a normal qdel, you can use the command qdel -p jobid. Do note that this command should only be used when there is no other way to kill off the job in the usual fashion, especially if the compute node is unresponsive.

# qdel -p jobID
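
A typical sequence (job ID 12345 is an example):
# qdel 12345          # try a normal delete first
# qdel -p 12345       # purge only if the job remains stuck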

References:
  1. [torqueusers] qdel will not delete

Thursday, December 25, 2014

Checking for Torque Server Version Number

To check the Torque version number, issue the command:
# qstat --version
Version: 4.2.7
Commit: xxxxxxxxxxxxxxxxxxxxxx

cannot change directory to /home/user1. Permission denied on NFS mount

If you encounter "cannot change directory to /home/user1. Permission denied" on an NFS mount when you do a su --login user1, do check the base directory permissions. If the owner and group of /home are root:root, do remember to chmod it to 755.

# ls -ld /home
drwx------ 7 root root 8192 Dec 22 15:13 home

Change the permission to
# chmod 755 /home
drwxr-xr-x 7 root root 8192 Dec 22 15:13 home
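
A quick check after the change (user1 is an example account):
# su --login user1    # should now succeed without the permission error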

Tuesday, December 23, 2014

Displaying SPICE on the VM network for RHEV 3.4

For more information, do take a look at my blog Displaying SPICE on the VM network for RHEV 3.4

The key issue is that after selecting the network to house the "Display Network", do remember to reboot all the VMs.

Monday, December 22, 2014

Using log collector in RHEV 3.3 and above to collect full log

The Log Collector utility for RHEV 3 is located at /usr/bin/rhevm-log-collector and is provided by the rhevm-log-collector package installed on the RHEV Manager system.

1. To collect all the information, use the command
# engine-log-collector
INFO: Gathering oVirt Engine information...
INFO: Gathering PostgreSQL the oVirt Engine database and log files from localhost...
Please provide the REST API password for the admin@internal oVirt Engine user (CTRL+D to skip):
About to collect information from 1 hypervisors. Continue? (Y/n): y
INFO: Gathering information from selected hypervisors...
INFO: collecting information from 192.168.50.56
INFO: finished collecting information from 192.168.50.56
Creating compressed archive...

2. To collect information from selected hosts, for example those ending in .11 and .15
# engine-log-collector --hosts=*.11,*.15

3. To collect information from the RHEV-M only
# engine-log-collector --no-hypervisors
References:
  1. https://access.redhat.com/solutions/61546

Friday, December 19, 2014

Intel NIC driver causing multicast flooding (intermittent wired network disconnection)

Symptom:
The symptoms can range from random disconnections to slowness across the entire school/building wired network. Eventually, the cause of this problem was found to be Intel chipset NIC cards (Intel I2xx/825xx series) sending out erratic and massive multicast traffic, flooding the network and driving up CPU on the switches. The references below describe the same problem faced in other environments.


Resolution:
The recommended step to resolve this problem is to upgrade the Intel NIC driver to version 19.0 or above.


References:
  1. IPv6 multicast flood during sleep from i217-LM
  2. ICMPv6 'Multicast Listener Report' messages are flooding the local network 
  3. ICMPv6 'Multicast Listener Report' messages flooding the local network

Monday, December 15, 2014

NetApp SteelStore Cloud Integrated Storage Appliance

Taken from NetApp SteelStore

Quick Overview

Use the NetApp® SteelStore™ cloud integrated storage appliance to leverage public and private cloud as part of your backup and archive strategy.

Features (from the website) include:
  • Integrates with all leading backup solutions and all major public and private cloud providers.
  • Offers complete, end-to-end security for data at rest and in flight using FIPS 140-2 certified encryption.
  • Uses efficient, variable-length inline deduplication and compression, reducing storage costs up to 90%.
  • Delivers fast, intelligent local backup and recovery based on local storage.
  • Vaults older versions to the cloud, allowing for rapid restores with offsite protection.
  • Supports policy-based data lifecycle management.
  • Scales to an effective capacity of 28PB per appliance.

Red Hat Atomic and Containers

Red Hat and Containers

Articles
  1. Small footprint, big impact: Red Hat Enterprise Linux 7 Atomic Host Beta now available
  2. Splitting the Atom: Recapping the First Atomic Application Forum
  3. Containers – There’s No Going It Alone
Atomic Video
  1.  Red Hat Enterprise Linux 7 Atomic Host & Containers
Blog About Atomic Performance
  1.  Performance Testing Red Hat Enterprise Linux 7 Atomic Host Beta on Amazon EC2

Thursday, December 11, 2014

Encountering "Write Failed: Broken Pipe" during SSH connection on CentOS

I encountered the error "Write Failed: Broken Pipe" during an SSH connection. From what I know, it is caused by the Linux server severing connections that have been idle for too long.

To solve the issue, you can do the following:

1. At your Linux Server Side, you can configure
# vim /etc/ssh/sshd_config

.....
ClientAliveInterval 60
.....
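
Optionally, you can also cap how many missed keepalive probes are tolerated before the server drops the connection; ClientAliveCountMax is a standard sshd_config option (3 here is an example value):

ClientAliveInterval 60
ClientAliveCountMax 3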


2. At your Client Side
# vim ~/.ssh/config 

.....
ServerAliveInterval 60
.....

Tuesday, December 9, 2014

LRZ HPC Cluster Storage Systems

I thought I would pen down how the LRZ HPC cluster storage systems use NetApp and the GPFS file system to support their computing needs.

Storage Systems
SuperMUC has a powerful I/O-Subsystem which helps to process large amounts of data generated by simulations.

Home file systems
Permanent storage for data and programs is provided by a 16-node NAS cluster from Netapp. This primary cluster has a capacity of 2 Petabytes and has demonstrated an aggregated throughput of more than 10 GB/s using NFSv3. Netapp's Ontap 8 "Cluster-mode" provides a single namespace for several hundred project volumes on the system. Users can access multiple snapshots of data in their home directories.

Data is regularly replicated to a separate 4-node Netapp cluster with another 2 PB of storage for recovery purposes. Replication uses Snapmirror-technology and runs with up to 2 GB/s in this setup.

Storage hardware consists of more than 3,400 SATA disks of 2 TB each, protected by double-parity RAID and integrated checksums.


Work and Scratch areas
For highest-performance checkpoint I/O, IBM's General Parallel File System (GPFS) with 10 PB of capacity and an aggregated throughput of 200 GB/s is available. The disk storage subsystems were built by DDN.


References:
  1.  SuperMUC Petascale System

Monday, December 8, 2014

Changing local and group ownership

I usually use the following command to change the ownership of a file or directory:

# chown username.usergroups myfile

But sometimes the user name itself contains a full stop. In that case, use the colon separator instead:
# chown user.name:usergroups myfile
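
For example, for a hypothetical user john.doe in group users:
# chown john.doe:users myfile    # the colon makes the user/group split unambiguous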

Thursday, December 4, 2014

Yum giving "Cannot retrieve metalink for repository: epel" Error for CentOS 6

When I was doing a yum install or a yum update on CentOS 6.4, I received the error "Cannot retrieve metalink for repository: epel".

The error is caused by the mirrorlist entries in epel.repo pointing to https instead of http. If you amend them to http, the EPEL repo will work.

In /etc/yum.repos.d/epel.repo, change to:

[epel]
.....
.....
mirrorlist=http://mirrors.fedoraproject.org/metalink?repo=epel-6&arch=$basearch

[epel-debuginfo]
.....
.....
mirrorlist=http://mirrors.fedoraproject.org/metalink?repo=epel-debug-6&arch=$basearch
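
A one-liner that makes the same change for all sections in one go (a sketch; back up epel.repo first):
# sed -i 's|^mirrorlist=https|mirrorlist=http|' /etc/yum.repos.d/epel.repo
# yum clean all    # clear the cached metadata before retrying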

References:
  1. CentOS 6.3 Instance Giving "Cannot retrieve metalink for repository: epel" Error

Thursday, November 27, 2014

Alert-Out-of-Band Security Updates for Adobe Flash Player

Adobe has released security updates for Adobe Flash Player 15.0.0.223 and earlier versions for Windows and Macintosh and Adobe Flash Player 11.2.202.418 and earlier versions for Linux.

Update Adobe Flash Player to the latest version


Understanding Formatted Capacity versus Unformated Capacity

Have you ever wondered why the formatted capacity of a hard disk is less than its unformatted capacity? Do take a look at this article: Formatted capacity confusion clarified.

Storage hardware uses the base-10 system and software uses the base-2 system, so no storage is actually lost; it is just a question of how the information is represented.
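
For example, a "1 TB" drive holds 10^12 bytes; an operating system counting in base 2 divides by 2^30 and reports it as roughly 931 GiB:
$ echo $((10**12 / 2**30))
931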

Wednesday, November 26, 2014

Compiling udunits-2.1.24 on CentOS 6

The UDUNITS package supports units of physical quantities. Its C library provides for arithmetic manipulation of units and for conversion of numeric values between compatible units.

Step 1: Download udunits-2.1.24 from ftp://ftp.unidata.ucar.edu/pub/udunits/

Step 2: Untar and compile
# tar -zxvf udunits-2.1.24.tar.gz
# cd udunits-2.1.24
# ./configure --prefix=/usr/local/udunits-2.1.24 CC=gcc CXX=g++
# make 
# make install

Compiling ANTLR 2.7.7 on CentOS 6

What is ANTLR?
ANTLR, ANother Tool for Language Recognition (formerly PCCTS), is a language tool that provides a framework for constructing recognizers, compilers, and translators from grammatical descriptions containing Java, C#, C++, or Python actions. ANTLR provides excellent support for tree construction, tree walking, and translation.

Step 1: Download ANTLR 2.7.7

Step 2: Untar ANTLR 2.7.7
# tar -zxvf antlr-2.7.7.tar.gz
# cd antlr-2.7.7

Step 3: For RHEL and CentOS, edit the source file /root/antlr-2.7.7/lib/cpp/antlr/CharScanner.hpp
# vim /root/antlr-2.7.7/lib/cpp/antlr/CharScanner.hpp

Add the following into the CharScanner.hpp file
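(a sketch of the usual fix, assuming the build fails with 'strcasecmp', 'printf' or 'atoi' not declared on newer GCC; adjust to the actual compile errors you see):

#include <cstring>   // declares strcasecmp
#include <cstdio>    // declares printf
#include <cstdlib>   // declares atoi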


Step 4: Compile the antlr-2.7.7
# ./configure --prefix=/usr/local/antlr2.7.7 --disable-examples
# make -j 8
# make install

References:
  1. http://sourceforge.net/p/nco/discussion/9830/thread/08ae0201

Saturday, November 22, 2014

HTTP Server Prone To Slow Denial Of Service Attack

1. For Apache HTTPD Server:
Upgrade to the latest version that has "mod_reqtimeout" module support available by default.
Then enable the module "mod_reqtimeout" and configure it to set the timeout and minimum data rate for receiving requests.

See the configuration below:

RequestReadTimeout header=10-20,minrate=500
RequestReadTimeout body=10,minrate=500

With these directives, Apache allows 10 seconds to receive the request headers, extended up to a maximum of 20 seconds as long as data keeps arriving at 500 bytes per second or more; the request body gets 10 seconds, likewise extended while at least 500 bytes per second arrive.

For a complete write-up see Using mod_reqtimeout to make HTTP Server less vulnerable for DOS Attack for CentOS

References:
  1. Apache Module mod_reqtimeout
  2. Using mod_reqtimeout to make HTTP Server less vulnerable for DOS Attack for CentOS

Tuesday, November 18, 2014

Install GCC 4.8.1 and other Scientific Packages via Yum on CentOS

Do take a look at Linux @ CERN for the documentation on how to use yum to install devtoolset, which contains the following packages. The latest version for CentOS 6 is devtoolset-2.1. Here is a summary from the Linux @ CERN page.

CentOS 6 / SL 6

Developer Toolset 2.1 provides the following tools:
  • gcc/g++/gfortran - GNU Compiler Collection - version 4.8.2
  • gdb - GNU Debugger - version 7.6.34
  • binutils - A GNU collection of binary utilities - version 2.23.52
  • elfutils - A collection of utilities and DSOs to handle compiled objects - version 0.155
  • dwz - DWARF optimization and duplicate removal tool - version 0.11
  • systemtap - Programmable system-wide instrumentation system - version 2.1
  • valgrind - Tool for finding memory management bugs in programs - version 3.8.1
  • oprofile - System wide profiler - version 0.9.8
  • eclipse - An Integrated Development Environment - version 4.3.1 (Kepler)

CentOS 5 / SL 5

Developer Toolset 1.1 provides the following tools:
  • gcc/g++/gfortran - GNU Compiler Collection - version 4.7.2
  • gdb - GNU Debugger - version 7.5
  • binutils - A GNU collection of binary utilities - version 2.23.51
  • elfutils - A collection of utilities and DSOs to handle compiled objects - version 0.154
  • dwz - DWARF optimization and duplicate removal tool - version 0.7
  • systemtap - Programmable system-wide instrumentation system - version 1.8
  • valgrind - Tool for finding memory management bugs in programs - version 3.8.1
  • oprofile - System wide profiler - version 0.9.7

Installation and Enablement

CentOS 6 / SL 6
Save repository information as /etc/yum.repos.d/slc6-devtoolset.repo on your system:
# cd /etc/yum.repos.d/ 
# wget -O /etc/yum.repos.d/slc6-devtoolset.repo http://linuxsoft.cern.ch/cern/devtoolset/slc6-devtoolset.repo
# yum install devtoolset-2 --nogpgcheck
# scl enable devtoolset-2 bash
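
A quick way to confirm the toolset is active inside the scl shell:
# gcc --version    # should now report the devtoolset gcc 4.8.x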

CentOS 5 / SL 5
Save repository information as /etc/yum.repos.d/slc5-devtoolset.repo on your system:
# cd /etc/yum.repos.d/
# wget -O /etc/yum.repos.d/slc5-devtoolset.repo http://linuxsoft.cern.ch/cern/devtoolset/slc5-devtoolset.repo
# yum install devtoolset-1.1
# scl enable devtoolset-1.1 bash

Monday, November 17, 2014

Comparing the Security Policies for Session Sharing in VNC, NoMachine, NX, EoD and FastX

This white paper, Comparing the Security Policies for Session Sharing in VNC, NoMachine, NX, EoD and FastX, was written by StarNet Communications.



Executive Summary 
Session sharing is the process where multiple users interact with the same desktop from remote systems. Security is a major issue in session sharing software as, by its very nature, shared sessions work around policy rules enforced by the operating system. However, the collaborative benefits of session sharing make it valuable in modern-day companies. Special care needs to be taken by session sharing software vendors to make a shared session as secure as it possibly can be, to limit the amount of damage a mismanaged session can cause to an organization. There are currently five major session sharing software tools available for Linux systems: VNC, NoMachine, NX, Exceed on Demand, and FastX.

VNC offers minimal security and its use is a major security hole for an organization. NX is the widely used predecessor to NoMachine, which has a flawed default configuration granting clients unneeded access. NoMachine offers better security, but it has several features that can be exploited. Exceed on Demand is fairly secure, but its use of an access control list that retains client permissions can be exploited to spy on the session owner. FastX offers the best security, allowing session sharing to be dynamically enabled/disabled, as well as the use of a one-time sharing key that disables sharing whenever the owner disconnects.


Sunday, November 16, 2014

Error Problem Connecting for XRDP

After running yum install xrdp and starting the service, I encountered the following error during a Remote Desktop connection to the Linux box:

connecting to sesman ip 127.0.0.1 port 3350
sesman connect ok
sending login info to session manager, please wait...
xrdp_mm_process_login_reponse: login successful for display
started connecting
connecting to 127.0.0.1 5910
error - problem connecting

At the /var/log/xrdp-sesman.log
......
[20141118-23:53:40] [ERROR] X server for display 10 startup timeout
[20141118-23:53:40] [INFO ] starting xrdp-sessvc - xpid=2998 - wmpid=2997
[20141118-23:53:40] [ERROR] X server for display 10 startup timeout
[20141118-23:53:40] [ERROR] another Xserver is already active on display 10
[20141118-23:53:40] [DEBUG] aborting connection...
[20141118-23:53:40] [INFO ] ++ terminated session:  username root, display :10.0
..... 

I had already installed the necessary GNOME Desktop packages (see Installing GNOME Desktop on CentOS 6) before installing xrdp.

But the solution is quite simple. You need to install the tigervnc-server package, not just tigervnc:
# yum install tigervnc-server

Restart the xrdp again.
# service xrdp restart

Wednesday, November 12, 2014

Red Hat Enterprise Linux Atomic Host Beta Now Available


Red Hat Enterprise Linux 7 Atomic Host is a secure, lightweight and minimized footprint operating system that is optimized to run Linux Containers. A member of the Red Hat Enterprise Linux family, Red Hat Enterprise Linux Atomic Host couples the flexible, lightweight and modular capabilities of Linux Containers with the reliability and security of Red Hat Enterprise Linux in a reduced image size.

Red Hat Enterprise Linux Atomic Host is now ready to download and test; please share your feedback with Red Hat as you work through the testing process.

Features (According to the Website):
  1. Optimised for Containers
    Deploy a secure, integrated host platform that is designed to run container images with optimizations for scalability, density, and performance.
  2. Building and Running of Containers
    Build and run image-based containers using the docker service, accessible through the Extras channel as part of a Red Hat Enterprise Linux Server subscription.
  3. Orchestration
    Build composite applications by orchestrating multiple containers as microservices on a single host instance using the Kubernetes orchestration framework.
  4. Ability to Run Red Hat Enterprise Linux Platform Images
    Deploy applications that have been developed, tested and certified for Red Hat Enterprise Linux 6 and Red Hat Enterprise Linux 7 in a container on Red Hat Enterprise Linux Atomic Host Beta.
  5. Atomic Updating and Rollback
    A new, simplified update mechanism for the host OS lets you download and deploy updated versions in a single step. With built-in retention of a previous version of the host OS, you can easily roll back to an earlier state.
  6. Security
    Secure and isolate applications with SELinux in containers, reducing potential attack surfaces and ensuring that if a container process goes down or is compromised, other applications and the host remain safe and functional.
  7. Flexibility to Deploy Across the Open Hybrid Cloud
    Deploy Red Hat Enterprise Linux 7 Atomic Host Beta to physical, virtual and public and private cloud environments, including Amazon Web Services and Google Compute Engine.

Wednesday, November 5, 2014

NTU Scales up with Hybrid Cloud with NetApp

News information regarding NTU scaling up with hybrid cloud. The same article was presented on various sites.
  1. NTU scales up with Hybrid Cloud  (Computerworld Singapore)
  2. NTU scales up with Hybrid Cloud (CIO Asia)
  3. NTU scales up with Hybrid Cloud (MIS Asia)

Friday, October 31, 2014

Platform LSF – Working with Hosts (bhost, lsload)

Taken from the Platform LSF Administration Guide. The documentation on bhosts and lsload, plus more information, can be found at Platform - Working with hosts. Although your version of LSF may be different, the commands can still be used.

Here are some excerpts.....

Host status describes the ability of a host to accept and run batch jobs in terms of daemon states, load levels, and administrative controls. The bhosts and lsload commands display host status.

1. bhosts - Displays the current status of the host:

STATUS      DESCRIPTION
ok          Host is available to accept and run new batch jobs.
unavail     Host is down, or LIM and sbatchd are unreachable.
unreach     LIM is running but sbatchd is unreachable.
closed      Host will not accept new jobs. Use bhosts -l to display the reasons.
unlicensed  Host does not have a valid license.


2. bhosts -l - Displays the closed reasons. A closed host does not accept new batch jobs:
$ bhosts -l
HOST  node001
STATUS           CPUF  JL/U    MAX  NJOBS    RUN  SSUSP  USUSP    RSV DISPATCH_WINDOW
closed_Adm      60.00     -     16      0      0      0      0      0      -

CURRENT LOAD USED FOR SCHEDULING:
r15s   r1m  r15m    ut    pg    io   ls    it   tmp   swp   mem   root maxroot
Total           0.0   0.0   0.0    0%   0.0     0    0 28656  324G   16G   60G  3e+05   4e+05
Reserved        0.0   0.0   0.0    0%   0.0     0    0     0    0M    0M    0M    0.0     0.0

processes clockskew netcard iptotal  cpuhz cachesize diskvolume
Total             404.0       0.0     2.0     2.0 1200.0     2e+04      5e+05
Reserved            0.0       0.0     0.0     0.0    0.0       0.0        0.0

processesroot   ipmi powerconsumption ambienttemp cputemp
Total                 396.0   -1.0             -1.0        -1.0    -1.0
Reserved                0.0    0.0              0.0         0.0     0.0


aa_r aa_r_dy aa_dy_p aa_r_ad aa_r_hpc fluentall fluent fluent_nox
Total         17.0    25.0   128.0    10.0    272.0      48.0   48.0       50.0
Reserved       0.0     0.0     0.0     0.0      0.0       0.0    0.0        0.0

gambit geom_trans tgrid fluent_par
Total           50.0       50.0  50.0      193.0
Reserved         0.0        0.0   0.0        0.0


3. bhosts -X - Displays condensed host groups in a condensed format:
$ bhosts -X
HOST_NAME          STATUS       JL/U    MAX  NJOBS    RUN  SSUSP  USUSP    RSV
comp027            ok              -     16      0      0      0      0      0
comp028            ok              -     16      0      0      0      0      0
comp029            ok              -     16      0      0      0      0      0
comp030            ok              -     16      0      0      0      0      0
comp031            ok              -     16      0      0      0      0      0
comp032            ok              -     16      0      0      0      0      0
comp033            ok              -     16      0      0      0      0      0


4. bhosts -l hostID - Displays all information about a specific server host, such as the CPU factor and the load thresholds to start, suspend, and resume jobs:
# bhosts -l comp067
HOST  comp067
STATUS           CPUF  JL/U    MAX  NJOBS    RUN  SSUSP  USUSP    RSV DISPATCH_WINDOW
ok              60.00     -     16      0      0      0      0      0      -

CURRENT LOAD USED FOR SCHEDULING:
r15s   r1m  r15m    ut    pg    io   ls    it   tmp   swp   mem   root maxroot
Total           0.0   0.0   0.0    0%   0.0     0    0 13032  324G   16G   60G  3e+05   4e+05
Reserved        0.0   0.0   0.0    0%   0.0     0    0     0    0M    0M    0M    0.0     0.0

processes clockskew netcard iptotal  cpuhz cachesize diskvolume
Total             406.0       0.0     2.0     2.0 1200.0     2e+04      5e+05
Reserved            0.0       0.0     0.0     0.0    0.0       0.0        0.0

processesroot   ipmi powerconsumption ambienttemp cputemp
Total                 399.0   -1.0             -1.0        -1.0    -1.0
Reserved                0.0    0.0              0.0         0.0     0.0

aa_r aa_r_dy aa_dy_p aa_r_ad aa_r_hpc fluentall fluent fluent_nox
Total         18.0    25.0   128.0    10.0    272.0      47.0   47.0       50.0
Reserved       0.0     0.0     0.0     0.0      0.0       0.0    0.0        0.0

gambit geom_trans tgrid fluent_par
Total           50.0       50.0  50.0      193.0
Reserved         0.0        0.0   0.0        0.0

LOAD THRESHOLD USED FOR SCHEDULING:
r15s   r1m  r15m   ut      pg    io   ls    it    tmp    swp    mem
loadSched   -     -     -     -       -     -    -     -     -      -      -
loadStop    -     -     -     -       -     -    -     -     -      -      -

root maxroot processes clockskew netcard iptotal   cpuhz cachesize
loadSched     -       -         -         -       -       -       -         -
loadStop      -       -         -         -       -       -       -         -

diskvolume processesroot    ipmi powerconsumption ambienttemp cputemp
loadSched        -             -       -                -           -       -
loadStop         -             -       -                -           -       -


5. lsload - Displays the current state of the host:

STATUS      DESCRIPTION
ok          Host is available to accept and run batch jobs and remote tasks.
-ok         LIM is running but RES is unreachable.
busy        Does not affect batch jobs; only used for remote task placement (i.e., lsrun). The value of a load index exceeded a threshold (configured in lsf.cluster.cluster_name, displayed by lshosts -l). Indices that exceed thresholds are identified with an asterisk (*).
lockW       Does not affect batch jobs; only used for remote task placement (i.e., lsrun). Host is locked by a run window (configured in lsf.cluster.cluster_name, displayed by lshosts -l).
lockU       Will not accept new batch jobs or remote tasks. An LSF administrator or root explicitly locked the host using lsadmin limlock, or an exclusive batch job (bsub -x) is running on the host. Running jobs are not affected. Use lsadmin limunlock to unlock LIM on the local host.
unavail     Host is down, or LIM is unavailable.
unlicensed  The host does not have a valid license.


References:
  1. Platform - Working with hosts

Thursday, October 30, 2014

Killing all the processes belonging to a single user

If you need to kill all the processes belonging to a user, you may want to consider this command:

# pkill -u user

Alternatively, you can log on as the user whose jobs you wish to eliminate and use the command below. Remember to log on as that person and not as root, or you will kill root's processes.
$ kill -9 -1
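
If a plain pkill does not clear everything, you can send SIGKILL explicitly (user1 is an example name):
# pgrep -u user1       # list the user's PIDs first as a sanity check
# pkill -9 -u user1    # forcibly kill all processes owned by user1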

Wednesday, October 29, 2014

Unable to boot HP Elitebook 2730p with USB CD-ROM



If you are using an old HP EliteBook 2730p and the BIOS is not able to recognize an attached USB-powered DVD-ROM, do the following:

Step 1: Go to the HP Elitebook 2730p Drivers & Software

Step 2: Download the ROMPaq for HP Notebook System BIOS (68POU) - FreeDOS Bootable Media (International). Apparently the original BIOS has a bug which causes issues when booting with a USB DVD-ROM.

Step 3: Use a thumb drive of 2 GB or below and insert it into your PC's USB port. Format it as FAT and run sp50060.exe. This will flash and update the BIOS.

Step 4: Boot with the USB DVD-ROM and you can install any OS..... :)

Tuesday, October 28, 2014

Common Administrative Commands for RHEL and CentOS 5,6,7

This Common Administrative Commands poster from Red Hat for RHEL and CentOS 5, 6 and 7 is something I really appreciate as a system administrator. Read it for yourself and you will see what I mean.

  1. RHEL 5 6 7 Administrative Commands Cheatsheet

How to set up AutoSupport for NetApp Data ONTAP


This is taken from the Data ONTAP 8.1 System Administration Guide for Cluster-Mode, page 142. See the attached document: How to setup AutoSupport (pdf)

Monday, October 27, 2014

Data OnTap 7-Mode to Cluster-Mode Command Map


If you have been using Data OnTap 7-Mode and need the equivalent commands for Cluster-Mode, do look at this PDF for the mapping. You will find it very useful.

For more information, do take a look at Data OnTap 7-Mode to Cluster-Mode Command Map

Sunday, October 26, 2014

The Spice Project


Taken from Spice Project Site

The Spice project aims to provide a complete open source solution for interaction with virtualized desktop devices. The Spice project deals with both the virtualized devices and the front-end. Interaction between front-end and back-end is done using VD-Interfaces. The VD-Interfaces (VDI) enable both ends of the solution to be easily utilized by a third-party component.



The Spice project plans to provide additional solutions, including:
  1. Remote access for a physical machine
  2. VM front-end for local users (i.e., render on and share devices of the same physical machine)
Downloads:
  1.  Client Downloads

Friday, October 17, 2014

Protecting Servers from SSLv3 "POODLE" Vulnerability

The Secure Sockets Layer version 3.0 is an old version of security technology for establishing an encrypted link between a server and a client.

A vulnerability, known as POODLE ("Padding Oracle On Downgraded Legacy Encryption"), was reported in SSLv3. An attacker can exploit this vulnerability to obtain users’ cookies and compromise users’ accounts.

This vulnerability has been assigned a CVE number: CVE-2014-3566. For more information, do take a look at Security Vulnerability Alert: POODLE SSLv3.0 vulnerability

Web system owners are also advised to disable SSLv3 and enable TLS_FALLBACK_SCSV to maintain interoperability.


Do take a look at How To Protect your Server Against the POODLE SSLv3 Vulnerability on how to protect your servers from SSLv3 "POODLE" Vulnerability


Step 1. For the CentOS / Red Hat variety, edit the SSL configuration:
# vim /etc/httpd/conf.d/ssl.conf

Step 2. Find the SSLProtocol directive and set:
SSLProtocol all -SSLv3 -SSLv2

Step 3. Restart the httpd services
# service httpd restart
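
To verify that SSLv3 is now refused (localhost is an example host), the handshake below should fail:
# openssl s_client -connect localhost:443 -ssl3 < /dev/null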

References
  1.  How To Protect your Server Against the POODLE SSLv3 Vulnerability
  2. Apache - SSLProtocol Directive

Tools to speed up kernel crash hang analysis with the kernel log

This is a summarized article taken from RHEL6: Speeding up kernel crash / hang analysis with the kernel log. When there is a kernel crash or hang, a very large file containing a memory dump of the entire system, called a vmcore, is often produced. Analysis of the kernel crash or hang often requires this large file to be uploaded to Red Hat (if you have a subscription).

For RHEL 6.4 and above: Starting with Red Hat Enterprise Linux 6.4 and kexec-tools-2.0.0-258.el6, the kdump process will dump the kernel log to a file called vmcore-dmesg.txt before creating the vmcore file.
# ls /var/crash/127.0.0.1-2012-11-21-09\:49\:25/
vmcore  vmcore-dmesg.txt
# cp /var/crash/127.0.0.1-2012-11-21-09\:49\:25/vmcore-dmesg.txt /tmp/00123456-vmcore-dmesg.txt

For RHEL 6.0 to RHEL 6.3, do take a look at Speeding up kernel crash hang analysis with the kernel log.

Thursday, October 16, 2014

Leaked Dropbox Password

Taken from SINGCERT

Online reports have revealed that some Dropbox accounts have been compromised. According to Dropbox’s media statement, the usernames and passwords were stolen from other services and they have since reset the "small number" of affected accounts.

  • Change your Dropbox passwords as soon as possible. If other accounts share the same password as your Dropbox account, it's recommended to change the passwords of those accounts as well.
  • Enable 2-factor authentication (2FA) for your Dropbox account. For more information on enabling 2FA in Dropbox, please refer to https://www.dropbox.com/help/363
  • Be selective about using your Dropbox account to sign in to third-party services.
References:
  1. https://www.singcert.org.sg/alerts/21-latest/630-singcert-leaked-dropbox-passwords
  2. http://www.cnet.com/news/hackers-hold-7-million-dropbox-passwords-ransom/
  3. http://www.zdnet.com/dropbox-blames-other-services-for-claimed-7-million-password-hack-7000034629/
  4. http://thenextweb.com/apps/2014/10/14/dropbox-passwords-leak-online-alleged-hack/


Wednesday, October 15, 2014

Security Vulnerability Alert: POODLE SSLv3.0 vulnerability

Description:
On 14 October, Google researchers released details of a vulnerability in SSL 3.0 which could allow a malicious user to decrypt contents that were supposedly encrypted when visiting SSL-enabled websites. Named the POODLE attack (Padding Oracle On Downgraded Legacy Encryption), it is a padding attack that targets CBC ciphers in SSLv3.

A detailed analysis report of the POODLE exploit by the Google researchers can be found here: https://www.openssl.org/~bodo/ssl-poodle.pdf

Impact
Websites that support SSLv3.0 and CBC cipher-mode chaining are vulnerable to the attack. According to the report, the flaw allows attackers to steal secure HTTP cookies and headers, among other sensitive data.

Mitigation
  • Google researchers recommend that support for SSLv3.0 be disabled, either on the end-user browser, on the server end, or both, as well as on others that rely on downgraded connections (Warning: doing this may “break” connectivity to web applications that are only able to support up to SSLv3.0 and do not support TLS 1.0, TLS 1.1 or TLS 1.2).
  • If the above is not possible, Google recommends implementing support for “TLS_FALLBACK_SCSV”, the Transport Layer Security Signalling Cipher Suite Value that "prevents protocol downgrade attacks": https://tools.ietf.org/html/draft-ietf-tls-downgrade-scsv-00

    “This is a mechanism that solves the problems caused by retrying failed connections and thus prevents attackers from inducing browsers to use SSL 3.0. It also prevents downgrades from TLS 1.2 to 1.1 or 1.0 and so may help prevent future attacks,” explained Möller.
More Information
  1. http://thenextweb.com/google/2014/10/15/web-encryption-vulnerability-opens-encrypted-data-hackers/
  2. http://googleonlinesecurity.blogspot.sg/2014/10/this-poodle-bites-exploiting-ssl-30.html
  3. http://blog.erratasec.com/2014/10/some-poodle-notes.html
  4. http://www.theregister.co.uk/2014/10/14/google_drops_ssl_30_poodle_vulnerability/
  5. Mozilla Blog - https://blog.Mozilla.org/security/2014/10/14/the-poodle-attack-and-the-end-of-ssl-3-0/
  6. Microsoft - Disabling SSL 3.0 on Servers - http://support.Microsoft.com/kb/187498
  7. Mozilla Add-On - Disabling SSL 3.0 on Mozilla Browser - https://addons.mozilla.org/en-US/firefox/addon/ssl-version-control/

Friday, October 10, 2014

Deploying HAProxy 1.4.24 to load-balance MS Terminal Services on CentOS 6

HAProxy is an open source, free, very fast and reliable solution offering high availability, load balancing and proxying for TCP and HTTP-based applications. It is particularly suited to very high traffic web sites and powers quite a number of the world's most visited ones. Over the years it has become the de facto standard open source load balancer, is now shipped with most mainstream Linux distributions, and is often deployed by default in cloud platforms.

The content of this blog entry is taken from Load balancing Windows Terminal Server – HAProxy and RDP Cookies or Microsoft Connection Broker

In this blog entry, we will put together a sample working HAProxy configuration to load balance Microsoft Terminal Services.

 Step 1: Install haproxy
# yum install haproxy

Step 2: Modify /etc/haproxy/haproxy.cfg  
#---------------------------------------------------------------------
# Global settings
#---------------------------------------------------------------------
global
# to have these messages end up in /var/log/haproxy.log you will
# need to:
#
# 1) configure syslog to accept network log events. This is done
# by adding the '-r' option to the SYSLOGD_OPTIONS in
# /etc/sysconfig/syslog
#
# 2) configure local2 events to go to the /var/log/haproxy.log
# file. A line like the following can be added to
# /etc/sysconfig/syslog
#
# local2.* /var/log/haproxy.log
#
log 127.0.0.1 local2

chroot /var/lib/haproxy
pidfile /var/run/haproxy.pid
maxconn 4500
user haproxy
group haproxy
daemon
stats socket /var/lib/haproxy/stats

#---------------------------------------------------------------------
# common defaults that all the 'listen' and 'backend' sections will
# use if not designated in their block
#---------------------------------------------------------------------
defaults
timeout queue 1m
timeout connect 60m
timeout client 60m
timeout server 60m

# -------------------------------------------------------------------
# [RDP Site Configuration]
# -------------------------------------------------------------------
listen cattail 155.69.57.11:3389
mode tcp
tcp-request inspect-delay 5s
tcp-request content accept if RDP_COOKIE
persist rdp-cookie
balance leastconn
option tcpka
option tcplog
server win2k8-1 192.168.6.48:3389 weight 1 check inter 2000 rise 2 fall 3
server win2k8-2 192.168.6.47:3389 weight 1 check inter 2000 rise 2 fall 3
option redispatch

listen stats :1936
mode http
stats enable
stats hide-version
stats realm Haproxy\ Statistics
stats uri /
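
Before restarting, you can syntax-check the configuration with haproxy's check mode:
# haproxy -c -f /etc/haproxy/haproxy.cfg
# service haproxy restart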

Information:
  • timeout client and timeout server are set to 1 hour (60m) to keep idle RDP sessions established
  • persist rdp-cookie instructs HAProxy to inspect the incoming RDP connection for a cookie; if one is found, it is used to persistently direct the connection to the correct real server
  • The 2 tcp-request lines help to ensure that HAProxy sees the cookie on the initial request.

Friday, October 3, 2014

VMware has released product updates to address the Bash security vulnerabilities

VMware released product updates to address the Bash ("Shellshock") security vulnerabilities on 01/10/14.

It is found at http://www.vmware.com/security/advisories/VMSA-2014-0010.html

Reports from honeypot systems have shown that malicious individuals are actively scanning for vulnerable, unpatched systems and attempting to execute commands by simply passing URL/command parameters.

https://www.alienvault.com/open-threat-exchange/blog/attackers-exploiting-shell-shock-cve-2014-6721-in-the-wild
http://blog.sucuri.net/2014/09/bash-shellshocker-attacks-increase-in-the-wild-day-1.html

Thursday, October 2, 2014

Unable to open socket connection to xcatd daemon on localhost:3001.

When I did a tabedit site to check my configuration, I encountered this error
Unable to open socket connection to xcatd daemon on localhost:3001.
Verify that the xcatd daemon is running and that your SSL setup is correct.

The solution to this error is quite easy. You just need to check your /etc/hosts and make sure the localhost entry has not been commented out. In other words, make sure you have a line like this in your /etc/hosts:
127.0.0.1       localhost.localdomain                   localhost
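
Then restart the daemon and try again:
# service xcatd restart
# tabedit site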

That's it.....

Wednesday, October 1, 2014

Compiling VASP 5.3.5 with OpenMPI 1.6.5 and Intel 12.1.5

Just compiled VASP 5.3.5 with OpenMPI 1.6.5 and Intel 12.1.5. Do take a look at Compiling VASP 5.3.5 with OpenMPI 1.6.5 and Intel 12.1.5

Tuesday, September 30, 2014

Listing and cleaning out old xauth entries on CentOS 5

When we do a X-forwarding,

$ ssh -X somehost.com

and if you do an xauth list to check on your X-forwarding session, you can see an xauth entry something like:

$ xauth list

..... 
current-local-server:17  MIT-MAGIC-COOKIE-1  395f7b22fb6087a29b5fb1c9e37577c0
.....

Somehow, after exiting the X-forwarding session, the entry is still found in the xauth list.

To clear the xauth entries, you can take a look at Clean up old xauth entries

In that blog entry, the author uses:
$ xauth list | cut -f1 -d\  | xargs -i xauth remove {}

This lists all entries, extracts the display name (the first field), and removes each one.

Friday, September 26, 2014

Critical Security Vulnerability: Bash Code Injection Vulnerability, aka Shellshock (CVE-2014-6271)

A critical vulnerability in the Bourne-again shell, commonly known as Bash, which is present in most Linux and UNIX distributions as well as Apple’s Mac OS X, has been found, and administrators are being urged to patch and remediate immediately. Do read https://securityblog.redhat.com/2014/09/24/bash-specially-crafted-environment-variables-code-injection-attack/

The flaw discovered allows an attacker to remotely attach a malicious executable to an environment variable that is executed when Bash is invoked.

Operating systems with updates include:
CentOS
Debian
Redhat
More info: https://access.redhat.com/articles/1200223

Proof-of-concept code for exploiting Bash-using CGI scripts to run code with the same privileges as the web server is already floating around the web. A simple Wget fetch can trigger the bug on a vulnerable system.

http://www.theregister.co.uk/2014/09/24/bash_shell_vuln/
http://www.wordfence.com/blog/2014/09/major-bash-vulnerability-disclosed-may-affect-a-large-number-of-websites-and-web-apps/

Diagnostic Steps
To test if your version of Bash is vulnerable to this issue, run the following command:
$ env x='() { :;}; echo vulnerable'  bash -c "echo this is a test"
If the output of the above command looks as follows:
vulnerable
this is a test

you are using a vulnerable version of Bash. The patch used to fix this issue ensures that no code is allowed after the end of a Bash function. Thus, if you run the above example with the patched version of Bash, you should get output similar to:
$ env x='() { :;}; echo vulnerable'  bash -c "echo this is a test"
bash: warning: x: ignoring function definition attempt
bash: error importing function definition for `x'
this is a test

Wednesday, September 24, 2014

haproxy unable to bind socket


After configuring HAProxy and starting the haproxy service as described in
Install and Configure HAProxy on CentOS/RHEL 5/6, you might encounter the following error:
Starting haproxy: [WARNING] 265/233231 (20638) : config : log format ignored for proxy 'load-balancer-node' since it has no log address.
[ALERT] 265/233231 (20638) : Starting proxy load-balancer-node: cannot bind socket

To check what other service is listening on the port, do the following:
# netstat -anop | grep ":3389"
tcp        0      0 0.0.0.0:3389                0.0.0.0:*                   LISTEN      20606/xrdp          off (0.00/0/0)

Stop the listening services
# service xrdp stop

Start the haproxy service
# service haproxy start


You should not encounter any error now.

Tuesday, September 23, 2014

Centrify Error - Not authenticated: while getting service credentials: No credentials found with supported encryption


I was not able to authenticate with my password when I tried to log on with PuTTY. A closer look at the log file shows the following; only the local account root was able to log on:
Sep 17 12:00:00 node1 sshd[4725]: error: PAM: 
Authentication failure for user2 from 192.168.1.5
Sep 17 12:00:01 node1 adclient[7052]: WARN  audit User 'user2' not authenticated: 
while getting service credentials: 
No credentials found with supported encryption

The solution was very simple. Just restart the centrifydc and centrify-sshd services:
# service centrifydc restart
# service centrify-sshd restart


Sunday, September 21, 2014

Installing dokuwiki on CentOS 6

This writeup is a modification of Installing dokuwiki on CentOS.

Step 1: Get the latest dokuwiki from http://download.dokuwiki.org/
# wget http://download.dokuwiki.org/src/dokuwiki/dokuwiki-stable.tgz
# tar -xzvf dokuwiki-stable.tgz
Step 2: Move the dokuwiki files to the apache directory
# mv dokuwiki-stable /var/www/html/dokuwiki
Step 3: Set Ownership and Permission for dokuwiki
# chown -R apache:root /var/www/html/dokuwiki
# chmod -R 664 /var/www/html/dokuwiki/
# find /var/www/html/dokuwiki/ -type d -exec chmod 775 {} \;
Step 4: Continue the installation at http://192.168.1.1/dokuwiki/install.php. Ignore the security warning; we can only move the data directory after installing. Fill out the form and click Save.

Step 5: Delete install.php for security
# rm /var/www/html/dokuwiki/install.php
Step 6: Create and move the data, bin (CLI) and conf directories out of the apache directories for security. Assuming apache accesses only /var/www/html and /var/cgi-bin and not /var/www, this secures dokuwiki (or use a different directory):
# mkdir /var/www/dokudata
# mv /var/www/html/dokuwiki/data/ /var/www/dokudata/
# mv /var/www/html/dokuwiki/conf/ /var/www/dokudata/
# mv /var/www/html/dokuwiki/bin/ /var/www/dokudata/
Step 7: Update dokuwiki with the location of the conf directory
# vim /var/www/html/dokuwiki/inc/preload.php
<?php
// DO NOT use a closing php tag. This causes a problem with the feeds,
// among other things. For more information on this issue, please see:
// http://www.dokuwiki.org/devel:coding_style#php_closing_tags

define('DOKU_CONF','/var/www/dokudata/conf/');
* Note the comments on why there is no closing php tag.

Step 8: Update dokuwiki with the location of the data directory
# vim /var/www/dokudata/conf/local.php
$conf['savedir'] = '/var/www/dokudata/data/';
Step 9: Set permissions for dokuwiki again for the new directories, with the same permissions as before
# chown -R apache:root /var/www/html/dokuwiki
# chmod -R 664 /var/www/html/dokuwiki/
# find /var/www/html/dokuwiki/ -type d -exec chmod 775 {} \;

# chown -R apache:root /var/www/dokudata
# chmod -R 664 /var/www/dokudata/
# find /var/www/dokudata/ -type d -exec chmod 775 {} \;
Step 10: Go to the wiki at http://192.168.1.1/dokuwiki/