Tuesday, September 30, 2014

Listing and cleaning out old xauth entries on CentOS 5

When you do X-forwarding,

$ ssh -X somehost.com

and then run xauth list to check on your X-forwarding session, you will see an xauth entry something like:

$ xauth list

current-local-server:17  MIT-MAGIC-COOKIE-1  395f7b22fb6087a29b5fb1c9e37577c0

Somehow, even after exiting the X-forwarding session, the entry is still found in the xauth list.

To clear the xauth entries, you can take a look at Clean up old xauth entries. In that blog entry, the author removes every listed entry in one pass:
$ xauth list | cut -f1 -d\  | xargs -i xauth remove {}
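The cut stage of that pipeline keeps only the display name, which is what xauth remove expects. A minimal illustration, run on a sample line in the `xauth list` format (hostname and cookie below are made up):

```shell
# Sample line in the format `xauth list` prints (values are made up)
line='somehost.com:17  MIT-MAGIC-COOKIE-1  395f7b22fb6087a29b5fb1c9e37577c0'
# cut -f1 -d' ' keeps only the first space-separated field: the display name
printf '%s\n' "$line" | cut -f1 -d' '
```

Two caveats: on newer findutils, `xargs -i` is deprecated in favour of `xargs -I{}`, and the pipeline removes every listed entry, not just stale ones, so your current session's cookie goes too.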

Friday, September 26, 2014

Critical Security Vulnerability: Bash Code Injection Vulnerability, aka Shellshock (CVE-2014-6271)

A critical vulnerability in the Bourne-again shell, commonly known as Bash, which is present in most Linux and UNIX distributions as well as Apple’s Mac OS X, has been found, and administrators are being urged to patch and remediate immediately. Do read https://securityblog.redhat.com/2014/09/24/bash-specially-crafted-environment-variables-code-injection-attack/

The flaw discovered allows an attacker to remotely attach a malicious executable to a variable that is executed when Bash is invoked. 

A list of operating systems with updates is available at https://access.redhat.com/articles/1200223

Proof-of-concept code for exploiting Bash-using CGI scripts to run code with the same privileges as the web server is already floating around the web. A simple Wget fetch can trigger the bug on a vulnerable system.


Diagnostic Steps
To test if your version of Bash is vulnerable to this issue, run the following command:
$ env x='() { :;}; echo vulnerable'  bash -c "echo this is a test"
If the output of the above command looks as follows:
vulnerable
this is a test

you are using a vulnerable version of Bash. The patch used to fix this issue ensures that no code is allowed after the end of a Bash function. Thus, if you run the above example with the patched version of Bash, you should get an output similar to:
$ env x='() { :;}; echo vulnerable'  bash -c "echo this is a test"
bash: warning: x: ignoring function definition attempt
bash: error importing function definition for `x'
this is a test
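The same diagnostic can be wrapped so that a script or configuration-management run can act on the result; a sketch (the PATCHED/VULNERABLE labels are my own wording):

```shell
# Same Red Hat test, but scripted: a vulnerable bash executes the trailing
# `echo vulnerable` while importing the function from the environment, so the
# word appears on stdout. Warnings from a patched bash go to stderr only.
if env x='() { :;}; echo vulnerable' bash -c ':' 2>/dev/null | grep -q vulnerable; then
    echo "VULNERABLE"    # printed on an unpatched bash
else
    echo "PATCHED"       # printed on a fixed bash
fi
```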

Wednesday, September 24, 2014

haproxy unable to bind socket

After configuring haproxy and starting the haproxy service as described in
Install and Configure HAProxy on CentOS/RHEL 5/6, you might encounter the following error.
Starting haproxy: [WARNING] 265/233231 (20638) : config : log format ignored for proxy 'load-balancer-node' since it has no log address.
[ALERT] 265/233231 (20638) : Starting proxy load-balancer-node: cannot bind socket

To check what other service is listening on the port, do the following:
# netstat -anop | grep ":3389"
tcp        0      0 0.0.0.0:3389                0.0.0.0:*                   LISTEN      20606/xrdp          off (0.00/0/0)
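If you want to script this check, the PID/program column of the netstat output can be pulled apart with awk; a sketch, using an xrdp line like the one above as sample input:

```shell
# Sample netstat -anop line (the xrdp example from above)
line='tcp 0 0 0.0.0.0:3389 0.0.0.0:* LISTEN 20606/xrdp'
# Field 7 is PID/program; split on "/" to get each part separately
echo "$line" | awk '{split($7, a, "/"); print "pid=" a[1], "prog=" a[2]}'
```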

Stop the listening services
# service xrdp stop

Start the haproxy service
# service haproxy start
Starting haproxy:                                          [  OK  ]

You should not encounter any error now.

Tuesday, September 23, 2014

Centrify Error - Not authenticated: while getting service credentials: No credentials found with supported encryption

I was not able to authenticate with my password when I tried to log on with PuTTY; only the local root account was able to log on. A closer look at the log file shows:
Sep 17 12:00:00 node1 sshd[4725]: error: PAM: 
Authentication failure for user2 from
Sep 17 12:00:01 node1 adclient[7052]: WARN  audit User 'user2' not authenticated: 
while getting service credentials: 
No credentials found with supported encryption

The solution was very simple: just restart the centrifydc and centrify-sshd services.
# service centrifydc restart
# service centrify-sshd restart

Sunday, September 21, 2014

Installing dokuwiki on CentOS 6

This writeup is a modification of Installing dokuwiki on CentOS.

Step 1: Get the latest dokuwiki from http://download.dokuwiki.org/
# wget http://download.dokuwiki.org/src/dokuwiki/dokuwiki-stable.tgz
# tar -xzvf dokuwiki-stable.tgz
Step 2: Move dokuwiki files to apache directory
# mv dokuwiki-stable /var/www/html/dokuwiki
Step 3: Set Ownership and Permission for dokuwiki
# chown -R apache:root /var/www/html/dokuwiki
# chmod -R 664 /var/www/html/dokuwiki/
# find /var/www/html/dokuwiki/ -type d -exec chmod 775 {} \;
Step 4: Continue the installation via the web installer. Ignore the security warning; we can only move the data directory after installing. Fill out the form and click Save.

Step 5: Delete install.php for security
# rm /var/www/html/dokuwiki/install.php
Step 6: For security, create and move the data, bin (CLI) and conf directories out of the apache directories. Assuming apache serves only /var/www/html and /var/www/cgi-bin and not /var/www itself, this secures dokuwiki (or use a different directory):
# mkdir /var/www/dokudata
# mv /var/www/html/dokuwiki/data/ /var/www/dokudata/
# mv /var/www/html/dokuwiki/conf/ /var/www/dokudata/
# mv /var/www/html/dokuwiki/bin/ /var/www/dokudata/
Step 7: Tell dokuwiki where the moved conf directory is
# vim /var/www/html/dokuwiki/inc/preload.php
<?php
// Point DokuWiki at the conf directory relocated in Step 6
if(!defined('DOKU_CONF')) define('DOKU_CONF', '/var/www/dokudata/conf/');
// DO NOT use a closing php tag. This causes a problem with the feeds,
// among other things. For more information on this issue, please see:
// http://www.dokuwiki.org/devel:coding_style#php_closing_tags

* Note the comments on why there is no closing php tag.

Step 8: Update dokuwiki with the location of the data directory
# vim /var/www/dokudata/conf/local.php
$conf['savedir'] = '/var/www/dokudata/data/';
Step 9: Set ownership and permissions for dokuwiki again, now including the new directory
# chown -R apache:root /var/www/html/dokuwiki
# chmod -R 664 /var/www/html/dokuwiki/
# find /var/www/html/dokuwiki/ -type d -exec chmod 775 {} \;

# chown -R apache:root /var/www/dokudata
# chmod -R 664 /var/www/dokudata/
# find /var/www/dokudata/ -type d -exec chmod 775 {} \;
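Steps 3 and 9 run the same three commands on two trees; a small helper function (the name and layout are mine, not from the original guide) avoids the duplication. Using find for the file chmod also avoids `chmod -R 664` briefly stripping the execute bit from directories before the second find restores it:

```shell
# set_doku_perms OWNER:GROUP DIR...
# Applies the scheme used above: files 664 (rw-rw-r--), directories 775 (rwxrwxr-x).
set_doku_perms() {
    og="$1"; shift
    for d in "$@"; do
        chown -R "$og" "$d"
        find "$d" -type f -exec chmod 664 {} \;   # files only
        find "$d" -type d -exec chmod 775 {} \;   # directories only
    done
}
```

Usage for this guide would be:
# set_doku_perms apache:root /var/www/html/dokuwiki /var/www/dokudata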
Step 10: Go to the wiki in your browser

Thursday, September 18, 2014

Red Hat Video and Articles on SystemD for Red Hat Enterprise Linux 7

Systemd, now available in Red Hat Enterprise Linux 7, offers you shorter system startup, refined control for process startup and management, and enhanced logging through journald. Learn more about systemd and how to get started.

Do take a look at the video clips and articles offered by Red Hat. All the information can be found at Starting with systemd

Systemd Startup
Working with Systemd targets
Enabling services at runtime
Converting init scripts to systemd units
Managing services with systemd
Shutting down and hibernating the system
Controlling Systems on a remote system

and more.......

Wednesday, September 17, 2014

Adding SVG MIME Type to Apache on CentOS

What is MIME?

According to www.w3.org/services/svg-server

MIME Types (sometimes referred to as "Internet media types") are the primary method to indicate the type of resources delivered via MIME-aware protocols such as HTTP and email. User agents (such as browsers) use media types to determine whether that user agent supports that specific format, and how the content should be processed. When an SVG document is not served with the correct MIME Type in the Content-Type header, it might not work as intended by the author; for example, a browser might render the SVG document as plain text or provide a "save-as" dialog instead of rendering the image.

Step 1: To add SVG to the list of supported MIME types, simply add these lines to your /etc/httpd/conf/httpd.conf. I have placed them at around line 786

# AddType allows you to add to or override the MIME configuration
# file mime.types for specific file types.
#AddType application/x-tar .tgz
AddType image/svg+xml svg svgz
AddEncoding gzip svgz
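If you cannot edit httpd.conf directly, the type and encoding for SVG can also be set from a per-directory .htaccess file; a sketch, assuming AllowOverride FileInfo is enabled for that directory (the gzip encoding applies to .svgz, the compressed variant, not plain .svg):

```apache
# .htaccess sketch -- same effect as the httpd.conf directives, per directory
AddType image/svg+xml svg svgz
AddEncoding gzip svgz
```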

Step 2: One more thing: do ensure you have the following line in your /etc/mime.types
image/svg+xml svg svgz

Step 3: Remember to restart Apache
# service httpd restart 

Tuesday, September 16, 2014

Creating a RAM Disk on CentOS 6

Do take a look at the clear and easy-to-understand articles Difference between ramfs and tmpfs and Create a Ram Disk in Linux for more detailed information. The information in this blog post is taken from Create a Ram Disk in Linux.

There are many reasons to create a RAM disk. One is to run an isolated latency or throughput test between interconnects while discounting the effects of spinning-disk I/O that might bottleneck the test. Another is to store temp files that require very fast I/O. Nothing beats memory.

Step 1: Check how much RAM you have. Display it in MB
[root@n01 ~]# free -m
             total       used       free     shared    buffers     cached
Mem:         50276       1219      49056          0         83        555
-/+ buffers/cache:        580      49695
Swap:        25207          0      25207
You can also display it in GB with -g or in KB with -k

Step 2: Create and Mount a RAM Disk
# mkdir /mnt/ramdisk
# mount -t tmpfs -o size=16g tmpfs /mnt/ramdisk

Step 3: If you wish the RAM disk to be mounted automatically at boot, place this line in /etc/fstab
tmpfs       /mnt/ramdisk tmpfs   nodev,nosuid,noexec,nodiratime,size=16g   0 0 
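Once mounted, a quick sanity check is to write a file and look at the throughput dd reports; a sketch (the helper name is mine; conv=fdatasync makes dd flush to the filesystem before reporting, so the figure is not just page-cache speed):

```shell
# ramdisk_write_test DIR
# Writes 64 MB of zeroes into DIR and prints dd's summary line, then cleans up.
ramdisk_write_test() {
    dd if=/dev/zero of="$1/.ddtest" bs=1M count=64 conv=fdatasync 2>&1 | tail -n 1
    rm -f "$1/.ddtest"
}
```

Usage against the RAM disk from Step 2:
# ramdisk_write_test /mnt/ramdisk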

Wednesday, September 10, 2014

Accessing RAID Configuration in IBM x3650M2 or IBM x3550M2

I was trying to locate the RAID configuration on the IBM x3650M2 and IBM x3550M2. On older machines, there is an LSI configuration utility: you press Ctrl-C and you get to the WebBIOS.

To locate the LSI configuration utility on these machines:
  1. Boot to the BIOS Setup
  2. System Settings
  3. Adapter and UEFI Drivers
  4. List All Drivers and Adapter
  5. LSI (Hit Enter and you will enter the WebBIOS)

Tuesday, September 9, 2014

Tracking NetApp Cluster-Mode Performance

To track performance on NetApp Cluster-Mode storage, use the "statistics show-periodic" command:
netapp-cluster1::> statistics show-periodic
cluster:summary: cluster.cluster: 9/9/2014 09:33:29
cpu    total                   data     data     data cluster  cluster  cluster     disk     disk
busy      ops  nfs-ops cifs-ops busy     recv     sent    busy     recv     sent     read    write
---- -------- -------- -------- ---- -------- -------- ------- -------- -------- -------- --------
5%      303      303        0   2%   4.86MB    223KB      0%   1.16MB   1.17MB    685KB    571KB
5%      312      312        0   3%   8.27MB    359KB      0%   1.11MB   1001KB    679KB   39.4KB
8%      300      300        0   2%   7.29MB    495KB      0%   2.87MB   3.30MB   2.66MB   59.1KB
6%      158      158        0   1%   3.53MB    168KB      0%   2.16MB   1.51MB   2.71MB   11.1MB
5%      184      184        0   2%   4.48MB   1.22MB      0%   1.99MB   1.97MB   1.21MB   10.9MB
5%      213      213        0   1%   2.82MB    222KB      0%    902KB    749KB    240KB    671KB
3%      144      144        0   1%   2.32MB    762KB      0%    559KB    685KB   96.6KB   15.8KB
4%      199      199        0   1%   3.73MB    881KB      0%    796KB    715KB    390KB   39.6KB
7%      164      164        0   1%   4.49MB    365KB      0%   2.34MB   2.43MB   2.52MB   8.33MB
7%      115      115        0   2%   4.07MB    154KB      0%   1.23MB   1.25MB   2.41MB   9.80MB
3%      224      224        0   1%   2.72MB    163KB      0%   1.80MB    721KB    407KB    996KB
4%      220      220        0   1%   4.38MB   1.32MB      0%    451KB   1.54MB    199KB    110KB
5%      124      124        0   1%   2.97MB    157KB      0%    315KB    273KB    251KB   15.8KB
7%      153      153        0   0%   1.76MB    139KB      0%    220KB    268KB   2.54MB   1.28MB
4%      120      120        0   0%   1.30MB   80.4KB      0%    417KB    325KB   2.86MB   13.9MB
cluster:summary: cluster.cluster: 9/9/2014 09:34:01
cpu    total                   data     data     data cluster  cluster  cluster     disk     disk
busy      ops  nfs-ops cifs-ops busy     recv     sent    busy     recv     sent     read    write
---- -------- -------- -------- ---- -------- -------- ------- -------- -------- -------- --------
3%      115      115        0   0%   1.30MB   80.4KB      0%    220KB    268KB   96.6KB   15.8KB
Averages for 15 samples:
5%      195      195        0   1%   3.93MB    451KB      0%   1.22MB   1.19MB   1.32MB   3.84MB
8%      312      312        0   3%   8.27MB   1.32MB      0%   2.87MB   3.30MB   2.86MB   13.9MB

Monday, September 8, 2014

Error when sourcing or using compilervars.csh/ippvars.csh - arch: Undefined variable

If you get an error when sourcing or using compilervars.csh/ippvars.csh, the errors are as follows:
$ cd /opt/intel/composer_xe_2013_sp1.2.144/bin
$ source ./compilervars.csh intel64
arch: Undefined variable.

According to the website, Error when using compilervars.csh/ippvars.csh - arch: Undefined variable. from Intel

Problem Description:

A defect exists in the Intel® Integrated Performance Primitives (IPP) 8.1 Initial release in the ippvars.csh file distributed for Linux* (found under: /opt/intel/composer_xe_2013_sp1/ipp).

The IPP 8.1 release (Package ID: l_ipp_8.1.0.144) is available as a stand-alone download or bundled with the Intel® Composer XE 2013 SP1 Update 2 release (Package id: l_ccompxe_2013_sp1.2.144) for customers with valid licenses from the Intel Registration Center.

The defect is caused by improper initialization of an internal variable used within the script, which leads to the error “arch: Undefined variable.” when the script is sourced directly or indirectly via the compilervars.csh script (found under: /opt/intel/composer_xe_2013_sp1/bin).

This defect is fixed in the Intel® C++ Composer XE 2013 SP1 Update 3 Release (Package id: l_ccompxe_2013.1.3.174 - Version Build 20140422) now available from our Intel Registration Center.

Until the fixed release is installed, users (or sys-admins) with appropriate root privileges can edit the ippvars.csh file to insert ONLY the new line 37 as noted in the code snippet below (ahead of line 38, which is the original line 37) to set the variable arch to the value of the first incoming argument (e.g. $1):
     37    set arch="$1"
     38    if ( "$1" == "ia32_intel64" ) then
     39     setenv arch intel64
     40    endif 
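The one-line edit can also be scripted with GNU sed. A sketch, demonstrated on a stub file holding just the three lines above rather than the real ippvars.csh (which lives under the Intel install tree, e.g. /opt/intel/composer_xe_2013_sp1/ipp; on the real file, keep a backup with sed -i.bak first):

```shell
# Build a stub containing the original lines 37-39 of ippvars.csh
cat > /tmp/ippvars_stub.csh <<'EOF'
if ( "$1" == "ia32_intel64" ) then
 setenv arch intel64
endif
EOF

# Insert `set arch="$1"` immediately above the original test line
sed -i '/^if ( "$1" == "ia32_intel64" ) then/i\
set arch="$1"' /tmp/ippvars_stub.csh

# The new first line is now the workaround
head -n 1 /tmp/ippvars_stub.csh
```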

Saturday, September 6, 2014

SingAREN - Launch of SLIX, Singapore and the region's first 100Gbps High-Speed R&E Network, 28 Aug 2014

SINGAPORE, 28 August 2014 – Singapore Advanced Research and Education Network (SingAREN) announced today the launch of SingAREN-Lightwave Internet Exchange (SLIX), the first 100Gbps community network to be set up in the Southeast Asia region.

With SLIX, Singapore’s Research and Education (R&E) community will gain seamless access to a super high speed network with a hundred times more capacity than before; and enjoy bandwidth fully dedicated to their use. Built on an optical fibre core comprising dark fibres, SLIX allows resiliency, future capacity upgrade, and technology-proof network connectivity.
The new network also opens up new possibilities as a test-bed, extending database mirroring services, bilateral disaster recovery, high performance computing federation and shared services, high volume peering for content data networks and other value-adding services to the R&E community. In addition, SLIX will also enable research organisations to test different protocols for interconnections such as the Infiniband; and optical network researchers to carry out their experiments.

“SingAREN is proud to be the first to launch a 100 Gbps research and education network in the region. By increasing the network speed by ten-fold and with our suite of value-added services, SingAREN aims to facilitate collaborations amongst our local research organisations and with their international counterparts,” said A/Prof Francis Lee Bu Sung, President of SingAREN. “We would like to thank A*STAR, NTU and NUS for working closely with us to realise this network.”

Funded by SingAREN and the National Research Foundation (NRF), SLIX is a collaboration and a network built between SingAREN, the Agency for Science, Technology and Research (A*STAR), the Nanyang Technological University (NTU) and the National University of Singapore (NUS).

SingAREN selected 3D Networks to build the first 100 Gbps research and education network in the region. 3D Networks has deployed a flexible and programmable Packet Optical Platform meeting the advanced requirements of global research collaborators, and capable of scaling up to 400Gbps and beyond. 3D Networks built the DWDM network with the Ciena (NYSE: CIEN)
6500 Converged Packet Optical solution, and with Ciena’s Network Operations Centre and Network Transformation Solutions team providing management and monitoring of the network. The solution is supplemented with Brocade’s Open Flow enabled equipment.
Bluetel Networks is the fibre cable provider for SLIX.

Media coverage: The Business Times, The Straits Times, Lianhe Zaobao, CIO.com (Australia), Computerworld Singapore.