Saturday, March 30, 2013
Obtaining Microsoft TrueType fonts on Linux for CentOS 6
The article titled msttcorefonts on RHEL6 / CentOS 6 / SL6 provides a tutorial on obtaining the Microsoft TrueType fonts on Linux. Happy reading.
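As a rough sketch of the approach described there (the package names, spec version and download URL below are taken from the common corefonts recipe and may have changed), the fonts are built into an RPM from Microsoft's cab files and then installed:
# yum install rpm-build cabextract curl xorg-x11-font-utils fontconfig
# wget http://corefonts.sourceforge.net/msttcorefonts-2.5-1.spec
# rpmbuild -bb msttcorefonts-2.5-1.spec
# yum localinstall ~/rpmbuild/RPMS/noarch/msttcorefonts-2.5-1.noarch.rpm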
Tuesday, March 26, 2013
Configuring and Compiling PAM Module for Torque 2.5 Server
If you are compiling the PAM module for the Torque server and clients, do remember to include the following in the Torque 2.5 configure options:
./configure \
    --prefix=/opt/torque \
    --exec-prefix=/opt/torque/x86_64 \
    --enable-docs \
    --disable-gui \
    --with-server-home=/var/spool/torque \
    --enable-syslog \
    --with-scp \
    --disable-rpp \
    --disable-spool \
    --with-pam
For more information on how to compile Torque, see the related post.
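As a rough sketch of the follow-up steps (the module name pam_pbssimpleauth.so and the paths below reflect a typical Torque 2.5 build and may differ on your system), you build and install as usual, then reference the PAM module on each compute node so that only users with a running job on that node can log in via SSH:
# make && make install
# ls /lib64/security/pam_pbssimpleauth.so
# echo "account    required    pam_pbssimpleauth.so" >> /etc/pam.d/sshd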
Monday, March 25, 2013
Hadoop Ecosystems Listing
A non-exhaustive list of Hadoop ecosystem projects is given below. The list is taken from the nice read
Hadoop: The Definitive Guide by Tom White.
Avro
A serialization system for efficient, cross-language RPC and persistent data storage.
1. Hadoop Distributed File System (HDFS)
A distributed filesystem that runs on large clusters of commodity machines.
2. Hive
A distributed data warehouse. Hive manages data stored in HDFS and provides a query language based on SQL (and which is translated by the runtime engine to MapReduce jobs) for querying the data.
3. HBase
A distributed, column-oriented database. HBase uses HDFS for its underlying storage, and supports both batch-style computations using MapReduce and point queries (random reads).
4. MapReduce
A distributed data processing model and execution environment that runs on large clusters of commodity machines.
5. Oozie
A service for managing workflows of Hadoop jobs
6. Pig
A data flow language and execution environment for exploring very large datasets. Pig runs on HDFS and MapReduce clusters.
7. Sqoop
A tool for efficient bulk transfer of data between structured data stores (such as relational databases) and HDFS.
8. ZooKeeper
A distributed, highly available coordination service. ZooKeeper provides primitives such as distributed locks that can be used for building distributed applications.
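For a quick, minimal taste of the stack (the directory and file names here are only illustrative), the HDFS component above is driven from the hadoop command-line shell:
$ hadoop fs -mkdir /user/kittycool/data
$ hadoop fs -put mylocalfile.txt /user/kittycool/data/
$ hadoop fs -ls /user/kittycool/data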
Thursday, March 21, 2013
Using yum to manage software groups for CentOS
This is a simple entry and probably you know it already. Just a reminder for me too :)
List the available groups from all yum repos
# yum grouplist
List the description and package list of a group
# yum groupinfo "General Purpose Desktop"
Remove all of the packages in a group
# yum groupremove "System administration tools"
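To complete the set, installing every package in a group works the same way:
# yum groupinstall "General Purpose Desktop"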
Thursday, March 14, 2013
First peta-scale HPC to use warm-water liquid cooling
The article is taken from National Renewable Energy Lab Picks Xeon Phi
Here are some excerpts from the Article
The National Renewable Energy Laboratory (NREL) hosts the first peta-scale HPC system to use warm-water liquid cooling, earning it the world’s number one rating in power-usage effectiveness: PUE = 1.06!
......
......
.......
The direct-component liquid cooling system supplies servers with warm water (75 degrees Fahrenheit) that is piped over processors to remove excess heat, returning water heated to approximately 100 degrees F. Because only 25 degrees of heat must be removed by the water-cooling system, the energy efficient set-up eliminates the power-hungry compressors needed for traditional air-cooling systems.
Excess heat generated by the new HPC will function as the primary room-heating technology for its Golden, Colo., data center, and will heat walkways outside buildings to melt snow and ice. This holistic approach, defined by the NREL’s Energy Systems Integration Facility (ESIF), will save as much as $1 million per year on power needed to run a conventional air-cooled data center.
Free Instructional Videos from VMware
Free instructional videos delivered by VMware's education instructors. These short videos provide helpful product overviews and detailed demonstrations of installing, configuring and deploying VMware-related products. These include:
- vSphere 5.x
- ESXi
- Site Recovery Manager
- vCenter Operations, Orchestrator, Protect
- vCloud Director
- vFabric/Spring
- View
- vSphere Storage Appliance
- Zimbra
Wednesday, March 13, 2013
Backup for /etc/passwd, /etc/group, /etc/shadow
/etc/passwd, /etc/group and /etc/shadow are essential files on a Linux system. You should not be surprised that Linux keeps a backup copy of each of them. They are represented by
/etc/passwd-
/etc/group-
/etc/shadow-
So in case any of your /etc/passwd, /etc/group or /etc/shadow files are corrupted, just copy the corresponding backup over the damaged file.
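For example, to restore a corrupted /etc/passwd from its backup copy (and likewise for /etc/group and /etc/shadow):
# cp -p /etc/passwd- /etc/passwd
# cp -p /etc/group- /etc/group
# cp -p /etc/shadow- /etc/shadow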
Tuesday, March 12, 2013
Independent Test: Xeon Phi Shocks Tesla GPU
This article is taken from Go Parallel: Independent Test: Xeon Phi Shocks Tesla GPU
Intel’s Xeon Phi coprocessor outperforms Nvidia’s Tesla graphic-processing unit (GPU) on the operations used by “solver” applications in science and engineering, according to independent tests at Ohio State University.
When comparing Intel’s Xeon Phi to Nvidia’s Tesla, most reviewers dwell on how much easier it is to rewrite parallel programs for the Intel coprocessor, since it runs the same x86 instruction set as a 64-bit Pentium.
Nvidia’s “Cuda” cores on its Tesla coprocessor, on the other hand, do not even try to emulate the x86 instruction set, opting instead for more economical instructions that allow it to cram many more cores on a chip.
As a result, Nvidia’s Tesla has 40-times more cores (2,496) than Intel’s Xeon Phi (60). The question then becomes: “is it worth it” to rewrite x86 parallel software for Nvidia’s Cuda, in order to gain access to the thousands of more cores available with Tesla over Xeon Phi?
......
......
Do read the article Independent Test: Xeon Phi Shocks Tesla GPU
Friday, March 8, 2013
Running Linpack (HPL) Test on Linux Cluster with OpenMPI and Intel Compilers
HPL is a software package that solves a (random) dense linear system in double precision (64 bits) arithmetic on distributed-memory computers. It can thus be regarded as a portable as well as freely available implementation of the High Performance Computing Linpack Benchmark.
The algorithm used by HPL can be summarized by the following keywords: Two-dimensional block-cyclic data distribution – Right-looking variant of the LU factorization with row partial pivoting featuring multiple look-ahead depths – Recursive panel factorization with pivot search and column broadcast combined – Various virtual panel broadcast topologies – bandwidth reducing swap-broadcast algorithm – backward substitution with look-ahead of depth 1.
1. Requirements:
2. To install BLAS, LAPACK and OpenMPI, do look at:
- Building BLAS Library using Intel and GNU Compiler
- Building LAPACK 3.4 with Intel and GNU Compiler
- Building OpenMPI with Intel Compilers
- Compiling ATLAS on CentOS 5
For the configuration and compilation of hpl, see Running Linpack (HPL) Test on Linux Cluster with OpenMPI and Intel Compilers
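Once xhpl has been built, a minimal run under OpenMPI looks like the sketch below (the install path and process count are only illustrative, and P x Q in HPL.dat must match the number of MPI processes):
$ cd /opt/hpl/bin/Linux_Intel64
$ mpirun -np 8 ./xhpl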
Tuesday, March 5, 2013
Using iostat to report system input and output
The iostat command is used to monitor system input/output device loading by observing the time the devices are active in relation to their average transfer rates.
# iostat -m 2 10 -x /dev/sda1

avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           0.31    0.00    0.50    0.19    0.00   99.00

Device:  rrqm/s  wrqm/s    r/s    w/s   rMB/s   wMB/s avgrq-sz avgqu-sz   await  svctm  %util
sda1       0.00   30.00   0.00  14.00    0.00    0.17    25.14     0.85   60.71   5.50   7.70

where
“-m” = Display statistics in megabytes per second
“2 10” = report every 2 seconds, 10 times
-x = Display extended statistics.
AVG-CPU Statistics
- “%user” = % of CPU utilisation that occurred while executing at the user level (application)
- “%nice” = % of CPU utilisation that occurred while executing at the user level with nice priority
- “%system” = % of CPU utilisation that occurred while executing at the system level (kernel)
- “%iowait” = % of time that the CPU or CPUs were idle during which the system had an outstanding disk I/O request
- “%steal” = % of time spent in involuntary wait by the virtual CPU or CPUs while the hypervisor was servicing another virtual processor.
- “%idle” = % of time that the CPU or CPUs were idle and the system did not have an outstanding disk I/O request
- “rrqm/s” = The number of read requests merged per second that were queued to the device.
- “wrqm/s” = The number of write requests merged per second that were queued to the device
- “r/s” = The number of read requests that were issued to the device per second.
- “w/s” = The number of write requests that were issued to the device per second.
- “rMB/s” = The number of megabytes read from the device per second.
- “wMB/s” = The number of megabytes written to the device per second.
- “avgrq-sz” = The average size (in sectors) of the requests that were issued to the device.
- “avgqu-sz” = The average queue length of the requests that were issued to the device.
- “await” = The average time (in milliseconds) for I/O requests issued to the device to be served.
- “svctm” = The average service time (in milliseconds) for I/O requests issued to the device. This field is deprecated and should not be relied on.
- “%util” = Percentage of CPU time during which I/O requests were issued to the device (bandwidth utilization for the device).
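If you only want the device side of the report, a variation of the same command (adding -d, which restricts iostat to device utilisation and drops the CPU section) would be:
# iostat -dmx /dev/sda 2 10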
Saturday, March 2, 2013
Sequential execution of the Parallel or serial jobs on OpenPBS / Torque
If you have a requirement to execute jobs in sequence, where the 2nd job can only launch after the 1st job has completed, you can use the -W option of qsub.
For example, suppose you have a running job with job ID 12345 and you want the next job to run only after job 12345 has completed.
$ qsub -q clusterqueue -l nodes=1:ppn=8 -W depend=afterany:12345 parallel.sh -v file=mybinaryfile
You will notice that the job will hold until the 1st job has completed.
.....
.....
24328   kittycool   Hold   2   10:00:00:00   Sat Mar 2 02:20:12
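If you prefer to script the chain instead of typing the job ID by hand, a small sketch (the script names are only placeholders) is to capture the ID that qsub prints and feed it into the dependency:
$ FIRST=$(qsub -q clusterqueue first.sh)
$ qsub -q clusterqueue -W depend=afterany:$FIRST second.sh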
Reference:
Friday, March 1, 2013
xrdp_mm_process_login_response: login failed
You may encounter the error xrdp_mm_process_login_response: login failed when you use a Remote Desktop connection to connect to a VNC session.
Even if you restart xrdp, the error may still remain; the issue could be due to a locked X11 session that was created by xrdp.
To solve the issue, go to /tmp/.X11-unix/, find your X session and delete it.
# cd /tmp/.X11-unix
Do a listing
# ls -l
Look for the session owned by you that you wish to delete
.....
.....
srwxrwxrwx 1 root  root  0 Jul  9  2012 X0
srwxrwxrwx 1 user1 users 0 Jan 25 09:13 X1
srwxrwxrwx 1 user2 users 0 Jul 10  2012 X10
srwxrwxrwx 1 user3 users 0 Feb 19 13:31 X11
srwxrwxrwx 1 user4 users 0 Nov 20 15:10 X12
srwxrwxrwx 1 user5 users 0 Jul 10  2012 X13
.....
.....
Delete the session.
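For example, assuming X1 in the listing above is the stale session owned by you, remove just that socket file:
# rm -f /tmp/.X11-unix/X1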
If xrdp still fails, it may be due to orphaned X sessions. Once xrdp hits an orphaned X session, which may or may not belong to other users, the error will remain.
To see the orphaned X11 sessions, you can run vncserver, and you will see something like this:
# vncserver
Warning: Head-Node:1 is taken because of /tmp/.X11-unix/X1
Remove this file if there is no X server Head-Node:1
Delete all the orphaned X session files
Restart the xrdp service and try the remote connection.
# service xrdp restart
If you are still having the issue, do look at an alternative solution.