-{section: Map-Reduce Jobs under Condor}
+{section: Map-Reduce Jobs under HTCondor}
 
-{subsection: NOTE: If you want to try MapReduce with Condor, you want to download the file "mrscriptVanilla.tar.gz}
+{subsection: NOTE: If you want to try MapReduce with HTCondor, download the file "mrscriptVanilla.tar.gz"}
 
 {subsection: Introduction}
 
-Condor provides support for starting Hadoop HDFS services, namely Name- and Datanodes. HDFS data access is up the the user's application, through, including the usage of Hadoop's MapReduce framework. However, we provide a submit file generator script for submitting MapReduce jobs into the vanilla universe (1 jobtracker and n tasktrackers, where n is specified by the user)
+HTCondor provides support for starting Hadoop HDFS services, namely Name- and Datanodes. HDFS data access is left to the user's application, including through Hadoop's MapReduce framework. However, we provide a submit file generator script for submitting MapReduce jobs into the vanilla universe (1 jobtracker and n tasktrackers, where n is specified by the user).
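For orientation, a generated submit file might look roughly like the following. This is an illustrative sketch only: the attribute values (jar name, requirements expression, queue count) are made up, and the file that mrscriptVanilla.py actually emits may differ; only standard HTCondor submit syntax is used.

```
# Illustrative sketch of a vanilla-universe submit file; values are made up.
universe                = vanilla
executable              = mrscriptVanilla.py
transfer_input_files    = wordcount.jar
should_transfer_files   = YES
when_to_transfer_output = ON_EXIT
requirements            = OpSys == "LINUX"
# 1 + n processes: the first becomes the jobtracker, the rest tasktrackers.
queue 6
```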
 
-Why running MapReduce job under Condor at all?
+Why run MapReduce jobs under HTCondor at all?
 
-1:   Condor has powerful match making capabilities using excellent framework based on Class-Ad mechanism.  These capabilities can be exploited to implement multiple policies for a MR cluster beyond the current capabilities of existing frameworks.
+1:   HTCondor has powerful matchmaking capabilities built on its excellent ClassAd-based framework.  These capabilities can be exploited to implement multiple policies for an MR cluster beyond the current capabilities of existing frameworks.
 
-1: MR style of computation might not be suitable for all sorts of applications or problems (e.g. the ones which are inherently sequential).  A support for multiple execution environments is needed along with different set of policies for each environment. Condor supports a wide variety of execution environment including MPI style jobs, VMWare job etc.
+1: The MR style of computation might not be suitable for all sorts of applications or problems (e.g. ones that are inherently sequential).  Support for multiple execution environments is needed, along with a different set of policies for each environment. HTCondor supports a wide variety of execution environments, including MPI-style jobs, VMware jobs, etc.
 
 1: Perhaps one of the bigger advantages is related to capacity management with a large shared MR cluster. Currently, the Hadoop MR framework has very limited support for managing users' job priorities.
 
 {subsection: Prerequisites}
 
-You need to have a distributed file system setup e.g. Hadoop distributed file system (HDFS).  Starting from version 7.5 Condor comes with a storage daemon that provides support for HDFS. More details about our HDFS daemon can be found in Condor manual (see section 3.3.23 and 3.13.2).  Apart from these python version 2.4 or above is required on all the machines, which are part of PU.
+You need to have a distributed file system set up, e.g. the Hadoop distributed file system (HDFS).  Starting with version 7.5, HTCondor comes with a storage daemon that provides support for HDFS. More details about our HDFS daemon can be found in the HTCondor manual (see sections 3.3.23 and 3.13.2).  Apart from this, Python version 2.4 or above is required on all the machines that are part of the PU.
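One way to verify the interpreter prerequisite on a given machine is a short check like the following. This is a sketch of our own; the function name is illustrative and is not part of HTCondor or mrscriptVanilla.

```python
# Hypothetical helper mirroring the prerequisite above: check that the
# local Python interpreter is at least version 2.4. The function name is
# illustrative, not part of HTCondor or mrscriptVanilla.
import sys

def python_version_ok(minimum=(2, 4)):
    """Return True if the running interpreter is at least `minimum`."""
    return tuple(sys.version_info[:2]) >= minimum

if __name__ == "__main__":
    if python_version_ok():
        print("Python prerequisite satisfied")
    else:
        print("Python 2.4 or above is required")
```

Run the check on every machine that will host a jobtracker or tasktracker.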
 
 {subsection: Submitting a Job}
 
 {subsubsection: Getting required files}
 
-We have written a handy python script that takes care of a lot of configuration steps involved in creating a job description file. It generates a specially crafted Condor job submit file for your job. It can then be submitted to Condor scheduler using the same script to get back the tracking URL of the job. This URL is where the Job-Tracker' embedded web-server is running. The information about this URL is published as a Job-Ad by mrscriptVanilla once the Tracker is setup.  Using the script you can specify: number of slots (CPUs) to be used, MR cluster parameters e.g. capacity of each Tasktracker, job jar file or a script file if you are trying to submit a set of jobs and also Condor job parameters e.g. 'requirement' attribute.
+We have written a handy Python script that takes care of many of the configuration steps involved in creating a job description file. It generates a specially crafted HTCondor job submit file for your job, which can then be submitted to the HTCondor scheduler using the same script to get back the tracking URL of the job. This URL is where the Job-Tracker's embedded web server is running. The information about this URL is published as a Job-Ad by mrscriptVanilla once the Tracker is set up.  Using the script you can specify: the number of slots (CPUs) to be used; MR cluster parameters, e.g. the capacity of each Tasktracker; the job jar file (or a script file if you are submitting a set of jobs); and HTCondor job parameters, e.g. the 'requirement' attribute.
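As a rough illustration of how such a generator maps the options above (slot count, job jar, requirements expression) onto a submit file, here is a minimal sketch. This is not the real mrscriptVanilla.py: the function and its parameters are our own invention, and only standard HTCondor submit attributes appear in the output.

```python
# Minimal sketch of a submit-file generator (NOT the real mrscriptVanilla.py;
# the function and parameter names are ours). It maps the user-specified
# options -- slot count, job jar, and an optional requirements expression --
# onto standard HTCondor submit attributes.
def make_submit_file(jar, slots, requirements=None):
    """Return the text of a vanilla-universe submit description."""
    lines = [
        "universe   = vanilla",
        "executable = mrscriptVanilla.py",
        "arguments  = %s" % jar,
        "transfer_input_files = %s" % jar,
        "should_transfer_files = YES",
        "when_to_transfer_output = ON_EXIT",
    ]
    if requirements:
        lines.append("requirements = %s" % requirements)
    # 1 jobtracker + (slots - 1) tasktrackers run as one cluster of jobs.
    lines.append("queue %d" % slots)
    return "\n".join(lines) + "\n"

print(make_submit_file("wordcount.jar", 6, 'OpSys == "LINUX"'))
```

The real script additionally publishes the Job-Tracker's tracking URL as a Job-Ad once the cluster is up, which this sketch does not attempt.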
 
-This script will soon be part of Condor distribution but for now you can just use the latest version attached with this wiki.  The attached file (mrscriptVanilla.tar.gz) contains two files:
+This script will soon be part of the HTCondor distribution, but for now you can just use the latest version attached to this wiki.  The attached file (mrscriptVanilla.tar.gz) contains two files:
 
 *: mrscriptVanilla.py - the main script that generates the submit file and is also submitted as part of the user job to set up the cluster.