{subsection: Basic Guidelines for Large HTCondor Pools}
 
1:   Upgrade to HTCondor 8 if you are still running something older; it contains many scalability improvements.  It is also a good idea to update your configuration based on the defaults that ship with HTCondor 8, since they include settings tuned for better scalability.
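You can check which version a given machine is currently running with the condor_version tool:
{code}
condor_version
{endcode}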
 
1:   Put the central manager (collector + negotiator) on a machine with sufficient memory and at least 2 CPUs/cores primarily dedicated to this service.
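A minimal sketch, assuming cm.example.com is a placeholder for the dedicated central manager host:
{code}
# On every machine in the pool, point at the dedicated central manager:
CONDOR_HOST = cm.example.com

# On the central manager itself, run only the central manager daemons:
DAEMON_LIST = MASTER, COLLECTOR, NEGOTIATOR
{endcode}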
 
1:   If convenient, take advantage of the fact that you can use multiple submit machines.  At a minimum, dedicate one machine as a submit machine with few or no other duties.
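For example, a dedicated submit machine might run only the master and the schedd (a sketch, not a complete configuration):
{code}
# On the dedicated submit machine:
DAEMON_LIST = MASTER, SCHEDD
{endcode}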
 
1:   Under UNIX, increase the number of jobs that a schedd will run simultaneously if you have enough disk bandwidth and memory (see the rough estimates in the next section).  As of HTCondor 7.4.0, the default setting for MAX_JOBS_RUNNING is a formula that scales with the amount of available memory; in prior versions, the default was just 200.
 {code}
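# Example value only; size this to your available memory and disk bandwidth (see the next section).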
 MAX_JOBS_RUNNING = 2000
 {endcode}
 
1:   Under Windows, you _might_ be able to increase the maximum number of jobs running in the schedd, but only if you also increase the desktop heap space adequately.  The problem on Windows is that each running job has an instance of condor_shadow, which consumes desktop heap space.  Typically, this heap space is exhausted with only on the order of 100 jobs running.  See {link: http://www.cs.wisc.edu/condor/manual/v7.0/7_4Condor_on.html#SECTION008413000000000000000 My submit machine cannot have more than 120 jobs running concurrently. Why?} in the FAQ.
 
1:   Put a busy schedd's spool directory on a fast disk with little else using it.  If you have an SSD, use the JOB_QUEUE_LOG config knob to put the job_queue.log file (the schedd's database) on the SSD.
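A sketch, assuming the paths below are placeholders for local fast storage:
{code}
# Spool directory on a fast, lightly used local disk:
SPOOL = /fast_disk/condor/spool

# The schedd's job queue database on an SSD:
JOB_QUEUE_LOG = /ssd/condor/spool/job_queue.log
{endcode}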
 
1:   Do not put your log files on NFS or other network storage, especially for very busy daemons like the schedd and the shadows.
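A minimal sketch, assuming the path below is a placeholder for a directory on local disk:
{code}
# Keep daemon logs (schedd, shadow, etc.) on a local disk rather than on NFS:
LOG = /var/log/condor
{endcode}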
 
1:   If you are running a lot of big standard universe jobs, set up multiple checkpoint servers rather than doing all checkpointing on the submit node.
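A rough sketch, assuming placeholder hostnames; different groups of execute machines can each be pointed at a different checkpoint server:
{code}
# On one group of execute machines:
USE_CKPT_SERVER = True
CKPT_SERVER_HOST = ckpt1.example.com

# On another group, point CKPT_SERVER_HOST at ckpt2.example.com instead.
{endcode}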
 
1:   If you are not using strong security (i.e. just host IP authorization) in your HTCondor pool, then you can turn off security negotiation to reduce overhead:
 {code}
 SEC_DEFAULT_NEGOTIATION = OPTIONAL
 {endcode}
 
1:   If you are not using condor_history (or any other means of reading the job history, including history in Quill), turn it off to reduce overhead:
 {code}
 HISTORY =
 {endcode}
 
1:   If you do not allow preemption by user priority or machine rank expression in your pool (i.e. not just preventing job killing with MaxJobRetirementTime, but completely disallowing claims from being preempted), then you can reduce overhead in the negotiator:
 {code}
 NEGOTIATOR_CONSIDER_PREEMPTION = False
 {endcode}
 
1: Some general Linux scalability tuning advice may be found {wiki: LinuxTuning here}.
 
 {subsection: Rough Estimations of System Requirements}