CPU factors are used to differentiate the relative speed of different machines. LSF runs jobs on the best possible machines so that response time is minimized.
To achieve this, it is important that you define correct CPU factors for each machine model in your cluster.
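In a typical configuration, CPU factors are defined per machine model in the HostModel section of the lsf.shared file. The excerpt below is only a sketch of that section; the model names, factor values, and architecture strings are placeholders, not values from any real cluster:

    Begin HostModel
    MODELNAME      CPUFACTOR      ARCHITECTURE     # keyword line
    SlowModel      1.0            ()               # placeholder: slowest machine model
    FastModel      2.0            ()               # placeholder: model twice as fast
    End HostModel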
Incorrect CPU factors can reduce performance in the following ways.
If the CPU factor for a host is too low, that host might not be selected for job placement when a slower host is available. This means that jobs would not always run on the fastest available host.
If the CPU factor for a host is too high, jobs are run on that host even when they would finish sooner on a slower but more lightly loaded host. This causes the faster host to be overused while the slower hosts are underused.
Both of these conditions are somewhat self-correcting. If the CPU factor for a host is too high, jobs are sent to that host until the CPU load threshold is reached. LSF then marks that host as busy, and no further jobs are sent there. If the CPU factor is too low, jobs might be sent to slower hosts. This increases the load on the slower hosts, making LSF more likely to schedule future jobs on the faster host.
CPU factors should be set based on a benchmark that reflects your workload. If there is no such benchmark, CPU factors can be set based on raw CPU power.
The CPU factor of the slowest host should be set to 1, and the CPU factors of faster hosts should be scaled in proportion; in other words, each host's factor is the slowest host's benchmark time divided by that host's benchmark time.
Consider a cluster with two hosts, hostA and hostB. hostA takes 30 seconds to run a benchmark and hostB takes 15 seconds to run the same test. The CPU factor for hostA should be 1, and the CPU factor for hostB should be 2 because hostB is twice as fast as hostA.
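Assuming hostA and hostB belong to two different machine models (ModelA and ModelB are hypothetical names used only for this illustration), the corresponding entries in the HostModel section of lsf.shared might look like the following; each host is then associated with its model elsewhere in the cluster configuration:

    Begin HostModel
    MODELNAME      CPUFACTOR      ARCHITECTURE     # keyword line
    ModelA         1.0            ()               # model of hostA: slowest host, factor 1
    ModelB         2.0            ()               # model of hostB: twice as fast, factor 2
    End HostModel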