Performance of Garey–Johnson algorithm for pipelined typed tasks systems

This paper studies the generalization of the first Garey–Johnson algorithm, which minimizes the maximum lateness on two parallel processors, to the case of unitary typed task systems with constant delays. The performance of the extended algorithm is evaluated through worst-case analysis. If all the tasks have the same type and no delays are considered, then the upper bound obtained coincides with the upper bound for the Garey–Johnson algorithm on identical processors, which is one of the best known for the maximum lateness problem.
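As background, the objective considered throughout is the maximum lateness, defined as L_max = max_j (C_j − d_j), where C_j is the completion time of task j and d_j its due date. A minimal sketch of this objective (the task data below is illustrative, not from the paper):

```python
# Maximum lateness of a schedule: L_max = max_j (C_j - d_j).
# completion_times and due_dates are illustrative placeholder values.

def max_lateness(completion_times, due_dates):
    """Return max_j (C_j - d_j) over all tasks."""
    return max(c - d for c, d in zip(completion_times, due_dates))

# Three unit tasks finishing at times 1, 2, 3 with common due date 2:
print(max_lateness([1, 2, 3], [2, 2, 2]))  # -> 1
```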