A large number of compiler transformations for optimizing programs have been implemented. Most optimizations for uniprocessors reduce the number of instructions executed by the program using transformations based on the analysis of scalar quantities and data-flow techniques. In contrast, optimizations for high-performance parallel processors maximize parallelism and memory locality with transformations that rely on tracking the properties of arrays using loop dependence analysis.
In previous work, algorithms were transformed only in special cases, so users had to apply many different algorithms to obtain a high-performance parallel program. This makes parallelizing sequential programs time-consuming and laborious.
This paper develops a parallelizing algorithm based on affine scheduling, which addresses the problem of finding closed-form schedules as affine functions of the iteration vector. Because the algorithm considers dependence relations not only within each loop but across the entire program, it runs faster and exposes more parallelism.
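To make the idea of an affine schedule concrete, the sketch below (a minimal illustration, not the paper's algorithm) shows the classic wavefront example: a 2D loop nest whose iteration (i, j) depends on (i-1, j) and (i, j-1). The affine schedule theta(i, j) = i + j assigns each iteration a logical time step; all iterations sharing the same value of theta are independent and could execute in parallel. The array name `A`, the size `N`, and the stencil computation are assumptions chosen for illustration.

```python
# Minimal sketch of affine scheduling (wavefront transformation).
# Dependences: A[i][j] reads A[i-1][j] and A[i][j-1], so iteration (i, j)
# must run after (i-1, j) and (i, j-1).
# The affine schedule theta(i, j) = i + j satisfies both dependences:
# theta(i, j) > theta(i-1, j) and theta(i, j) > theta(i, j-1).

N = 6  # illustrative problem size

def sequential():
    # Original loop nest, executed in lexicographic order.
    A = [[1.0] * N for _ in range(N)]
    for i in range(1, N):
        for j in range(1, N):
            A[i][j] = A[i - 1][j] + A[i][j - 1]
    return A

def wavefront():
    # Transformed nest: the outer loop enumerates time steps t = i + j;
    # every iteration on the same wavefront t is independent, so the
    # inner loop over i could be executed in parallel.
    A = [[1.0] * N for _ in range(N)]
    for t in range(2, 2 * N - 1):          # t = theta(i, j) = i + j
        for i in range(max(1, t - (N - 1)), min(t, N)):
            j = t - i                       # recover j from the schedule
            A[i][j] = A[i - 1][j] + A[i][j - 1]
    return A

# The reordered execution produces the same result as the original nest.
assert sequential() == wavefront()
```

The legality check is exactly the condition affine scheduling formalizes: for every dependence from iteration p to iteration q, the schedule must satisfy theta(q) > theta(p), here guaranteed because (i + j) strictly increases along both dependence directions.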