Multi-objective optimization (MOO) problems in practical settings commonly face a significant challenge: objective and constraint functions that are computationally expensive to evaluate. However efficient an optimization algorithm may be, it inherently requires evaluating numerous solutions to converge to a satisfactory result, a process that can be time-consuming and often exceeds practitioners' practical time constraints. While parallel and distributed computing has reduced wall-clock time to some extent, algorithmic efficiency remains a critical factor.
Metamodels, also known as surrogate models, are a widely adopted strategy for approximating the functional form of the exact objectives and constraints using a limited number of high-fidelity evaluations. Various methods for integrating metamodels into optimization algorithms have been documented in the literature. Nevertheless, their complexity and implementation challenges can be daunting for practitioners who are not experts in the field. A brief literature review indicates that approximately 75% of surrogate-assisted MOO studies rely on a static model, trained initially and used throughout the optimization process without further updates.
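To make the static-model workflow concrete, the following is a minimal illustrative sketch, not the method of any particular study: the entire high-fidelity budget is spent up front on a training sample, a cheap inverse-distance-weighting surrogate (standing in for Kriging, RBF, or similar models) is fitted once, and the remainder of the search queries only the surrogate.

```python
import random

def expensive_f(x):
    # Stand-in for a costly objective (here a cheap quadratic with minimum at 0.3).
    return (x - 0.3) ** 2

def train_idw(samples):
    # Inverse-distance-weighting surrogate built from (x, f(x)) pairs.
    def predict(x):
        num = den = 0.0
        for xi, yi in samples:
            d = abs(x - xi)
            if d < 1e-12:
                return yi          # exact interpolation at sample points
            w = 1.0 / d ** 2
            num += w * yi
            den += w
        return num / den
    return predict

random.seed(0)
budget = 10                        # high-fidelity budget, all spent up front
data = [(x, expensive_f(x)) for x in (random.random() for _ in range(budget))]
surrogate = train_idw(data)        # trained once, never updated

# Optimize the surrogate alone (grid search stands in for an EA).
grid = [i / 1000 for i in range(1001)]
x_best = min(grid, key=surrogate)
```

The weakness this sketch exposes is the one the static approach inherits: the surrogate is only as good as the initial sample, since no search feedback ever refines it.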
This study aims to propose straightforward guidelines or formulae that practitioners can employ to distribute their high-fidelity evaluation budgets effectively, thereby improving the accuracy of the optimization results. To this end, we define a bi-level multi-objective optimization algorithm that determines key evolution control parameters: the update interval, the number of infill solutions, the number of generations for the inner problem, and the initial training population size. Benchmark functions from the ZDT and DTLZ suites are employed to validate the study.
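A minimal sketch of how the four evolution control parameters interact in a surrogate-assisted loop may help fix ideas. The parameter names (`n_init`, `update_interval`, `n_infill`, `inner_gens`) and all implementation choices here are illustrative assumptions, not the paper's notation or method; a nearest-neighbour model stands in for the surrogate and random sampling stands in for the inner evolutionary search.

```python
import random

def expensive_f(x):
    # Stand-in for a costly objective (minimum at 0.7).
    return (x - 0.7) ** 2

def fit_surrogate(data):
    # Nearest-neighbour predictor: a cheap stand-in for Kriging/RBF models.
    def predict(x):
        return min(data, key=lambda p: abs(x - p[0]))[1]
    return predict

def surrogate_assisted_search(budget=20, n_init=8, update_interval=3,
                              n_infill=2, inner_gens=50, seed=1):
    rng = random.Random(seed)
    # Initial training population: part of the budget is spent up front.
    archive = [(x, expensive_f(x)) for x in (rng.random() for _ in range(n_init))]
    spent = n_init
    gen = 0
    while spent + n_infill <= budget:
        gen += 1
        if gen % update_interval:      # evolution control: update only every k-th generation
            continue
        model = fit_surrogate(archive)
        # Inner problem: optimize the surrogate (random search stands in
        # for an EA run for `inner_gens` generations).
        candidates = sorted((rng.random() for _ in range(inner_gens)), key=model)
        for x in candidates[:n_infill]:    # infill: re-evaluate best candidates exactly
            archive.append((x, expensive_f(x)))
            spent += 1
    return min(archive, key=lambda p: p[1])

x_best, f_best = surrogate_assisted_search()
```

The budget trade-off the study targets is visible directly in the signature: a larger `n_init` improves the first surrogate but leaves fewer evaluations for infill, while `update_interval` and `n_infill` control how that remainder is spread over model updates.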