Software testing is invaluable in ensuring reliability, but it is also difficult and expensive, and inadequate testing carries serious consequences. Automation is critical to controlling costs and focusing developer effort. A promising avenue of automation is search-based test generation - the framing of tasks such as input selection as optimization problems. Effective search-based generation relies on selecting the right strategies to guide the search. Current approaches to search-based generation fail to match the effectiveness of human developers. This is because the strategies employed are largely based on universal "structural coverage criteria" that dictate only that code elements must be executed, while placing few constraints on how they are executed.
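To make the optimization framing concrete, the sketch below shows one common instantiation: selecting a test input by minimizing a "branch distance" fitness function with a simple hill-climbing search. The function under test and the target branch are hypothetical illustrations, not drawn from the proposal itself; real search-based tools typically use richer fitness functions and metaheuristics such as genetic algorithms.

```python
import random

def function_under_test(x: int) -> str:
    # Hypothetical system under test with a hard-to-reach branch.
    return "rare" if x == 100 else "common"

def branch_distance(x: int) -> int:
    # Fitness: how far input x is from satisfying the target branch
    # condition `x == 100`. Zero means the branch is covered.
    return abs(x - 100)

def hill_climb(seed: int, max_steps: int = 10_000) -> int:
    """Search for an input covering the target branch by repeatedly
    moving to the neighboring input with lower branch distance."""
    random.seed(seed)
    candidate = random.randint(-1_000, 1_000)
    for _ in range(max_steps):
        if branch_distance(candidate) == 0:
            break  # target branch reached
        # Consider immediate neighbors; keep the fitter one.
        best = min((candidate - 1, candidate + 1), key=branch_distance)
        if branch_distance(best) < branch_distance(candidate):
            candidate = best
    return candidate

input_value = hill_climb(seed=1)
assert function_under_test(input_value) == "rare"
```

Because the fitness landscape here is smooth, plain hill climbing suffices; the point is that "cover this branch" becomes a numeric objective a search algorithm can optimize, which is exactly the framing that coverage-driven strategies build on.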
Developers are driven by context - the domain, the requirements, and past experience. Effective automated test generation requires this context to control "how" code is executed. The next generation of test generation tools must be multi-objective, incorporate domain-specific information, and adapt to the system under test. We propose two families of context-infused strategies and a new self-adaptive generation approach that can customize its selection of strategies. This project will be conducted in iterative cycles of literature review, idea conception, prototyping, and evaluation. When complete, it should yield "human-like" - and human-competitive - results.