Security Guarantees for Automated Software Testing

It is well-known that testing can show only the presence of bugs, never their absence. Unlike verification, testing provides no formal guarantees about the correctness of a program or the absence of vulnerabilities. This leaves practitioners and security researchers to make unfounded judgement calls: When is it safe to stop the fuzzer at a reasonable residual risk? What assurances does a fuzzing campaign that exposes no bugs actually provide? How much longer should the fuzzer run to achieve an acceptable residual risk?

The objective of this ARC-funded PhD position is to build the first scientific framework to provide such answers with quantifiable accuracy. The candidate will explore and extend various probabilistic and statistical frameworks. Practitioners should be able to leverage a rich statistical toolset to assess residual risk, to obtain statistical guarantees, and to analyze the cost-benefit trade-off of ongoing fuzzing campaigns. As a starting point, the perspective of software testing as species discovery (STADS) provides access to a substantial biostatistical framework from ecology to tackle this fundamental challenge. A recent vision statement identifies a large number of concrete opportunities for future research.
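To make the idea concrete: in the STADS perspective, each fuzzer-generated input is assigned to a "species" (for example, the program path it exercises), and the residual risk of a campaign can be bounded by a Good-Turing-style estimate: the fraction of inputs whose species was observed exactly once. The sketch below is illustrative only, with hypothetical species labels; the actual estimators studied in this project may differ.

```python
from collections import Counter

def residual_risk(species_of_input):
    """Good-Turing-style estimate of the probability that the next
    input discovers a previously unseen species: singletons / total."""
    counts = Counter(species_of_input)
    n = len(species_of_input)
    # Species observed exactly once ("singletons") in the campaign.
    f1 = sum(1 for c in counts.values() if c == 1)
    return f1 / n if n else 1.0

# Hypothetical campaign: 10 inputs, species "D" and "E" seen once each.
campaign = ["A", "A", "A", "B", "B", "C", "C", "C", "D", "E"]
print(residual_risk(campaign))  # 2 singletons / 10 inputs = 0.2
```

As the campaign continues without discovering new species, the singleton count shrinks relative to the number of inputs, so the estimated residual risk decreases, which is one way to ground a principled stopping decision.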

The applicant should have:

You can find more information here:

Assurances in Software Testing: A Roadmap, a vision statement
Software Testing as Species Discovery, a statistical framework for software testing
Monash: How to apply

The PhD student is fully funded by an ARC grant (DECRA) throughout her or his candidature. For international students, the Faculty will also cover the tuition fees and Overseas Student Health Cover (OSHC). The Faculty of IT will support attendance at conferences.

People

Aldeida Aleti
Senior Lecturer
Thuan Pham
Research Fellow