The Head-Related Transfer Function (HRTF) represents a unique and perceptually relevant encoding of the acoustical interactions of a sound field with a listener. If properly acquired, HRTFs can be used to synthesize individualized sound scenes that are indistinguishable from their real counterparts. Accurately predicting the HRTFs of an individual could therefore open the door to a range of potentially disruptive technologies: virtual reality, augmented reality, and enhanced perception, to name a few.
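To fix notation, one common frequency-domain definition of the HRTF (stated here as background, not taken from this abstract) is the ratio of the sound pressure at the entrance of the left or right ear canal to the free-field pressure that would exist at the head-centre position with the listener absent:

\[
H_{L,R}(f,\theta,\varphi) \;=\; \frac{P_{L,R}(f,\theta,\varphi)}{P_{0}(f)},
\]

where \((\theta,\varphi)\) denotes the source direction, \(P_{L,R}\) the pressure at the left or right ear, and \(P_{0}\) the free-field reference pressure.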
One major obstacle to such applications gaining traction is that HRTFs are highly individualized and very difficult to acquire accurately. Wave-based predictions of HRTFs are often seen as a silver bullet for this problem: they can be generated in a relatively short time and rest on well-established, accurate physical models. However, such computed predictions remain only partly convincing, since they are rarely reported to match measured HRTFs satisfactorily.
This thesis investigates the quality and reliability of wave-based HRTF simulations, focusing on the finite-difference time-domain (FDTD) method. The difficulties and caveats encountered in assessing the credibility of a model are presented and exemplified for the HRTF problem. Limitations in the boundary modeling inherent to the FDTD method are studied. HRTF verification and validation studies are conducted both in the far field and in the near field.
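As a point of reference, the core of the FDTD method can be sketched with a minimal one-dimensional leapfrog update of the scalar wave equation; this is an illustrative sketch with assumed parameter values, not the scheme used in the thesis. The handling of the grid boundaries, treated trivially below, is exactly where the limitations discussed above arise.

```python
import numpy as np

c = 343.0              # speed of sound in air [m/s]
dx = 1.0e-3            # spatial step [m] (illustrative value)
dt = 0.9 * dx / c      # time step satisfying the 1-D Courant condition c*dt/dx <= 1
n_x = 400              # number of grid points
n_t = 300              # number of time steps
lam2 = (c * dt / dx) ** 2   # squared Courant number

p_prev = np.zeros(n_x)      # pressure field at time step k-1
p_curr = np.zeros(n_x)      # pressure field at time step k
p_curr[n_x // 2] = 1.0      # impulsive excitation at the centre of the domain

for _ in range(n_t):
    p_next = np.zeros(n_x)
    # Second-order leapfrog update of the interior nodes
    p_next[1:-1] = (2.0 * p_curr[1:-1] - p_prev[1:-1]
                    + lam2 * (p_curr[2:] - 2.0 * p_curr[1:-1] + p_curr[:-2]))
    # Endpoints are left at p = 0 (idealized pressure-release boundaries);
    # realistic impedance boundaries are far harder to model and motivate
    # the boundary-modeling study described in the text.
    p_prev, p_curr = p_curr, p_next
```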
The results generally reveal the complexity of obtaining accurate HRTFs, whether through measurements or simulations. They point to the need for a comprehensive assessment of both numerical and measurement errors, given the strong sensitivity of HRTF features to an individual's orientation and morphology. Consequently, this research advocates improved and more rigorous treatment of HRTF acquisition in order to achieve successful synthesis of more complex auditory scenes.