We develop novel design methodologies for both individual processors and whole computing systems.
Research topics
High-level GPU Programming
To exploit the performance potential of massively parallel accelerators such as GPUs, expert programmers tune their code in low-level languages like OpenCL or CUDA C for the parameters and features of specific GPU platforms and models. This hampers the use of those accelerators by non-GPU experts, as is needed in domains such as artificial intelligence, machine vision, and big data. We focus on the high-level programming language Julia. We build on its metaprogramming, just-in-time compilation, and strong type inference capabilities, and we redesign, extend, and open up compiler interfaces so that the existing Julia compiler infrastructure and code can be reused as much as possible while still mapping compute kernels onto efficient GPU code. The result is code that is as efficient as CUDA C code, can be written with an order of magnitude less effort, and does not need to be retuned for specific GPU models.
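As an illustration of this style of programming, the sketch below shows an element-wise vector addition written entirely in Julia, using the CUDA.jl package that grew out of this line of work. The kernel is ordinary Julia code; the `@cuda` macro and the just-in-time compiler specialize and compile it for the GPU at the call site. The kernel name `vadd!` and the launch configuration are illustrative choices, not prescribed values.

```julia
using CUDA

# An element-wise addition kernel written in plain Julia.
# CUDA.jl JIT-compiles this function to native GPU code when launched.
function vadd!(c, a, b)
    # Compute this thread's global index (Julia arrays are 1-based).
    i = (blockIdx().x - 1) * blockDim().x + threadIdx().x
    if i <= length(c)
        @inbounds c[i] = a[i] + b[i]
    end
    return nothing
end

# Allocate arrays on the GPU and launch the kernel.
a = CUDA.rand(Float32, 1024)
b = CUDA.rand(Float32, 1024)
c = similar(a)
@cuda threads=256 blocks=4 vadd!(c, a, b)
```

Because the kernel is generic Julia code, the same source also works for other element types or array sizes; the compiler specializes it per call, which is what removes the need to hand-retune kernels for specific GPU models.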