Designing and developing the software systems that enable us to run machine learning (ML) inference workloads at state-of-the-art latency and efficiency on FPGA-based hardware accelerators. Working throughout the stack to define how we program AI accelerators: from co-designing the instruction sets with the hardware engineering team to developing the compilers and application APIs that interface the accelerators with ML models and frameworks.

Responsibilities:
- Programming our bespoke hardware accelerators by writing compilers and DSLs
- Creating tools for debugging, profiling, and optimising programs for our accelerators
- Developing efficient applications and runtime libraries for server CPUs that utilise our accelerators

Key Requirements:
- PhD or MSc in a related field
- Experience using Rust in production or open-source codebases
- Experience with low-level programming languages in general (e.g. Rust, C, C++) or functional programming languages (e.g. Haskell, OCaml, Nix)
- 3 years of experience in relevant areas such as performance-sensitive or systems programming, and compiler development

Please get in touch with daniel@microtech-global.com to hear more about this incredible position.