Memory Persistency: Future computing systems are expected to place persistent memories alongside DRAM on the memory bus. This work aims to develop new processor architectures and programming interfaces that fully exploit the benefits of persistent memories. It comprises the following projects:
– Precise notations for memory persistency models (NVMW ’15, pdf)
– High-performance transaction systems to safely update persistent memory (ASPLOS ’16, pdf)
– Techniques to reduce logging overheads in systems with persistent memory (ASPLOS ’16, pdf)
– Novel architectures to support memory persistency models (MICRO ’16, pdf)
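One way to picture the safe-update problem these projects address is undo logging with explicit persist ordering: the old value must reach persistent memory before the in-place update does, so a crash mid-transaction can always roll back. The sketch below is illustrative only; the `persist` stub stands in for real ordering primitives (e.g. a cache-line write-back followed by a fence) and is not the mechanism proposed in the papers above.

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical persist barrier: on real hardware this would be a
   cache-line write-back plus a store fence; here it is a no-op stub. */
static void persist(const void *addr, size_t len) { (void)addr; (void)len; }

/* Undo-log entry: the old value, saved before an in-place update. */
struct undo_entry { long *addr; long old_val; };

struct tx {
    struct undo_entry log[16];
    int n;
};

/* Persist the log entry BEFORE mutating the data in place, so the
   old value is always recoverable after a crash. */
static void tx_write(struct tx *t, long *addr, long val) {
    t->log[t->n].addr = addr;
    t->log[t->n].old_val = *addr;
    persist(&t->log[t->n], sizeof t->log[t->n]);
    t->n++;
    *addr = val;
    persist(addr, sizeof *addr);
}

/* Roll back logged writes in reverse order. */
static void tx_abort(struct tx *t) {
    while (t->n > 0) {
        t->n--;
        *t->log[t->n].addr = t->log[t->n].old_val;
        persist(t->log[t->n].addr, sizeof *t->log[t->n].addr);
    }
}
```

The per-write log traffic and fences in this pattern are exactly the overheads the transaction-system and logging projects above aim to reduce.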
Hardware Acceleration: Rapidly processing text data is critical for many technical and business applications. This work develops a custom hardware accelerator, HARE, that eliminates most of the overheads of traditional text-processing software and processes text at memory-bandwidth speeds. It also demonstrates a scaled-down FPGA proof of concept (MICRO ’16, pdf).
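The software baseline such an accelerator targets can be sketched as a byte-at-a-time finite-automaton scan. The tiny DFA below (recognizing the substring "ab") is purely illustrative and not HARE's actual design; it shows the one-dependent-lookup-per-byte serialization that text-processing hardware can remove by evaluating many transitions in parallel.

```c
#include <assert.h>

/* Minimal DFA that recognizes the substring "ab" anywhere in the input.
   States: 0 = start, 1 = saw 'a', 2 = accept (sticky).
   Real text-processing software walks a much larger transition table
   in the same way: one state update per input byte. */
static int dfa_match(const char *text) {
    int state = 0;
    for (const char *p = text; *p && state != 2; p++) {
        switch (state) {
        case 0: state = (*p == 'a') ? 1 : 0; break;
        case 1: state = (*p == 'b') ? 2 : (*p == 'a') ? 1 : 0; break;
        }
    }
    return state == 2;
}
```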
Instruction Prefetching: L1 instruction cache misses are a critical performance bottleneck for server applications, and prefetching helps mitigate the resulting fetch delays. This work simplifies accurate instruction prefetching and reduces its hardware and energy overheads by exploiting the relationship among instruction misses, program context, and the return address stack (MICRO ’13, pdf).
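A toy model of the underlying idea: fold the return address stack into a compact signature of the current program context, and remember which instruction-cache miss followed that signature so it can be prefetched next time. The table size, hash, and single-address prediction below are illustrative assumptions, not the paper's actual mechanism.

```c
#include <assert.h>
#include <stddef.h>

#define TABLE_SIZE 256u

/* Predicted miss address per context signature; 0 means "no prediction". */
static unsigned long pred_table[TABLE_SIZE];

/* Fold the top return-address-stack entries into one table index. */
static unsigned long ras_signature(const unsigned long *ras, size_t depth) {
    unsigned long sig = 0;
    for (size_t i = 0; i < depth; i++)
        sig = sig * 31 + ras[i];
    return sig % TABLE_SIZE;
}

/* On an I-cache miss, associate the miss address with the current context. */
static void train(const unsigned long *ras, size_t depth,
                  unsigned long miss_addr) {
    pred_table[ras_signature(ras, depth)] = miss_addr;
}

/* The same call context tends to produce the same miss, so the
   prediction can be prefetched ahead of the fetch that needs it. */
static unsigned long predict(const unsigned long *ras, size_t depth) {
    return pred_table[ras_signature(ras, depth)];
}
```

The appeal of keying on the return address stack is that the processor already maintains it, so the context signature comes nearly for free.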
Simulator development and workload characterization: This work analyzes the memory-system requirements of various server workloads using the gem5 architectural simulator, and it aided the development of an event-based DRAM memory controller in gem5 (ISPASS ’14, pdf).
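"Event-based" here means the model advances time by jumping between scheduled events rather than ticking every cycle. The minimal loop below illustrates that style under made-up numbers; it is a sketch of the general technique, not gem5's controller.

```c
#include <assert.h>

#define ACCESS_LATENCY 30UL   /* hypothetical ticks per DRAM access */

/* One scheduled event: when it fires, and whether it is outstanding. */
struct event { unsigned long when; int pending; };

static unsigned long now;     /* simulated time, in ticks */
static struct event resp;

/* Schedule the response event for an incoming memory request. */
static void schedule_response(void) {
    resp.when = now + ACCESS_LATENCY;
    resp.pending = 1;
}

/* Advance time directly to the next event and fire it, skipping all
   idle ticks -- the key efficiency of event-driven simulation. */
static unsigned long run_until_idle(void) {
    while (resp.pending) {
        now = resp.when;
        resp.pending = 0;     /* "deliver" the response */
    }
    return now;
}
```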