I obtained my PhD degree from the
Dependable Systems Group at the Max Planck Institute for Software Systems (MPI-SWS) and Saarland University. My advisor was Professor Rodrigo Rodrigues, and I also worked closely with Allen Clement. In Spring 2016, I did an internship in the Systems group at Microsoft Research Asia (Beijing) with Zhenyu Guo and Lidong Zhou. From Summer 2012 to Spring 2015, I was a visiting student at CITI / Universidade Nova de Lisboa and IST. In the summer of 2013, I did an internship at Microsoft Research Cambridge with Flavio Junqueira. Before coming to MPI-SWS, I received my B.S. in Computer Science in 2009 from Nankai University (in China). In Summer 2008, I joined a research exchange program at UCLA with Prof. Mario Gerla. Here are my resume and my Google Scholar profile.
My research interests are in distributed systems, operating systems, and computer networking. Specifically, I focus on the dependability of cloud computing, data consistency and replication, testing and debugging concurrent programs, and big data analytics and management.
Currently, I am leading a project that aims to allow developers to design and implement geo-replicated systems that are both fast and consistent. In this project, we first proposed a novel consistency model, RedBlue consistency, in which some operations require strong consistency semantics while others can tolerate weak consistency semantics. As a result, a geo-replicated system can be fast when many weakly consistent operations execute optimistically at different sites, without coordinating with concurrent operations, and it remains correct, neither violating application-specific invariants nor diverging replica states, if and only if all strongly consistent operations are executed in a serialized order. Based on this consistency model, we built a distributed storage infrastructure to demonstrate the performance benefits.
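To make the strong/weak split concrete, here is a minimal illustrative sketch in Java (not the actual API of our system): a bank account whose deposits commute, so they can run as weakly consistent ("blue") operations at any site, while withdrawals must preserve the invariant that the balance never goes negative, so they run as strongly consistent ("red") operations in a serialized order.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical example object classified in the RedBlue style.
class RedBlueAccount {
    private int balance = 0;
    private final List<String> log = new ArrayList<>();

    // Blue operation: deposits commute with each other, so each site can
    // apply them optimistically without coordinating with other replicas.
    void deposit(int amount) {
        balance += amount;
        log.add("blue deposit " + amount);
    }

    // Red operation: must check the invariant balance >= 0, so all
    // replicas must agree on a single serial order for withdrawals
    // (modeled here, very loosely, by a synchronized method).
    synchronized boolean withdraw(int amount) {
        if (balance - amount < 0) {
            return false; // rejected: would violate the invariant
        }
        balance -= amount;
        log.add("red withdraw " + amount);
        return true;
    }

    int balance() { return balance; }
}
```

Because deposits commute, two sites applying `deposit(50)` and `deposit(30)` in either order converge to the same balance of 80, while a `withdraw(100)` is refused at that point since it would drive the balance negative.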
Even with the precise classification principles identified by RedBlue consistency, however, the burden placed on the programmer of deciding which consistency level to use for each operation is substantial. Additionally, transforming legacy code to take advantage of eventual consistency requires a significant amount of work. To free Java programmers from these two challenging tasks, we designed and implemented a tool called SIEVE. It allows applications to automatically extract good performance from weak consistency when possible, while resorting to strong consistency whenever required by the target semantics. Taking as input a set of application-specific invariants and small annotations about merge semantics, SIEVE performs a combination of static and dynamic analysis, both offline and at runtime.
During the first one and a half years of my PhD, I worked with Pedro Fonseca and Rodrigo Rodrigues on improving the reliability of parallel applications in the presence of concurrency bugs. In particular, we explored how to use the spare capacity available in multi-core processors to make programs more robust against such bugs.
Professional activities (recent)
- ACM Transactions on Storage reviewer
- IEEE/ACM Transactions on Networking reviewer
- APSys 2017 PC member
- ICAC 2017 reviewer
- DSN 2017 reviewer
It has been my great honor to work with the following brilliant computer scientists: Allen Clement, Pedro Fonseca, João Leitão, Johannes Gehrke, Daniel Porto, Nuno Preguiça, and Viktor Vafeiadis.