Welcome to the SPIN Research Group!

The Sustainable, Programmable and INtelligent (SPIN) Computing Systems Research Group is led by Dr. Christina Giannoula, a faculty member at the Max Planck Institute for Software Systems (MPI-SWS).

Our research group approaches systems design from a fine-grained, application-driven perspective. We conduct research in computer architecture, computer systems, high-performance computing, and hardware/software co-design, empowering cutting-edge applications to deliver lasting, meaningful benefits for human life. Our research aims to create intelligent computing systems that adapt to each application by embedding its unique characteristics into runtime system behavior, addressing fundamental challenges in performance, scalability, energy efficiency, sustainability, and programmability. The vision we strive to realize is adaptive application-aware computing: all software, system, and hardware components dynamically adapt ("SPIN") in real time to the unique characteristics of each application.

Our group fosters a collaborative, rewarding, and creativity-driven environment. We have active partnerships with internationally recognized researchers from both academia and industry. Our academic collaborators include groups from leading institutions (e.g., ETH Zurich, University of Toronto, University of Maryland, Barcelona Supercomputing Center and NTU Athens), and our industry partners come from major technology companies. We deeply value broad collaboration between academia and industry as a means to drive meaningful impact.

We are always open to new collaborations from academia and industry—feel free to reach out!

Join Our Group

If you are interested in joining our research group, please complete this form.
• Prospective PhD Students:
We are actively recruiting PhD students! You may also submit an application to the MPI for Software Systems Graduate Program.
• Prospective Master's Thesis Students:
We are currently accepting Master's thesis students for supervision!
• Research Interns:
We are actively recruiting research interns! You may also submit an application to the MPI for Software Systems Research Intern Program.
• Prospective PostDocs:
Please feel free to contact Dr. Giannoula directly via email and complete this form to discuss potential opportunities.

Research Areas

We are broadly interested in efficient computing system design, working across multiple layers of the computing stack—including algorithms, compilers, runtime systems, programming frameworks, and hardware design. Our research efforts center on advancing cutting-edge applications in areas like data analytics, databases, Deep Learning (DL), generative Artificial Intelligence (AI), and physical AI, and improving key goals of performance, scalability, energy efficiency, sustainability, and programmability.

Our research interests include, but are not limited to, the following topics.

Data-Centric Computing Stacks for Memory-Centric Hardware
In the current era of Big Data, cutting-edge applications are processing increasingly large volumes of data. When executed on contemporary systems and architectures, such applications incur substantial performance and energy overheads, primarily due to the high cost of data movement between memory and processors. Emerging memory-centric technologies, such as Processing-Near-Memory (PNM) and Memory Disaggregation (e.g., CXL), aim to overcome this fundamental data movement bottleneck. PNM integrates low-power processing units directly within memory devices, enabling data to be processed where it is stored, and thereby reducing costly data transfers. Memory disaggregation, enabled by advanced interconnects such as CXL, allows compute nodes to access large pools of remote memory resources at relatively low access costs, addressing the growing demand for memory capacity in data-intensive workloads.

While many research efforts focus on designing application-specific accelerators that exploit PNM and CXL, we believe that widespread adoption of these memory-centric technologies will only occur when they are seamlessly integrated into general-purpose computing stacks. Crucially, these stacks must not only effectively support data-centric computing, but also preserve established programming models and traditional application development paradigms. Our research aims to comprehensively address implementation and integration challenges of memory-centric technologies by rethinking and redesigning general-purpose computing stacks—from algorithms and programming libraries to compilers and runtime frameworks. The overarching vision is to transform these stacks into truly data-centric systems, thereby unlocking unprecedented improvements in performance and energy efficiency for next-generation Big Data applications.
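To make the data-movement argument above concrete, here is a back-of-envelope energy model, a minimal sketch with purely illustrative per-byte and per-operation energy constants (not measurements of any specific system): summing an array on a host CPU pays an off-chip transfer cost for every element, whereas a near-memory unit processes data where it resides and ships only the final result across the bus.

```python
# Back-of-envelope model of the data-movement bottleneck. All constants are
# illustrative assumptions, chosen only to reflect the qualitative gap between
# off-chip transfer energy and arithmetic energy.

DRAM_TRANSFER_PJ_PER_BYTE = 20.0   # assumed off-chip transfer energy (pJ/byte)
CPU_ADD_PJ_PER_OP = 1.0            # assumed energy of one add on the host (pJ)
PNM_ADD_PJ_PER_OP = 2.0            # assumed near-memory add energy (pJ);
                                   # simpler, slower cores may cost more per op

def sum_energy_pj(n_elems: int, elem_bytes: int = 8) -> dict:
    """Energy to sum an n-element array: host CPU vs. processing-near-memory."""
    # Host CPU: every element crosses the memory bus before being added.
    cpu = n_elems * (elem_bytes * DRAM_TRANSFER_PJ_PER_BYTE + CPU_ADD_PJ_PER_OP)
    # PNM: adds happen inside the memory device; only the scalar result moves.
    pnm = n_elems * PNM_ADD_PJ_PER_OP + elem_bytes * DRAM_TRANSFER_PJ_PER_BYTE
    return {"cpu_pj": cpu, "pnm_pj": pnm, "saving": cpu / pnm}

print(sum_energy_pj(1_000_000))
```

Even under these rough assumptions, the transfer term dominates the host-side cost, which is why reducing data movement, rather than accelerating compute, is the central lever.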

Context-Aware System Design for Generative and Physical AI
Generative AI models—such as Large Language Models (LLMs), diffusion models, and neural rendering models (e.g., 3D Gaussians, NeRFs)—as well as physical AI models (e.g., robotics transformers, point cloud networks, reinforcement learning-based control models) are pushing the limits of modern computing infrastructures in both compute capabilities and memory capacity. Current AI systems, however, often treat these AI models as generic computational kernels (e.g., matrix multiplications, key-value lookups, filtering operations), and largely overlook the rich contextual and semantic characteristics these models capture from, or interact with, the real world.

For instance, generative AI models inherently operate with contextual signals derived from user input, background knowledge, preferences, and/or conversation history. Likewise, physical AI models rely on semantic and environmental context, real-world dynamics, and the technological constraints of the embodied hardware itself, such as sensors, motors, and robotic platforms. These context-dependent characteristics, which fundamentally shape how such models reason and act, are significantly overlooked in today's AI systems design. Runtime systems, compilers, serving frameworks, and resource managers fail to exploit this context-dependent information, leaving significant opportunities for optimization untapped.

Our research aims to establish context-aware optimizations in the design of AI systems as a new computing paradigm for enhancing the efficiency and enabling new capabilities of generative and physical AI. We will explore integrating context signals—from user input, environment, and device constraints—into core AI system components (including algorithms, runtime systems, schedulers, compilers, serving frameworks, and hardware engines) to drive high-impact AI applications for society.
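As one toy illustration of this paradigm, the sketch below shows a hypothetical LLM serving scheduler that exploits a contextual signal, the length of each request's conversation history, which determines its KV-cache memory footprint. All names, constants, and thresholds here are invented for illustration; this is not a real serving framework.

```python
# Toy context-aware batch admission for LLM serving (all values hypothetical):
# a request's conversation-history length is a context signal that determines
# its KV-cache footprint, which a scheduler can exploit when forming batches.

from dataclasses import dataclass

KV_BYTES_PER_TOKEN = 160_000      # assumed KV-cache cost per history token
GPU_KV_BUDGET = 16 * 1024**3      # assumed 16 GiB reserved for KV cache

@dataclass
class Request:
    req_id: int
    history_tokens: int           # context signal: prior conversation length

    @property
    def kv_bytes(self) -> int:
        return self.history_tokens * KV_BYTES_PER_TOKEN

def admit_batch(queue: list[Request]) -> list[Request]:
    """Greedily admit requests, shortest history first, until the KV-cache
    budget is exhausted: short-context requests are cheap to co-schedule."""
    batch, used = [], 0
    for req in sorted(queue, key=lambda r: r.history_tokens):
        if used + req.kv_bytes <= GPU_KV_BUDGET:
            batch.append(req)
            used += req.kv_bytes
    return batch

queue = [Request(1, 120_000), Request(2, 2_000), Request(3, 8_000)]
print([r.req_id for r in admit_batch(queue)])  # the long-history request waits
```

A context-oblivious scheduler would treat all three requests as interchangeable kernels; the context-aware one packs the two short-history requests together and defers the memory-heavy one, which is the kind of optimization opportunity this research direction targets at every layer of the stack.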

Application-Driven Sustainable Computing
Recent studies indicate that information and communication technologies account for approximately 3% of global greenhouse gas emissions, a figure projected to rise to 8% within the next decade. A key driver of this increase is the rapidly growing demand for generative AI models, which rely on power-hungry GPUs to meet stringent performance requirements. In response, leading cloud providers—including Amazon Web Services (AWS), Microsoft Azure, and Google Cloud Platform (GCP)—have begun deploying carbon dashboards and related tools that provide users with individualized carbon footprint estimates.

We strongly believe that both the systems and AI research communities must prioritize carbon-aware design to mitigate the environmental impact of rapidly expanding computing demands. Moving forward, our research aims to investigate application-driven carbon footprint reduction strategies in systems design, with the overarching goal of enabling green and sustainable computing solutions. Our vision is to create software and hardware components that leverage application characteristics to minimize carbon emissions, while achieving negligible or no impact on performance.

As an initial step, we will assess, measure, and characterize how application execution impacts energy consumption and carbon emissions in modern computing systems, in order to identify key sources of inefficiency. By systematically analyzing the trade-offs between performance and carbon costs across different microarchitectural components (e.g., caches, prefetchers, processor frequency scaling), while also incorporating application-specific metrics, we aim to develop both practical recommendations and innovative green system designs that extend hardware lifespans and enhance carbon efficiency.
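A first-order model of the quantities involved in such a characterization can be sketched as follows. It splits a job's footprint into operational carbon (energy consumed times grid carbon intensity) and embodied carbon amortized over the hardware's service life; the constants are illustrative assumptions, not measurements. The example also shows why extending hardware lifespans matters: it directly shrinks the amortized embodied share.

```python
# First-order carbon model (illustrative constants): total carbon of a job =
# operational carbon (energy x grid intensity) + embodied carbon amortized
# over the hardware's service life.

GRID_G_CO2_PER_KWH = 400.0        # assumed grid carbon intensity (gCO2/kWh)
SERVER_EMBODIED_KG_CO2 = 1500.0   # assumed embodied carbon of one server

def job_carbon_g(energy_kwh: float, runtime_h: float,
                 lifetime_years: float) -> dict:
    """Grams of CO2 attributable to one job on one server."""
    operational = energy_kwh * GRID_G_CO2_PER_KWH
    lifetime_h = lifetime_years * 365 * 24
    # The job is charged its time-share of the server's embodied carbon.
    embodied = SERVER_EMBODIED_KG_CO2 * 1000 * (runtime_h / lifetime_h)
    return {"operational_g": operational,
            "embodied_g": embodied,
            "total_g": operational + embodied}

# Same 10-hour, 5 kWh job on servers kept in service for 4 vs. 8 years:
print(job_carbon_g(5.0, 10.0, 4.0))
print(job_carbon_g(5.0, 10.0, 8.0))
```

Doubling the service life halves the embodied charge per job while leaving operational carbon untouched, which is precisely the trade-off space, performance versus carbon cost across hardware components and lifetimes, that the characterization effort above sets out to map.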