DC4AI 2026
The International Workshop on Data Compression for AI and Big Data Applications
 
September 28 - October 1, 2026
Singapore

In cooperation with ACM

Held in conjunction with ICPP 2026: The 55th International Conference on Parallel Processing

Topics

Large Language Models (LLMs) spanning language, vision, audio, and other modalities are rapidly transforming the AI landscape, enabling a wide range of downstream applications. As demand for more capable models continues to rise, both model scale and training data volume have expanded substantially. Training, fine-tuning, and serving such models increasingly rely on large-scale high-performance computing (HPC) systems and remain highly resource- and time-intensive.

Data compression has emerged as a promising means of mitigating communication and data-movement overhead in distributed and parallel environments for modern AI and big-data workloads. Because data movement across the Internet, inter-node networks, and system interconnects has become a major determinant of both runtime and energy consumption, efficient mechanisms for data transfer and analysis are increasingly critical.

This workshop addresses key research challenges in reducing data-movement and communication costs for large-scale AI and big-data applications, including model training, fine-tuning, inference, and emerging LLM-based agent and multi-agent systems.

Topics of interest include but are not limited to:

• Data Compression Methods

  ° Compression Techniques for Structured and Unstructured Scientific Data

  ° Image, Video, and Multimedia Data Compression

  ° Time-series Data Compression

  ° Textual Data Compression (Natural Language, Logs)

  ° Quantization and Data Reduction

  ° Predictive Coding and Transform-based Compression

  ° Dictionary-based and Entropy-based Compression

  ° Tensor Decomposition and Low-rank Approximations

  ° Compression-aware Data Mining and Machine Learning

  ° Compression for Accelerating Data Analytics

• Applying Data Compression in AI-Related Applications and Systems

  ° Large-Scale AI Model Training

  ° Large-Scale AI Model Fine-Tuning

  ° Large-Scale AI Inference/Serving

  ° LLM-Based Agent and Multi-Agent System Design

  ° Data Compression for Communication Reduction

  ° Data Compression to Reduce Memory and Storage Overhead

• Hardware Co-Design for Applying Data Compression in Emerging AI Applications, Big Data Applications, and Quantum Computing

  ° GPUs

  ° FPGAs

  ° Quantum Computing Platforms

  ° CXL: Compute Express Link

  ° PIM: Processing in Memory

  ° RISC-V

  ° ARM

Submission

Important Dates

Selection of program committee members: June 1, 2026 (AoE)

Paper submission deadline: June 10, 2026 (AoE)

Author notification: June 30, 2026 (AoE)

Camera-ready final papers: July 30, 2026 (AoE)

Submissions

• Papers should be submitted electronically on the ICPP submission system:

https://ssl.linklings.net/conferences/icpp/

• Paper submission must be in ACM format:

https://www.acm.org/publications/proceedings-template

• DC4AI will accept full papers (up to 10 pages, including references) and short papers (up to 6 pages, including references and appendix).

• Submitted papers will be evaluated by at least 3 reviewers based on technical merit.

• DC4AI encourages authors to include an artifact description and evaluation with their submissions.

• Accepted papers that are presented at the workshop will be published in the ACM Digital Library.

Committee Members

Workshop Organizers

Xiaoyi Lu, University of Florida, USA (xiaoyilu@ufl.edu)

Xiaodong Yu, Stevens Institute of Technology, USA (xyu38@stevens.edu)

Zhaorui Zhang, Hong Kong Polytechnic University, Hong Kong (zhaorui.zhang@polyu.edu.hk)

Invited Speaker

Sheng Di, Argonne National Laboratory, USA (sdi1@anl.gov)

Tentative Agenda

September 28 - October 1, 2026 (Exact date TBD)
Time: 9:00am - 12:35pm; Location: Singapore
In conjunction with ICPP 2026

9:00 - 9:05     Welcome & Introduction
9:05 - 10:00    Invited Talk: Dr. Sheng Di, Argonne National Laboratory
10:00 - 12:30   Paper Presentations
12:30 - 12:35   Closing Remarks

This half-day workshop will feature sessions for peer-reviewed paper presentations and an invited talk on data analysis and reduction for emerging AI and big data applications. Each submission will be reviewed by at least three reviewers. The workshop invites researchers in computer science and applied mathematics to present state-of-the-art data compression techniques, and domain scientists to discuss application challenges in current practice, including emerging LLM agent systems and quantum computing.

The workshop addresses three key questions: where compression should be applied to maximize performance gains; which techniques — including scalable, lossless, lossy, and error-bounded lossy compression — are most effective; and how these methods can be integrated into AI and big-data systems to reduce overhead and improve end-to-end efficiency.

Prior Workshops

Related workshops organized by committee members

Data Analysis and Reduction for Big Scientific Data (DRBSD)

Held in conjunction with the SC Conference Series

DRBSD-11    DRBSD-10    DRBSD-9    DRBSD-8

International Workshop on Big Data Reduction (IWBDR)

Held in conjunction with the IEEE BigData Conference

IWBDR 2020    IWBDR 2021    IWBDR 2022    IWBDR 2023    IWBDR 2024    IWBDR 2025

Workshop on Fault Tolerant Systems (FTS)

2nd Workshop on Fault Tolerant Systems (FTS16), held in conjunction with IEEE CLUSTER 2016, Taipei, Sept. 16, 2016