International Conference for High Performance Computing, Networking, Storage, and Analysis (Supercomputing)

SC 2022





How to Submit
Overview
The SC Papers program is the leading venue for presenting high-quality original research, groundbreaking ideas, and compelling insights on future trends in high performance computing, networking, storage, and analysis. Technical papers are peer-reviewed and an Artifact Description is mandatory for all papers submitted to SC.
Areas/Tracks
Submissions will be considered on any topic related to high performance computing within the areas below. Authors must indicate a primary area from the choices on the submissions form and are strongly encouraged to indicate a secondary area.
Small-scale studies – including single-node studies – are welcome as long as the paper clearly conveys the work’s contribution to high performance computing.
Algorithms
The development, evaluation, and optimization of scalable, general-purpose, high performance algorithms.
Topics include:
Algorithms for discrete and combinatorial optimization
Algorithms for hybrid and heterogeneous systems with accelerators
Algorithms for numerical methods and algebraic systems
Data-intensive parallel algorithms
Energy- and power-efficient algorithms
Fault-tolerant algorithms
Graph and network algorithms
Load balancing and scheduling algorithms
Machine learning algorithms
Uncertainty quantification methods
Other high performance computing algorithms
Applications
The development and enhancement of algorithms, parallel implementations, models, software, and problem-solving environments for specific applications that require high performance resources.
Topics include:
Bioinformatics and computational biology
Computational earth and atmospheric sciences
Computational materials science and engineering
Computational astrophysics/astronomy, chemistry, and physics
Computational fluid dynamics and mechanics
Computation- and data-enabled social science
Computational design optimization for aerospace, energy, manufacturing, and industrial applications
Computational medicine and bioengineering
Improved models, algorithms, performance, or scalability of specific applications and their respective software
Use of uncertainty quantification, statistical, and machine-learning techniques to improve a specific HPC application
Other high performance applications
Architecture and Networks
All aspects of high performance hardware, including the optimization and evaluation of processors and networks.
Topics include:
Architectural support for programming languages or software development
Architectures to support extremely heterogeneous composable systems (e.g., chiplets)
Design-space exploration / Performance projection for future systems
Evaluation and measurement on testbed or production hardware systems
Hardware acceleration of containerization and virtualization mechanisms for HPC
Interconnect technologies, topology, switch architecture, optical networks, software-defined networks
I/O architecture/hardware and emerging storage technologies
Memory systems: caches, memory technology, non-volatile memory, memory system architecture (to include address translation for cores and accelerators)
Multi-processor architecture and micro-architecture (e.g., reconfigurable, vector, stream, dataflow, GPUs, and custom/novel architecture)
Network protocols, quality of service, congestion control, collective communication
Power-efficient design and power-management strategies
Resilience, error correction, high availability architectures
Scalable and composable coherence (for cores and accelerators)
Secure architectures, side-channel attacks, and mitigation
Software/hardware co-design, domain specific language support
Clouds and Distributed Computing
Cloud and system software architecture, configuration, optimization, and evaluation; support for parallel programming on large-scale systems; and building blocks for next-generation HPC architectures.
Topics include:
HPC, cloud, and edge computing convergence at infrastructure and software level, including service-oriented architectures and tools
Job/workflow scheduling, load balancing, resource provisioning, energy efficiency, fault tolerance, and reliability
Methods, systems, and architectures for big data and data stream processing in HPC and cloud systems
OS/runtime and system-software enhancements for many-core systems, accelerators, complex memory space/hierarchies, I/O, and network structures
Parallel programming models and tools at the intersection of cloud, edge, and HPC
Self-configuration, management, information services, monitoring, and introspective system software
Security and identity management in HPC and cloud systems
Scalable HPC and machine learning case studies on distributed and/or cloud systems
Virtualization and containerization to support HPC and emerging uses such as machine learning
Data Analytics, Visualization, and Storage
All aspects of data analytics, visualization, storage, and storage I/O related to HPC systems. Submissions on work done at scale are highly favored.
Topics include:
Cloud-based analytics at scale
Databases and scalable structured storage for HPC
Data mining, analysis, and visualization for modeling and simulation
Data analytics and frameworks supporting data analytics
Data reduction/compression on HPC and cloud systems for simulation and experimental data
Ensemble analysis and visualization
I/O performance tuning, benchmarking, and middleware
In situ data processing
Image analysis and computer vision
Next-generation storage systems and media
Parallel file, object, key-value, campaign, and archival systems
Provenance, metadata, and data management
Reliability and fault tolerance in HPC storage
Scalable storage, metadata, namespaces, and data management
Storage tiering, including entirely on-premises internal tiering as well as tiering between on-premises and cloud storage
Storage innovations using machine learning, such as predictive tiering, failure prediction, etc.
Storage networks
Scalable cloud, multi-cloud, and hybrid storage
Storage systems for data-intensive computing
Machine Learning (ML) with HPC
The development and enhancement of algorithms, systems, and software for scalable machine learning utilizing high-performance computing technology.
This area primarily addresses the use of HPC to improve ML, rather than the use of ML to improve technologies covered by other areas. Papers addressing the latter should be submitted to the respective areas.
Topics include:
HPC for ML
Data parallelism and model parallelism
Efficient hardware for machine learning
Hardware-efficient training and inference
Performance modeling of machine learning applications
Scalable optimization methods for machine learning
Scalable hyper-parameter optimization
Scalable neural architecture search
Scalable I/O for machine learning
Systems, compilers, and languages for machine learning at scale
Testing, debugging, and profiling machine learning applications
Visualization for machine learning at scale
Performance Measurement, Modeling, and Tools
Novel methods and tools for measuring, evaluating, and/or analyzing performance for large-scale systems.
Topics include:
Analysis, modeling, or simulation methods for performance
Methodologies, metrics, and formalisms for performance analysis and tools
Novel and broadly applicable performance optimization techniques
Performance studies of HPC hardware and software subsystems such as processor, network, memory, accelerators, and storage
Scalable tools and instrumentation infrastructure for measurement, monitoring, and/or visualization of performance
System-design tradeoffs between performance and other metrics (e.g., performance and resilience, performance and security)
Workload characterization and benchmarking techniques
Post-Moore Computing
Technologies that continue the scaling of supercomputing performance beyond the limits of Moore's law, including system architectures, programming frameworks, system software, and applications.
Moore’s law is a techno-economic model that has enabled the information technology industry to roughly double the performance and functionality of digital electronics every two years at fixed cost, power, and area, driven by advances in silicon lithography that made this exponential miniaturization of electronics possible. However, as transistors reach atomic scale and fabrication costs continue to rise, the classical technological drivers that have underpinned Moore’s law for 50 years are waning, and new approaches must be found to advance supercomputing performance.
Topics include:
Hardware specialization and taming extreme heterogeneity
Beyond von-Neumann computer architectures
Special purpose computing (e.g., Anton or GRAPE)
Quantum computing
Neuromorphic and brain-inspired computing
Probabilistic, stochastic, and approximate computing
Novel post-CMOS device technologies and advanced packaging technologies for heterogeneous integration (evaluated in a supercomputing systems or application context)
Superconducting electronics for supercomputing
Programming models and programming paradigms for post-Moore systems
Tools for modeling, simulating, emulating, or benchmarking post-Moore and post-CMOS devices and systems
Programming Frameworks and System Software
Operating systems, runtime systems, and other software technologies and building blocks that enable management of hardware resources and support parallel programming for large-scale systems.
Topics include:
Compiler analysis and optimization; program transformation, analysis, synthesis, and verification to enhance cross-platform portability, maintainability, result reproducibility, and resilience (e.g., combined static and dynamic analysis methods, testing, formal methods)
Parallel programming languages, libraries, models, notations, application frameworks, and runtime systems
System software, programming language, and compilation techniques for reducing energy and data movement (e.g., precision allocation, use of approximations, tiling)
Solutions for parallel-programming challenges (e.g., support for global address spaces, interoperability, memory consistency, determinism, reproducibility, race detection, work stealing, or load balancing)
Tools and frameworks for parallel program development (e.g., debuggers and integrated development environments)
Approaches for enabling adaptive and introspective system software
OS and runtime system enhancements for attached and integrated accelerators
Interactions among the OS, runtime, compiler, middleware, and tools
Parallel/networked file system integration with the OS and runtime
Resource management, job scheduling, system interoperations and energy-aware techniques for large-scale systems
Runtime and OS management of complex memory hierarchies
State of the Practice
All aspects of the pragmatic practices of HPC, including operational IT infrastructure, services, facilities, large-scale application executions, and benchmarks.
Papers are expected to capture experiences and ongoing practice relating to modern computing centers or HPC-related software. Papers do not need to cover novel research or developments, but they are expected to offer novel insights and lessons for HPC architects, developers, administrators, or users.
Topics include:
Bridging of cloud data centers and supercomputing centers
Energy and power efficiency of HPC and data centers
Comparative system benchmarking over a wide spectrum of workloads
Containers at scale: performance and overhead
Deployment experiences of large-scale hardware and software infrastructures and facilities
Facilitation of “big data” associated with supercomputing
Infrastructural policy issues, especially international experiences
Long-term infrastructure management experiences
Pragmatic resource management strategies and experiences
Monitoring and operational data analytics
Procurement, technology investment and acquisition best practices
Quantitative results of education, training, and dissemination activities
Software engineering best practices for HPC
User support experiences with large-scale and novel machines
Reproducibility of data
Preparing Your Submission
A paper submission has three components: the paper itself, an Artifact Description Appendix (AD), and an Artifact Evaluation Appendix (AE). The Artifact Description Appendix, or explanation of why there is no artifact description, is mandatory. The Artifact Evaluation Appendix is optional.
Eligibility
Papers that have not previously been published in peer-reviewed venues are eligible for submission to SC. For example, papers pre-posted to arXiv, institutional repositories, or personal websites (but not published in any peer-reviewed venue) remain eligible for SC submission.
Papers that were published in a workshop are eligible if they have been substantially enhanced (i.e., 30% new material).
Paper Format
Submissions are limited to 10 pages (U.S. letter – 8.5″x11″), excluding the bibliography, using the IEEE proceedings template.
AD and AE appendices are automatically generated and do not count against the 10 pages.
Authors of accepted papers may provide supplemental material with their final version of the paper (e.g., additional proofs, videos, or images).
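For orientation only, the following is a minimal sketch of a paper skeleton using the IEEEtran conference class (the LaTeX class commonly distributed as the IEEE proceedings template). It is illustrative, not prescriptive: the template linked from the SC submission instructions remains the authoritative starting point, and the file and author names below are placeholders.

% Minimal sketch of a paper skeleton, assuming the IEEEtran conference class.
\documentclass[conference]{IEEEtran}

\title{Paper Title}
% Author information is omitted or anonymized for double-blind review.
\author{\IEEEauthorblockN{Anonymous Author(s)}}

\begin{document}
\maketitle

\begin{abstract}
Abstract text.
\end{abstract}

\section{Introduction}
Body text counts toward the 10-page limit; the bibliography does not.

\bibliographystyle{IEEEtran}
\bibliography{references}  % references.bib is a placeholder file name

\end{document}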
Where to Submit
Papers are submitted via the SC submissions website.
Reproducibility Initiative
We believe that reproducible science is essential, and SC continues to innovate in this area. The AD/AE Appendices are integrated into the review process and are considered at every stage of paper review. AD/AE Appendices will continue to be auto-generated from author responses to a standard form that is embedded in the SC online submission system. While the Artifact Description Appendix, or an explanation of why there is no Artifact Description Appendix, is mandatory, the Artifact Evaluation Appendix continues to be optional. Learn more about the Reproducibility Initiative.
Review Criteria
Papers are peer-reviewed by a committee of experts. Each paper will have three to four reviews. The peer review process is double-blind for the paper and double-open for the Appendices. Paper reviewers do not have access to the names of authors. Appendices reviewers and authors will know each other’s names. While Papers Committee members are named on the SC22 Planning Committee page, the names of the individuals reviewing each paper are not made available to the paper authors. Learn more about the SC double-blind review policy, and see examples in the Papers FAQ.
Review, Response, Revision
From an author’s perspective, the following are the key steps:
Authors submit a title, abstract, and other metadata.
Authors submit their full paper.
Papers that do not respect the submission guidelines will be rejected immediately without review; examples include papers that violate the double-blind submission policy or exceed the page limit.
After submission of their paper, authors have two weeks to complete an AD/AE form describing their computational artifacts (or lack of computational artifacts) and, optionally, text discussing how they evaluated their computational results. Failure to submit the form will result in rejection without review.
Authors of papers that reach the second review stage have an opportunity to revise their paper and prepare an accompanying response to the reviewers.
Author revisions and accompanying response will be available to the reviewers at least a week before the Papers Committee meeting.
Authors are notified of their paper’s status: Accept, Reject, or Major Revisions Required.
In the case of Major Revisions Required, authors prepare a major revision for a third stage review.
After the third stage review, the paper will be either accepted or rejected.
Authors of accepted papers prepare the final version of their paper.
Conflict of Interest
Please review the SC Conference Conflict of Interest guidelines before submitting your paper.
Plagiarism
Please see the IEEE guidelines on identifying plagiarism. Authors should submit new, original work that represents a significant advance from even their own prior publications.
Upon Acceptance
Registration
If your paper is accepted, at least one author must register for the Technical Program in order to attend the SC Conference and present the paper.
Finalizing Accepted Papers
Upon acceptance, all Papers (including those that go through major revisions) will be listed in the online SC Schedule.
Proceedings
Papers are archived in the ACM Digital Library and IEEE Xplore; members of SIGHPC or subscribers to the archives may access the full papers without charge. The proceedings contain the full text of all papers presented at the SC Conference, along with their Artifact Description Appendices.
On-Site
Schedule and Location
Paper presentations will be held Tuesday–Thursday, November 15–17, 2022. Paper sessions are 30 minutes. Day, time, and location for each paper session will be published in the online SC Schedule by September.
Infrastructure
Papers are assigned either a classroom or a theater room equipped with standard AV facilities:
Projector
Microphone and podium
Wireless lapel microphone or wireless handheld microphone
Projection screen
Awards
Best Paper (BP), Best Student Paper (BSP), and Best Reproducibility Advancement (BRA) nominations are made during the review process and are highlighted in the online SC Schedule. BP, BSP, and BRA winners are selected by a committee that attends the corresponding paper presentations, and winners are announced at the Thursday Awards ceremony.