News
May 9, 2024: Announcing our Best Paper Winner: Clustering and Allocation of Spiking Neural Networks on Crossbar-Based Neuromorphic Architecture - by Ilknur Mustafazade, Nagarajan Kandasamy, Anup Das.
May 9, 2024: Announcing our Best Poster Winner: Hardware Assist for Linux IPC on an FPGA Platform - by Lars Nolte, Tim Twardzik, Camille Jalier, Jiyuan Shi, Thomas Wild, Andreas Herkersdorf.
May 3, 2024: Transfer to/from conference venue: If you want to use a transfer service to/from the hotel, you can book it with the Transfer Service Form (in PDF and DOC). Alternatively, transfers can be booked by emailing info@busischia.it with the subject "CF'24 Transfer" (please include your arrival time, train/flight number, a telephone number, and your hotel destination in Ischia) or through this Link.
April 18, 2024: Announcing our keynote speakers: Elia Merzari, Ph.D. - Pennsylvania State University and Piero Altoe, Ph.D. - NVIDIA.
April 17, 2024: Technical Program announced. See TECHNICAL PROGRAM for more details.
February 21, 2024: Call for Posters Submission Deadline Extended to February 25th, 2024. See Call for Posters page for more details.
February 16, 2024: Compiler Frontiers Workshop Submission Deadline Extended to March 1st, 2024. See workshops (CFW2024) for more details.
February 16, 2024: Booking for accommodation is available. See Venue/Accommodation for more details.
February 16, 2024: Registration portal is available now. See Registration for more details.
February 16, 2024: Please send us your request if you need an Invitation Letter for Visa. See Registration for more details.
February 12, 2024: Author Notification Deadline Extended: February 15th, 2024.
February 7, 2024: Deadline Extended for Collaborative Projects (call for abstracts): February 12th, 2024. See special sessions for more details.
February 1, 2024: Special session announced: Computer Architectures in Space. See SS (CompSpace) for more details.
January 29, 2024: Workshop announced: Malicious Software and Hardware in Internet of Things. See workshops (MALIOT24) for more details.
January 15, 2024: Call-for-posters announced. Submission Deadline: February 19th, 2024. See call-for-posters page for more details.
January 10, 2024: Workshops announced: CFW2024 and OSHW24. See workshops page for more details.
January 8, 2024: Call for Abstracts (SPECIAL SESSIONS) published online.
January 5, 2024: Paper Submission Deadline Extended to January 11th, 2024 (AoE).
October 5, 2023: Call for Papers published online.
Elia Merzari, Ph.D. - Pennsylvania State University
Bio
Elia Merzari is a professor in the Ken and Mary Alice Lindquist Department of Nuclear Engineering at Pennsylvania State University, with appointments in the Department of Mechanical Engineering and the Institute for Computational and Data Sciences. He served in various roles at Argonne National Laboratory between 2009 and 2019 and has been a member of the Penn State faculty since 2019. His expertise covers modeling and simulation of advanced reactors, including safety analysis for a range of reactor types. He has received several awards for this work in the area of high-performance computing (HPC), including the American Nuclear Society (ANS) Landis Young Member Engineering Achievement Award, the American Society of Mechanical Engineers (ASME) George Westinghouse Silver Medal, and the ANS Bal-Raj Sehgal Memorial Award. He is a fellow of ASME and ANS and was a finalist for the 2023 Gordon Bell Prize.
Title: First Exascale Flow Simulations of Fission and Fusion Energy Systems
Abstract
Advanced fission and fusion energy hold promise as reliable,
carbon-free energy sources capable of meeting the United States'
commitments to addressing climate change. A wave of investment in
fission and fusion power within the United States and worldwide
indicates an important maturation of academic research projects into the
commercial space. Nonetheless, the design, certification, and licensing
of novel reactor concepts pose formidable hurdles to successfully
deploying new technologies. Because of the high cost of integral-effect
nuclear experiments, high-fidelity numerical simulation is poised to
play a crucial role in these efforts.
This talk explores recent pioneering large-scale, high-fidelity
fluid-flow simulations of fission and fusion energy systems. First-of-a-kind,
full-core simulations of fission reactors have been conducted on
Frontier. Simulations of unprecedented scale have also been conducted
for fusion energy systems. In particular, we model high Reynolds number
flow with heat transfer in the CHIMERA facility designed to study fusion
breeding blankets. Simulations have been performed on Frontier, the
world's fastest supercomputer, with up to 9000 nodes.
The simulations performed are significantly larger than prior work in
our field. We emphasize that only exascale resources make these
simulations possible. We employ NekRS, a GPU-oriented version of the
Nek5000 code, which is a highly scalable open-source spectral element
code for Computational Fluid Dynamics (CFD) simulation. Far from being a
purely academic pursuit, the capabilities provided by NekRS enable
scientific discovery for a range of applications crucial to fission and
fusion energy deployment. We spend the latter part of the talk on this
aspect, emphasizing the importance of bridging the gap between
supercomputing and scientific and engineering practice.
Piero Altoe, Ph.D. - NVIDIA
Bio
After receiving his Ph.D. in Computational Chemistry from the University of
Bologna in 2007, Piero has concentrated on the HPC aspects of
computational sciences. He has worked in various roles, ranging from
designing new hardware solutions to improving application performance.
Piero joined NVIDIA as HPC IBD SEMEA seven years ago and later moved
to a DevRel position within the Energy Team. His current role focuses
on building stronger relationships with Life & Material Sciences
application ISV teams, helping them adopt new software tools, and
supporting the community's use of GPU resources.
Title: Processing units in the age of AI
Abstract
New workloads are transforming the way computing architectures are
designed. AI is the main force behind this change, demanding more
performance from general-purpose silicon. NVIDIA is adding new
specialized cores to its GPUs along with more traditional floating-point
units in every new generation. Here I'll give a brief history of the
recent GPU generations, the software stack that supports them, and the
solutions for the whole datacenter. I'll also focus on the Material &
Life Science use case, where the combination of HPC and AI is enabling
new capabilities in simulation and updating old software packages.
Best Paper Award: Clustering and Allocation of Spiking Neural Networks on Crossbar-Based Neuromorphic Architecture - by Ilknur Mustafazade, Nagarajan Kandasamy, Anup Das.
Best Poster Award: Hardware Assist for Linux IPC on an FPGA Platform - by Lars Nolte, Tim Twardzik, Camille Jalier, Jiyuan Shi, Thomas Wild, Andreas Herkersdorf.
The 21st ACM International Conference on Computing Frontiers (CF'24) will take place May 7th - 9th, 2024 in Ischia, Naples, Italy. Participation is in-person only.
Computing Frontiers (CF) is an eclectic, interdisciplinary, collaborative community of researchers investigating emerging technologies in the broad field of computing: our common goal is to drive the scientific breakthroughs that support society.
CF's broad scope is driven by recent technological advances in wide-ranging fields impacting computing, such as novel computing models and paradigms, advancements in hardware, network and systems architecture, cloud computing, novel device physics and materials, new application domains of artificial intelligence, big data analytics, wearables, and IoT. The boundaries between the state-of-the-art and revolutionary innovation constitute the advancing frontiers of science, engineering, and information technology — and are the CF community’s focus. CF provides a venue to share, discuss, and advance broad, forward-thinking, early research on the future of computing and welcomes work on a wide spectrum of computer systems, from embedded and hand-held/wearable devices to supercomputers and data centers.
We seek original research contributions at the frontiers of a wide range of topics, including novel computational models and algorithms, new application paradigms, computer architecture (from embedded to HPC systems), computing hardware, memory technologies, networks, storage solutions, compilers, and environments.
- Innovative Computing Approaches, Architectures, Accelerators, Algorithms, and Models
  - Post-exascale computing approaches, designs, and systems
  - Novel / emerging processor architectures, memory systems, and communication networks
  - Benchmarks, methods, and performance metrics to evaluate innovative computing approaches
  - Dataflow architectures, near-data, and in-memory processing
  - Quantum computing systems, including algorithms and applications for near-term quantum devices, programming models and compilers, and error correction
  - Neuromorphic, biologically-inspired computing, and hyperdimensional computing
- Technological Scaling Limits and Beyond
  - Limits: defect- and variability-tolerant designs, graphene and other novel materials, CMOS alternatives, superconducting logic, nanoscale design, dark silicon
  - Extending past Moore's law: 3D-stacking, heterogeneous architectures and accelerators, chiplet packaging technology and its application, distributed and federated computing and their challenges
- Artificial Intelligence
  - Large language models and generative approaches
  - Deep learning co-processors, including architectures, efficient algorithms, chip design, hardware-software codesign, frameworks, and programming models
  - Edge deep learning for IoT
  - Distributed AI computing for cloud data servers
- Fault Tolerance and Resilience
  - Solutions for ultra-large and safety-critical systems (e.g., infrastructure, airlines)
  - Hardware and software approaches in adverse environments such as space
  - Design for reliability
  - Robust embedded software architecture
  - Dependable computing architecture
  - Dependable system design
  - Design for single event effect hardening
  - Modeling, analysis, and mitigation of radiation effects
- Embedded, IoT, and Cyber-Physical Systems
  - Ultra-low power designs, energy scavenging
  - Physical security, attack detection and prevention
  - Reactive, real-time, scalable, reconfigurable, and self-aware systems
  - Sensor networks, IoT, and architectural innovation for wearable computing
- Large-Scale System Design and Networking
  - Large-scale homogeneous/heterogeneous architectures and networking
  - System-balance and CPU-offloading
  - Power- and energy-management for clouds, data centers, and exascale systems
  - Big Data analytics and exascale data management
- System Software, Compiler Technologies, and Programming Languages
  - Technologies that push the limits of operating systems, virtualization, and container technologies
  - Large-scale frameworks for distributed computing and communication
  - Resource and job management, scheduling, and workflow systems for managing large-scale heterogeneous systems
  - Compiler technologies: hardware/software integrated solutions, high-level synthesis, compilers for heterogeneous architectures
  - Tools for analyzing and managing performance at large scale
  - Novel programming approaches
- Security
  - Methods, system support, and hardware for protecting against malicious code
  - Real-time implementations of security algorithms and protocols
  - Quantum and post-quantum cryptography
- Computers and Society
  - Artificial Intelligence (AI) ethics and AI environmental impact
  - Education, health, cost/energy-efficient design, smart cities, emerging markets, and interdisciplinary applications
We also strongly encourage submissions in emerging fields that may not fit into traditional categories — if in doubt, please contact the PC co-chairs by email.
We encourage the submission of both full and short papers containing high-quality research describing original and unpublished work. Papers must be submitted through:
https://cf24.hotcrp.com/
Short papers may be position papers or may describe preliminary or highly speculative work. Full papers are limited to eight (8) double-column pages (excluding references) and short papers to four (4) double-column pages (including references), both in ACM conference format. Authors may buy up to two (2) extra pages for accepted full papers. Page limits include figures, tables, and appendices; for full papers they exclude references. Because the review process is double-blind, all identifying information must be removed from submissions (i.e., cite your own work in the third person). Papers not conforming to the above submission policies on formatting, page limits, and the removal of identifying information will be automatically rejected. Authors are strongly advised to enter the final list of authors in the submission system at submission time, as changes may not be feasible at later stages.
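For authors new to the ACM conference format, the outline below is a minimal sketch of how a double-blind submission skeleton might look, assuming the standard acmart LaTeX class; the class options and placeholder metadata are illustrative only and are not an official CF'24 template. The review option adds line numbers for reviewers, and the anonymous option suppresses author information in the compiled PDF.

% Illustrative double-blind skeleton (not an official CF'24 template); assumes the acmart class.
\documentclass[sigconf,review,anonymous]{acmart}

\begin{document}

\title{Your Paper Title}

% Author details are entered as usual; the 'anonymous' option hides them in the output.
\author{First Author}
\affiliation{\institution{Your Institution}\country{Your Country}}

% In acmart, the abstract is placed before \maketitle.
\begin{abstract}
One-paragraph abstract.
\end{abstract}

\maketitle

\section{Introduction}
Cite your own prior work in the third person.

% References do not count toward the page limit for full papers.
\bibliographystyle{ACM-Reference-Format}
% \bibliography{references}

\end{document}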
By submitting your article to an ACM Publication, you are hereby acknowledging that you and your co-authors are subject to all ACM Publications Policies, including ACM's new Publications Policy on Research Involving Human Participants and Subjects. Alleged violations of this policy or any ACM Publications Policy will be investigated by ACM and may result in a full retraction of your paper, in addition to other potential penalties, as per ACM Publications Policy.
No-show policy: All accepted papers are expected to be presented in person at the conference, and each accepted paper requires at least one full registration by one of its authors. If circumstances prevent the authors from presenting their paper at the conference, they must contact the PC co-chairs with a proposal for a replacement presenter. A no-show will result in exclusion from the ACM Digital Library proceedings.
Paper Submission (Deadline Extended): January 11th, 2024 (AoE)
Author Notification: (Extended) February 15th, 2024
Camera Ready: March 25th, 2024 (AoE)
The CF24 Organizing Committee strongly encourages authors, on a voluntary basis, to submit Artifact Evaluation (AE) documentation in support of their scientific results. The Artifact Evaluation is run by a separate committee after the acceptance of the paper and does not affect the paper evaluation itself.
Authors may submit the artifact during the submission period or after notification. So that the necessary computing resources can be arranged, authors willing to participate in the evaluation are invited to flag this option during paper registration. Authors are encouraged, but not required, to include the AE appendix in the paper at the time of submission. Note that the AE appendix does not count toward the page limit.
CF24 adopts the ACM Artifact Review and Badging (Version 1.1 - August 24, 2020). By "artifact", we mean a digital object that was either created by the authors to be used as part of the study or generated by the experiment itself. Typical artifacts may include system descriptions or scripts to install the environment or reproduce specific experiments. Authors are invited to include a one-page appendix to the main paper (after the references). The appendix does not count toward the page limit.
To prepare the Appendix and avoid common mistakes, authors may refer to the following guide:
https://ctuning.org/ae/checklist.html
A LaTeX template can be found at the following link:
https://github.com/ctuning/ck-artifact-evaluation/blob/master/wfe/artifact-evaluation/templates/ae.tex.
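As a rough illustration only, the fragment below sketches one possible structure for the one-page AE appendix, loosely based on the ctuning checklist linked above; the section names are assumptions rather than CF'24 requirements, and the linked template remains the authoritative reference.

% Hypothetical AE appendix outline; this fragment goes inside the main paper, after the references.
\appendix
\section{Artifact Appendix}
\subsection{Abstract}
% Brief summary of the artifact and how it supports the paper's results.
\subsection{Artifact Check-List (Meta-Information)}
% E.g., algorithm, data set, hardware and software dependencies, metrics,
% expected run time, availability (DOI or other unique identifier of a permanent archive).
\subsection{Description}
% How the artifact is delivered and what hardware/software it requires.
\subsection{Installation}
% Step-by-step setup instructions or a script to install the environment.
\subsection{Experiment Workflow}
% Commands or scripts that reproduce the experiments in the paper.
\subsection{Evaluation and Expected Results}
% Which figures/tables are reproduced and what output to expect.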
The Artifact Evaluation Committee will attempt to reproduce the results of the paper by following the instructions included in the appendix and will verify that the ACM rules for the assigned badges are met. For example, for a paper to receive the Artifacts Available badge, the code and data must be stored in a permanent archive with a DOI or another unique identifier.
Authors may be invited by the AE Committee to revise their instructions according to its feedback. At the end of the process, the AE Committee will recommend one or more badges to assign to the paper, from among those supported by the ACM reproducibility policy.