Ember code components represent highly simplified communication patterns that are relevant to DOE application workloads.
Developed by: Sandia National Laboratories
Languages: C and MPI/SHMEM
Spack Package Name:
Multi-node communication patterns underpin the scalability and parallel performance of Department of Energy, and broader HPC, workloads. Modeling these patterns is an important aspect of designing extreme-scale supercomputing systems. To date, many vendors have relied on communication traces, which can be difficult to obtain at scale and require significant storage. For interconnect simulators, reading and replaying traces requires high-performance I/O subsystems, which are often expensive and may be unavailable.

To this end, the Ember suite provides communication patterns in a simplified setting (simplified by the removal of application calculations, control flow, etc.). This enables more efficient traces to be captured, or, in the case of the Structural Simulation Toolkit (SST, http://sst-simulator.org), these patterns can be replicated without tracing at all using the Ember/SST motif library. The intention of Ember is to enable much larger-scale modeling of high-performance interconnects in support of DOE's goal of scalable exascale computing systems.

The motifs contained in the suite are intentionally simplified and, by design, do not capture every permutation of the basic patterns within the DOE workload. Nevertheless, our experience working with leading industry vendors has been that, used collectively, the motifs capture the pertinent aspects of the network interconnect. In most cases, the patterns are parameterized to allow the study of sensitivities to message sizes, message rates, rank placement, etc. The initial implementation of Ember was written within the SST simulation framework to permit evaluation of the communication patterns within the simulator, removing the need for large trace files.
To allow these patterns to be run outside of SST, they are now being extracted and rewritten in MPI or SHMEM so they can be used with a broader tool portfolio and run directly on supercomputing platforms.