TASK 1: Extend the Uintah scientific problem-solving environment to simulate large multi-scale, multi-physics science-based energy systems.

Uintah is a general-purpose, fluid-structure interaction code that has been used to characterize a wide array of physical systems and processes; examples include stage separation in rockets, the biomechanics of microvessels, the effects of wounding on heart tissue, the properties of foam under large deformation, and the evolution of transportation fuel fires. Uintah was initially targeted at the heating of an explosive device placed in a large hydrocarbon pool fire and the subsequent deflagration, explosion, and blast wave. The Uintah core is designed to be general and extensible, and so is appropriate for a wide range of PDE algorithms applied to a broad class of problems.

Uintah contains three main simulation algorithms:

1) the Arches incompressible fire simulation code,

2) the ICE (in)compressible solver (both explicit and implicit versions), and

3) the particle-based Material Point Method (MPM) for structural modeling.

All of these methods have been extended and improved in specific ways for Uintah. In addition to these primary algorithms, Uintah integrates numerous sub-components including equations of state, constitutive models, reaction models, radiation models, and so forth. The “full physics” approach refers to problems involving strong coupling between the fluid and solid phases, with a full Navier-Stokes representation of fluid-phase materials and the transient, nonlinear response of solid-phase materials, which may include chemical or phase transformation between the solid and fluid phases. The multi-material nature of the Uintah MPM-ICE software also positions it as a particularly suitable framework for simulating flow through porous media.

Many multi-physics simulations, for problems such as CO2 emplacement, require a significant span of space and time scales. Uintah's primary target simulation scenario combines a large-scale fire (length scale of meters, time scale of minutes) with an explosion (length scale of microns, time scale of microseconds). To capture this wide range of time and length scales efficiently and accurately, the Uintah architecture has been designed to support AMR.

In developing tools and environments for advanced simulations, it is necessary to look at petascale, O(10^15) flops, machines and beyond, to proposed architectures and software approaches for exascale computing over the next decade. One important implication is that petascale systems occupying a single cabinet may appear by 2020. Thus present and future software tools must make effective use of existing petascale architectures such as DOE's Jaguar and NSF's Kraken. At present Utah has considerable experience with the Uintah code in making use of large numbers of cores on such machines.

The implication for software is that it is necessary to rethink existing paradigms along the lines suggested in the DARPA Exascale software report. The proposed Silver model for exascale software is a graph-based, asynchronous-task work-queue model that must provide abstractions for a high degree of concurrency, enable latency hiding by overlapping computation and communication, minimize synchronization overhead, support adaptive resource scheduling, and provide a unified approach to heterogeneous processing.

Fortunately, the Uintah software model is exactly of this type, and so offers an opportunity to show how software may evolve toward such architectures. In addition, Uintah scales well, even for complex block-structured adaptive meshing applications, reaching 98K cores on the NSF Kraken machine and thus validating the Silver model in a modest way. Uintah's performance also relies on novel load-balancing algorithms and a sophisticated runtime system with dynamic, out-of-order task execution. A major challenge is to extend this dynamic task-based approach to heterogeneous architectures using, for example, GPUs.


Over the past three months, the work on the Uintah software has progressed considerably, focusing on two main capabilities. First, ensuring that Uintah can continue to run increasingly large problems of current and future application types. Second, extending the applicability of Uintah to new problem classes appropriate to energy applications. In the previous report, we mentioned that parts of the code (the data warehouse) were being rewritten to make it possible to run Uintah with less memory per core. The restructuring of the software to run in a mixed threads/MPI model is now complete and has reduced the memory Uintah needs by 90%, effectively increasing the problem size per core by a factor of 10. This work is in the process of being submitted to the 2011 TeraGrid Conference.



This development is also an important step in making Uintah work on GPU architectures, which will be essential for future activities. The scalability experiments for this case have been run on the Jaguar and Kraken computers at Oak Ridge as part of the INCITE allocation. These machines have 224K and 112K cores, respectively.


The second area of work is to continue to prepare Uintah for new applications related to the energy field. A number of these applications involve implicit methods for time-stepping and therefore require the solution of large systems of equations. Although Uintah uses state-of-the-art solvers such as Hypre and PETSc from the DOE labs, and has achieved encouraging weak scaling out to 256K cores on Titan, we continue to identify areas where performance improvements are needed.




Plans for next quarter:


Provenance Enabling Uintah

The provenance effort will focus on using the VisTrails provenance system to provide provenance capabilities within the Uintah system. Our first scoping meetings are taking place this quarter and will result in a timeline and a more specific scope of work.