
Advances in parallel computing for reactor analysis and safety.

Computer simulation is recognized as a critical technology, playing a major role in many areas of national interest, such as increasing the efficiency and safety of nuclear reactors, improving the efficiency of car engines, and predicting weather patterns and global climate changes. In recent years, rapid changes in computing technology have increased the computing power available to computer simulation professionals. The advent of massively parallel computers and distributed computing networks has accelerated this trend, which is sure to continue in the foreseeable future. The development of large-scale computational applications needed for the solution of practical problems is only beginning, relatively slowly, to take advantage of available parallel computing technology.

The reasons for this slow change include the existence of a considerable investment in software developed for single-processor mainframe computers and a still-limited understanding, among application developers, of the potential advantages of parallel computing. Many large-scale computer codes, such as those used in thermal-hydraulic and neutronic reactor analysis, must be reviewed and partially rewritten to take advantage of the new hardware capabilities. In the process, the capabilities of these codes can be significantly extended and their performance improved to the point at which parametric studies and real-time simulation using advanced thermal-hydraulic and neutronic computer models become possible.

Several large-scale computational models developed at Argonne National Laboratory (ANL) for the simulation and analysis of thermal-hydraulic and neutronic events in nuclear reactors and nuclear power plants are described here, as well as the impact of advanced parallel computing technologies on these computational models.

Reactor Analysis and Advanced Computing at Argonne National Laboratory

The early involvement of ANL in the design and safety analysis of advanced metal-cooled reactors has led to the development of large-scale thermal-hydraulic and neutronic computational capabilities. Over decades, guided by the results of many in-reactor and out-of-reactor experiments and using the most up-to-date numerical methods, some of these thermal-hydraulics and neutronics codes have become increasingly sophisticated and have achieved both national and international recognition. Codes such as SAS [5, 12], VIM [2], and COMMIX [4] are used in many research institutions, both in the U.S. and abroad, for the analysis of thermal-hydraulic and neutronic events in nuclear reactors and nuclear power plants of various designs. The current development of these codes is driven by the needs of current programs, such as the Integral Fast Reactor (IFR) [13]. This development is strongly influenced by the current trends in advanced computing, in particular the advent of distributed computing and multiprocessor computers. The improved performance allowed by these computing platforms has stimulated the development of new capabilities in the areas of artificial intelligence and system simulation. High-performance versions of the thermal-hydraulic and neutronic codes, coupled with these new capabilities, allow the real-time simulation and monitoring of integrated nuclear power plant systems of increasing complexity.

Reactor Systems Simulation Using Distributed Parallel Computing

The SAS system of reactor analysis codes developed at ANL has played an important role in the computer simulation and assessment of energetics potential for both operational transients and postulated unprotected accidents in advanced liquid metal reactors. The postulated unprotected accidents are very low-probability events, which can occur only if the simultaneous failure of several reactor systems, including the reactor protective systems, is postulated. Two major code systems have been developed, the SASSYS [5] code system and the SAS4A [12] code system. The SASSYS systems-analysis code is capable of analyzing a wide range of transients, from mild operational transients through more severe transients leading to coolant boiling in the core. In addition to a point-kinetics neutronics treatment, the code provides a detailed thermal-hydraulic analysis of the reactor core, inlet and outlet coolant plena, primary and intermediate heat transport systems, steam generators, and emergency heat removal systems. The code can handle any liquid metal reactor design, loop or pool, with an arbitrary arrangement of components. The main applications of SASSYS have been in the simulation and analysis of operational transients and protected accidents and in the analysis of shutdown heat removal. The SAS4A code system has been designed to simulate the initiating phase of hypothetical severe unprotected accidents, which can lead to core disruption and fuel relocation. During such postulated accident scenarios as the unprotected loss-of-flow and transient-overpower events, a large number of interrelated physical phenomena occur during a relatively short time interval. These phenomena include transient heat transfer and hydrodynamic events; coolant boiling; cladding melting, relocation, and freezing; and fuel melting, disruption, relocation, and freezing. SAS4A includes detailed phenomenological models of the core, such as the LEVITATE [11] model, which can simulate these complex multiphase, multicomponent hydrodynamic events. Due to the neutronic feedback present in a nuclear reactor, these events can significantly influence the reactor power. The SAS4A code provides an integrated and quantitative framework for examining the behavior of various reactor designs during unprotected severe accident conditions. The SAS4A code can be used in conjunction with the SASSYS code, with which it is fully compatible.

Both SASSYS and SAS4A are large, complex Fortran codes that were developed for single-processor mainframe computers. Running time has always been a concern, requiring the use of Cray vector mainframes and imposing limitations on the complexity of the physical models. The advent of high-performance parallel computing systems promises to relieve these problems. Both SASSYS and SAS4A have a structure that contains both coarse-grained and fine-grained parallelism and can benefit considerably from the use of parallel, multiprocessor computing. In order to describe the behavior of the different core regions, the reactor core is subdivided into regions or "channels," each containing a number of fuel assemblies. All the fuel assemblies in a channel are assumed to behave in an identical manner. The reactor coolant enters each fuel assembly in the lower plenum, flows upward through the fuel assembly, and then exits the assembly in the upper plenum. In the metal-cooled reactors analyzed there is no cross-flow between the fuel assemblies, and the channels interact only through the upper and lower plena and through the neutronics feedback. Thus the channels provide a high-level parallel structure that is a natural target for parallelization.
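
The channel structure lends itself to a master/worker decomposition in which workers advance channels independently over a time step and only the plena conditions and the net reactivity feedback are exchanged. The sketch below shows that structure only; the function names and the placeholder physics are hypothetical stand-ins, not the SAS4A/SASSYS routines.

```python
# Schematic of channel-level parallelism: channels advance independently
# within a step and couple only through the plena and neutronics feedback.
from concurrent.futures import ProcessPoolExecutor

def advance_channel(channel_state, plena_state, power, dt):
    """Advance one core channel over a time step. No coolant is exchanged
    between channels, so this depends only on the channel's own state, the
    shared plena conditions, and the reactor power (placeholder physics)."""
    new_state = dict(channel_state)   # detailed thermal-hydraulics would go here
    reactivity_feedback = 0.0         # placeholder channel feedback
    return new_state, reactivity_feedback

def advance_step(channels, plena_state, power, dt, pool):
    """Advance all channels in parallel, then reduce the couplings."""
    n = len(channels)
    results = list(pool.map(advance_channel, channels,
                            [plena_state] * n, [power] * n, [dt] * n))
    new_channels = [state for state, _ in results]
    # The serial part of the step is just this reduction over the channels.
    total_feedback = sum(fb for _, fb in results)
    return new_channels, total_feedback

if __name__ == "__main__":
    with ProcessPoolExecutor(max_workers=4) as pool:
        channels = [{"id": i} for i in range(19)]
        channels, feedback = advance_step(channels, {"T_inlet": 650.0},
                                          power=1.0, dt=0.01, pool=pool)
        print(len(channels), feedback)
```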

In order to take advantage of the opportunities for parallelization of large systems codes such as SASSYS and SAS4A, efforts have been undertaken to develop a high-performance network computing system that can serve as a platform for research into simulation and control of reactor systems, with emphasis on the IFR. A Reactor Simulation and Control Laboratory (RSCL) has been established at ANL, comprising a network of Sun workstations dedicated to four areas of reactor analysis: development of faster-than-real-time reactor simulation for design and safety analysis; development and testing of control algorithms; validation of control schemes prepared by universities and other laboratories for testing with the IFR; and development of advanced pattern recognition techniques to detect the onset of degradation of sensors or reactor components.

Central to the goal of designing and evaluating control systems for the IFR and other reactors is the ability to simulate plant behavior with realistic real-time models in an interactive computer laboratory environment. Toward this goal we have installed a newly available, innovative message-passing software package known as P4 [9]. P4 allows large Fortran codes to be parallelized across a network of similar or heterogeneous workstations, allowing fuller utilization of existing desktop workstations and achieving mainframe performance at a fraction of the cost. The SASSYS code is used as the simulation engine for the IFR simulator. In addition to the reactor core, SASSYS simulates the primary and secondary coolant loops and can model either once-through or recirculating steam generators plus components in the balance of the plant. A control system model based on classical control theory is also available. The code is not limited to the IFR design; rather, it is applicable to a wide range of reactor designs and types of transients. Although SASSYS is the engine for the full-scope plant simulator, the code used for experimentation with distributed parallel computing was the LSAS (or Little SAS) code, a scaled-down version of SASSYS. The LSAS code includes modeling of the reactor core, reactor kinetics, and the heat transport in the primary and intermediate sodium loops. Unlike SASSYS, LSAS does not model a control system or the balance of plant, nor does it include a detailed steam generator waterside model.

Many aspects of the thermal-hydraulic calculations in LSAS and SASSYS are inherently parallel, and the computationally intensive portions of these calculations can be effectively implemented on a loosely coupled distributed-memory network computing system using the P4 message-passing paradigm. For our initial parallelization experimentation, a 19-channel LSAS model of the Experimental Breeder Reactor II (EBR-II) was employed. Domain decomposition was effected by simply dividing core radial channel calculations across the available processors with load-balancing for overhead reduction.
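
One simple way to divide the channel calculations across processors, assuming rough per-channel cost estimates are available, is a greedy load-balancing assignment such as the following sketch; the actual balancing scheme used in the experiments is not detailed in the text.

```python
def assign_channels(costs, num_workers):
    """Greedy load balancing: give each channel to the currently
    least-loaded worker (costs are rough per-channel work estimates)."""
    loads = [0.0] * num_workers
    assignment = [[] for _ in range(num_workers)]
    # Assigning the largest jobs first gives a tighter balance.
    for ch in sorted(range(len(costs)), key=lambda i: -costs[i]):
        w = loads.index(min(loads))
        assignment[w].append(ch)
        loads[w] += costs[ch]
    return assignment

# e.g., 19 channels of equal estimated cost on 5 workers -> groups of 4,4,4,4,3
print(assign_channels([1.0] * 19, 5))
```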

Initial parallelization experimentation focused on making a detailed evaluation of the separate components of both the computation and communication costs. It was quickly learned that the dominant component of the communication overhead was the store-and-forward delay associated with the packetization of messages. Use of standard Unix virtual-timer mechanisms enabled us to isolate and quantitatively evaluate the SEND-RECEIVE time as a function of message size, from 1 byte to 1MB. It was learned that with a standard TCP/IP-based Ethernet, this communication component is a constant 200 milliseconds (ms) for message sizes between 1 and 1,536 bytes. For this reason, our early experimentation with parallelization of LSAS produced the frustrating result that the parallelized versions ran slower than the uniprocessor version and, moreover, that the slowdown increased with the number of processors used. At 1,537 bytes, the SEND-RECEIVE time drops to only 8.09 ms, effectively reducing the communication overhead by a factor of 25. As the message size increases from 1.5KB to 4KB, the message-passing time increases linearly from 8.09 to 11.9 ms. At 4.001KB the time jumps back up to 200 ms, where it remains constant up to 5.5KB.
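
The measurement approach can be sketched as follows. This probe times round trips over a local loopback socket; on a modern system it will not reproduce the 200-ms plateau reported above, and it is only meant to illustrate how the SEND-RECEIVE time can be measured as a function of message size.

```python
# Rough analogue of the timing study: measure round-trip time vs. message size.
import socket, threading, time

def echo_server(port):
    srv = socket.socket()
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("127.0.0.1", port)); srv.listen(1)
    conn, _ = srv.accept()
    while True:
        data = conn.recv(65536)
        if not data:
            break
        conn.sendall(data)           # echo everything back to the sender
    conn.close(); srv.close()

def round_trip_ms(sock, size, trials=20):
    msg = b"x" * size
    t0 = time.perf_counter()
    for _ in range(trials):
        sock.sendall(msg)
        got = 0
        while got < size:            # read until the full echo arrives
            got += len(sock.recv(65536))
    return (time.perf_counter() - t0) / trials * 1000.0

port = 50007
threading.Thread(target=echo_server, args=(port,), daemon=True).start()
time.sleep(0.2)                      # give the server time to start listening
client = socket.socket(); client.connect(("127.0.0.1", port))
for size in (1, 512, 1536, 1537, 4096, 4097):
    print(f"{size:6d} bytes: {round_trip_ms(client, size):8.3f} ms")
client.close()
```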

Although the foregoing behavior was unexpected, it is completely explained by the store-and-forward mechanism within TCP/IP. To enhance overall network throughput, TCP/IP holds small messages for a maximum of 200 ms, attempting to "pack" multiple short messages into a single transmission. Messages greater than 1.5KB are transmitted immediately. When the message size exceeds 4KB, however, the message is broken into two segments, and the shorter segment suffers the 200-ms store-and-forward delay. For the LSAS parallelization all message sizes were less than 1.5KB. A brief script was written and added to the MASTER and SLAVE P4 proc groups to simply pad all messages with zeros, making them 1,537 bytes. This reduced the message-passing overhead to only 8.1 ms, which is quite satisfactory for the coarse-grained parallelization objectives of the research reported here.
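
The padding workaround amounts to the following sketch (a hypothetical helper, not part of the P4 interface). On present-day TCP stacks the analogous coalescing delay comes from Nagle's algorithm and is more commonly avoided by setting the TCP_NODELAY socket option rather than by padding.

```python
MIN_IMMEDIATE_SIZE = 1537   # threshold observed on the network described above

def pad_message(payload: bytes) -> bytes:
    """Pad short messages with zeros so the transport sends them at once
    instead of holding them for coalescing. Hypothetical helper, not P4 API;
    the receiver must know or encode the true payload length."""
    if len(payload) < MIN_IMMEDIATE_SIZE:
        payload += b"\x00" * (MIN_IMMEDIATE_SIZE - len(payload))
    return payload

print(len(pad_message(b"channel 7 outlet T = 763.2 K")))   # -> 1537
```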

The LSAS code was successfully parallelized, installed on the network computing system, and run in faster than real time. The use of synchronized interrupts and memory-swapping techniques enabled combining a dynamic, color-graphic user interface (running on a standalone Sun SPARCstation) with the parallelized version of LSAS, so that the integrated system responds dynamically to real-time operator inputs as an operating reactor would. One of the 23 color-graphic screens that can be brought to the foreground of the operator interface is illustrated in Figure 1. The overall control structure for LSAS as implemented on a five-processor configuration is illustrated schematically in Figure 2. This simulator technology will be a powerful design and safety analysis tool for EBR-II/IFR in areas in which human interactions are critical to safety and operability.

Distributed Parallel Computing for Expert Systems

Research and development in the area of expert systems (ES) used in the operation of nuclear power plants is an important component of the computer simulation and analysis conducted at ANL. Expert systems can play an important role in improving the efficiency and safety of nuclear power plants by advising the operators about actions required to correct inefficiencies and about developing abnormal situations. The ES must monitor a large number of parameters and provide timely advice to the control room operator, whose response time must be minimized. The structure of ES using artificial intelligence and neural networks makes these applications natural candidates for the use of parallel distributed computing.

An ES has been developed to be used as an operator decision aid during exposed-fuel operation in LMRs. This ES is embodied in a device we call a Failed Fuel Surveillance and Diagnosis (FFSD) apparatus [7]. This device monitors, processes, and interprets information from the Delayed Neutron (DN) system and from other key plant variables and displays for the operator the diagnostic information needed to make proper decisions about technical-specification conformance during breached-fuel operation. This operator decision aid enhances plant availability and economics by minimizing unnecessary reactor trips caused by events having no safety significance. By combining and evaluating information from several plant sensors, the FFSD system makes it possible to significantly relax the conservatism in DN shutdown limits in place at operating LMRs without compromising plant safety assurance. At the same time, the FFSD apparatus reduces complexity and mitigates confusion in the reactor control room. It minimizes the possibility of human error or oversight by providing automatic annunciation of discrepant signals or the incipience of initiating faults. More important, even when the FFSD expert system is in its "passive" surveillance mode, an interactive capability is provided for the operator to manually query the status of any component of the system for operability validation. This symbiosis of automatic and manual systems reduces challenges to plant availability, while allowing incorporation of the role of the operator in a way that most effectively augments the achievement of overall plant operability goals. The ES has been validated using archived data from actual breached-fuel tests in EBR-II. A detailed parametric sensitivity analysis has been performed using Monte Carlo simulation to assess propagation-of-uncertainty characteristics for the system's algorithms as a function of both the DN noise level and the levels of various background components that are present in an LMR. Results from this simulation portion of our investigation, analyzed with standard 3D response-surface methodology, show that the output uncertainty associated with the computed diagnostic parameters is +/- 3% during full-power operation with EBR-II's present fission-chamber DN detectors. ES surveillance of diagnostic parameters from the DN monitoring system will enhance plant safety by providing the operator with rapid identification of off-normal operation, thereby enabling the operator to terminate or avoid any events that might challenge safety or radiological performance guidelines.
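
The propagation-of-uncertainty study can be pictured generically: sample the DN signal and background levels from assumed noise distributions, evaluate the diagnostic algorithm on each sample, and report the spread of the results. The diagnostic function and all numbers below are hypothetical stand-ins, not the FFSD algorithms or EBR-II data.

```python
# Generic Monte Carlo propagation-of-uncertainty sketch (illustrative only).
import random, statistics

def diagnostic(dn_signal, background):
    # Hypothetical diagnostic parameter: net DN signal relative to background.
    return (dn_signal - background) / background

def propagate(dn_mean, dn_noise, bg_mean, bg_noise, samples=50_000):
    values = []
    for _ in range(samples):
        dn = random.gauss(dn_mean, dn_noise)     # sampled DN signal
        bg = random.gauss(bg_mean, bg_noise)     # sampled background component
        values.append(diagnostic(dn, bg))
    return statistics.mean(values), statistics.stdev(values)

mean, sigma = propagate(dn_mean=1200.0, dn_noise=25.0, bg_mean=1000.0, bg_noise=15.0)
print(f"diagnostic = {mean:.4f} +/- {100 * sigma / abs(mean):.1f}%")
```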

The completed ES has been parallelized on a loosely coupled, distributed-memory network computing system that enables the computationally intensive kernel of the ES to run in parallel on a group of Sun workstations. This ES provided an ideal Fortran test code for distributed computing experimentation insofar as it cannot achieve real-time performance on a single workstation, and the algorithmic structure of its computationally intensive kernel--the iterative solution of large systems of coupled differential equations--is readily amenable to standard parallel decomposition techniques. Our goal was accomplished with the use of the portable distributed parallel-computing system P4 [9].

For the present project the P4 package was exploited to implement a distributed computational environment on a network of Sun workstations that is suitable for testing and validation of computationally intensive reactor-surveillance ES software. Timing studies have shown that the FFSD software parallelizes very well. We have successfully parallelized the system on rings of up to 10 Sun SPARC-2 processors connected by standard Ethernet. Speedup factors attained ranged linearly from 99% (of theoretical maximum) for 2 processors to 95% for 10 processors. This approach to distributed-memory parallel processing has proved extremely useful and flexible, and will permit the failed-fuel surveillance ES to operate on low-cost hardware with a higher input sampling rate (1/s) than would have been possible on EBR-II's current data acquisition system computer.
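
A parallel efficiency such as "99% of theoretical maximum" is simply the measured speedup divided by the processor count, as in this small sketch; the timings used here are hypothetical, chosen only to reproduce the quoted efficiencies.

```python
def efficiency(t_serial, t_parallel, processors):
    """Fraction of the ideal (linear) speedup actually achieved."""
    speedup = t_serial / t_parallel
    return speedup / processors

# Hypothetical timings consistent with the efficiencies quoted above.
print(efficiency(t_serial=100.0, t_parallel=50.5, processors=2))    # ~0.99
print(efficiency(t_serial=100.0, t_parallel=10.5, processors=10))   # ~0.95
```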

Neutronics Calculations and Parallel Computing

The reactor systems simulation codes described in the previous section account for the very important neutronic feedback effects by using simplified neutronics models and neutron cross sections appropriate for various reactor types. In order to quantify the effects of the approximations used in these simplified models and to calculate the appropriate neutronic cross sections, rigorous neutronics models have been developed that require very extensive computation.

The most CPU-intensive reactor physics computations are neutron transport calculations, required in situations in which the reactor materials are too heterogeneous and the neutron flux is too anisotropic to permit accurate diffusion theory calculations. The Monte Carlo, nodal transport, or collision probability methods which are applied to these problems often require large amounts of CPU time.

The Monte Carlo code VIM [2] has been used to quantify or understand the effects of the approximations used in other codes. Complex geometries with curved surfaces can be represented very accurately with the combinatorial geometry package. Perhaps more important, finely detailed continuous-energy cross sections have been derived directly from the ENDF/B data and include secondary angular and energy distributions. The spectral influence of neutronically different neighboring materials (e.g., coolant, structural, or absorbing elements) is then accounted for in the flux spectra and spatial distributions that develop naturally during the random walk procedure. This obviates geometric and spectral approximations of uncertain validity. For example, Argonne has applied deterministic transport methods to a number of fast and thermal reactors, including the Experimental Breeder Reactor II (EBR-II). Because EBR-II has a very small core and a core neutron leakage factor of 50%, diffusion calculations underpredict criticality by more than 4%. To make matters worse, interposed between the core and the radial blanket is a steel reflector with highly anisotropic scattering, which introduces a diffusion theory error of about 15% in blanket power. The Monte Carlo VIM code treats the transport effect and anisotropic scattering exactly, but requires about one Sun SPARC-2 CPU-month to accurately estimate reaction rate distributions using straightforward Monte Carlo perturbation calculations. This has motivated the efforts in parallel Monte Carlo described later.

For two decades, VIM has undergone validation against both critical reactors and assemblies and against Argonne's other high-precision computational methods. The benchmark experiments include, for example, a dozen or more plate-lattice criticals covering a wide range of compositions characteristic of liquid metal reactors, light water reactors, heavy water reactors, and gas-cooled reactors. As computing costs decline, VIM has been used increasingly to estimate worths and rates by region or pin, serving as a benchmark for simpler but faster running models. For example, pin powers produced by CASMO [6] were benchmarked against VIM, and advanced BWR pin-bundle designs are being analyzed by utilities using VIM.

To enable the practical use of Monte Carlo estimation of localized quantities in a reactor (e.g., depletion or small worths), work is under way to exploit the statistical independence of individual neutron histories on massively parallel architectures or workstation networks. For code systems like VIM, which have been exhaustively validated over many years, the comprehensive restructuring necessary for implementation on SIMD (and vector) architectures has deterred most Monte Carlo code developers from exploiting these architectures. The arrival of workstation networks and massively parallel MIMD machines with larger distributed memories now permits straightforward parallelization, obviating clever but complicated partitioning by space, event, or energy.

Work on parallel VIM has proceeded in two steps. In an early version, independent multigroup simulations, including input processing, were assigned by the master processor, using the P4 message-passing system, to nodes on a workstation network. Such an approach suffers from inefficiency due to unbalanced computing loads on the worker nodes. Furthermore, this method should not be applied to criticality calculations on massively parallel machines, because the eigenvalue bias becomes important if the generation size on one node becomes small. Recently, VIM has been parallelized for criticality calculations using a finer-grained technique, in which the master assigns tasks (histories) to the slave processors and performs all overhead tasks. Because the source data are integrated for each generation, the generation sizes are kept large, so the eigenvalue bias is reduced to practical insignificance.
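
The finer-grained scheme can be pictured as follows: workers follow batches of histories, and the master pools the resulting fission-source sites after every generation so that the generation size stays large. The physics here is reduced to a toy stub and the interfaces are illustrative, not those of VIM or P4.

```python
# Toy master/worker Monte Carlo criticality loop with per-generation pooling.
import random
from concurrent.futures import ProcessPoolExecutor

def run_histories(source_sites, seed):
    """Follow one batch of neutron histories; return the new fission sites
    and the number of fission neutrons produced (toy physics stub)."""
    rng = random.Random(seed)
    new_sites, produced = [], 0
    for site in source_sites:
        n = rng.choice([0, 1, 1, 2])                 # toy multiplication
        produced += n
        new_sites.extend(site + rng.gauss(0.0, 1.0) for _ in range(n))
    return new_sites, produced

def run_generation(source, workers, pool, generation):
    """Master step: split one large generation into batches, run them in
    parallel, then pool the fission source so the next generation stays large."""
    batches = [source[i::workers] for i in range(workers)]
    seeds = [generation * workers + i for i in range(workers)]
    pooled, produced = [], 0
    for sites, n in pool.map(run_histories, batches, seeds):
        pooled.extend(sites)
        produced += n
    k_estimate = produced / max(len(source), 1)      # crude generation k-estimate
    return pooled, k_estimate

if __name__ == "__main__":
    source = [0.0] * 10_000                          # keep each generation large
    with ProcessPoolExecutor(max_workers=4) as pool:
        for g in range(5):
            source, k = run_generation(source, 4, pool, g)
            print(f"generation {g}: k ~ {k:.3f}, source sites = {len(source)}")
```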

In addition to the Monte Carlo neutronics codes, other large neutronics codes using rigorous deterministic methods have been developed that can benefit considerably from the use of high-performance computing platforms. GTRAN2 [14], for example, is a high-performance, general-geometry collision probability transport code developed for integral neutron transport calculations in thermal reactor lattices where the geometry is irregular. The code performs several tasks: ray tracing, calculation of the collision probabilities, and solution of the equations for the flux. The calculation of chord lengths by ray tracing is performed only once for a specific geometry and reused for each energy group; this calculation has not been parallelized. The collision probability calculations are designed for parallel execution on MIMD shared-memory machines, and the flux solver is vectorized. Figure 3 illustrates a 2D neutron flux distribution in a BWR pin bundle calculated by GTRAN2.
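
The division of labor just described can be sketched schematically: the geometry-only ray tracing is done once and cached, while the per-group collision-probability work, which depends only on the cached chord data and that group's cross sections, is farmed out in parallel. The formulas below are placeholders, and the choice to parallelize over energy groups is an assumption made for the sketch, not a statement about GTRAN2's actual loop structure.

```python
# Schematic: ray tracing once, per-group collision probabilities in parallel.
from concurrent.futures import ProcessPoolExecutor
import math

def trace_rays(geometry):
    """Expensive, geometry-only step: done once, reused for all groups."""
    return {"chords": geometry}                      # placeholder chord data

def collision_probabilities(chords, sigma_t_group):
    """Per-group step: depends only on the cached chords and the group's
    total cross section (placeholder formula, not the real CP kernel)."""
    return [math.exp(-sigma_t_group * c) for c in chords["chords"]]

if __name__ == "__main__":
    chords = trace_rays([0.5, 1.0, 1.5, 2.0])        # once per geometry
    sigma_t = [0.1, 0.3, 0.8, 1.5, 2.4]              # one value per energy group
    with ProcessPoolExecutor() as pool:
        per_group = list(pool.map(collision_probabilities,
                                  [chords] * len(sigma_t), sigma_t))
    for g, probs in enumerate(per_group):
        print(f"group {g}: {probs}")
```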

Collaboration with utility companies and universities is now under way to perform physics analyses with Argonne codes, such as VIM and GTRAN2, in Monte Carlo depletion, transient and safety analysis, and fuel management. The use of high-performance parallel computing will allow these rigorous codes to provide useful results within the limited time constraints imposed by the plant operation.

3D Thermal Hydraulic Simulation of Reactor Components and Systems

Argonne National Laboratory has had a long history of developing general-purpose 3D thermal-hydraulic codes to provide detailed simulations of a wide variety of nuclear reactor components and systems. These activities have resulted in the development of the COMMIX [4] family of codes, which have evolved in response to the needs of new reactor technologies and as the capabilities of advanced computer architectures and processing speeds have increased. The COMMIX codes, in continuous development since the mid-1970s, have been designed to address thermal-hydraulic issues arising in the safety analysis of both commercial and advanced reactor designs.

The codes in the COMMIX series are 3D transient, global analysis models that allow the prediction of temperature, velocity, pressure, and phase distribution in separate power plant components (e.g., heat exchangers or reactor plena) or in a group of coupled components such as a reactor primary system. They are based on a porous-media formulation that allows the modeling of flow effects caused by complex internal geometry through the use of distributed and directional porosity, distributed flow resistance, and local sources and sinks. The full conservation equations of mass, momentum, and energy are solved as a boundary value problem in space and as an initial value problem in time. Although all the COMMIX versions have the capacity to solve most thermal-hydraulic problems that could arise in the design and safety analysis of reactor systems, they have been individually tailored to some subclass of problems for the sake of general efficiency. For example, COMMIX-1AR/P [3] is an extension of COMMIX that allows the treatment of multiple fluids, includes pumps, allows radiation heat transfer between submerged solids, and provides a more efficient numerical treatment. COMMIX-1D, now under development, contains many special features relating to the treatment of air, steam, and hydrogen in a reactor containment vessel. Another specialized version, COMMIX-PPC, is being developed specifically for the analysis of power plant condensers and shell-and-tube heat exchangers.

The use of parallel computational platforms is of significant interest for the COMMIX codes, which have always placed extreme demands on computer resources. This trend is expected to continue as detailed studies of the transient behavior of the next generation of LWRs become necessary. By their very nature, such evolutionary and revolutionary designs have greatly extended the length of reactor transients and, in consequence, placed severe demands on computational resources. Passive safety features that involve transition to natural convection and subsequent decay heat removal by natural circulation will place extreme demands on thermal-hydraulic simulation. This is an area in which the COMMIX codes should prove invaluable, since many of the calculations done in the past have involved buoyancy-driven flows, the study of flow stability and stratification, and other situations in which the driving forces are created by small temperature variations. In addition, such calculations have been extensively validated by comparison with experiments such as those performed by Kasza et al. [10].

In order to study the advantages of multiprocessor computing, a shared-memory parallel implementation of the COMMIX-1AR/P thermal-hydraulics computer program was developed as part of the Advanced Software Development and Commercialization (ASDAC) project, which involved computer scientists and engineers at the University of Illinois at Urbana-Champaign and ANL [1].

The goal of the project was to develop a production version of the program that would run on a shared-memory multiprocessor Cray X-MP/Y-MP-class computer while preserving the functionality of the production program. It was recognized that many person-years of effort have been invested in the development of COMMIX and that commercial benefit would be best achieved by preserving the basic functionality and feel of the code. For this reason, efforts concentrated on modifying the existing Fortran code rather than developing a new prototype for COMMIX.

The code was ported to the various target computer systems, and several data sets that would exercise critical sections of the COMMIX program were selected. These data sets were then employed to establish the baseline performance of the original ported code and in subsequent evaluation of the restructured codes. Performance analysis tools were used to identify critical code segments that would be likely candidates for automatic or manual restructuring and/or algorithmic revision. This process was repeated several times during optimization. Initial studies were performed with several small data sets, which provide detailed timing information used to locate critical code segments and measure performance. An industrial-strength data set obtained from Commonwealth Edison was used in the latter stages of the project. This data set was used to determine the temperature distribution within a nuclear reactor vessel following a PWR steam line break accident and is described in detail in the following paragraphs.

The PWR steam line break accident was initiated by the rupture of one of the main steam lines, typically at hot zero power conditions. This resulted in a rapid cool-down of all steam generators via cross-connect steam lines until main steam isolation valve closure. This was followed by an extended blowdown of a single steam generator, producing an asymmetric temperature distribution in the primary system. COMMIX is used to predict this temperature distribution, which is then used to derive the power distributions for use in Departure-from-Nucleate-Boiling calculations. The COMMIX model included the cold leg entrance into the downcomer, the downcomer, the lower plenum, and the core. These were modeled using a cylindrical geometry with 13,104 computational cells. Internal cells were used in the model to simulate the core walls and the curvature associated with the bottom of the vessel. Figure 4 shows an isothermal surface of the cool plume of water entering one of the four legs of the reactor, as calculated by COMMIX.

The colder water flows into the inlet, down through the downcomer, and rises in the core, where it eventually exits at the top. Although COMMIX is a mature code that has been ported to a variety of computer architectures, it was necessary to develop a portable version of the code that would compile on Sun workstations, the Alliant FX/80, and the Cray Y-MP. A set of benchmark calculations was run with the original code on each of the machines to establish baseline performance and to locate critical code segments. The results of our baseline benchmarks were similar for the Cray and the Alliant: the bulk of the time was spent in routines constructing the algebraic equations for the momentum and energy equations (62%) or in routines solving these algebraic equations (30%).

Early experience compiling the original code to run concurrently on the Cray X-MP/48 or the Alliant FX/80 was not encouraging. Both the Cray and Alliant compilers have built-in features that recognize parallel constructs and interpret compiler directives to generate concurrency, but a penalty has to be paid for communication between processors. The original code ran at only 0.4 times its single-processor speed on 4 processors of the Cray X-MP/48 and was sped up by a factor of just 1.2 on 8 processors of the Alliant FX/80. There are many reasons for this poor performance. On the Cray, much of the time was spent waiting for the processes to synchronize. The performance of a single processor of the Cray is about 24 times faster than that of the Alliant FX/80; for that reason, the penalty in terms of time waiting for system resources and communications between processes is a much larger percentage of the computation time on the Cray.
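
These results are consistent with a simple cost model in which the compute work divides among processors but each time step carries a fixed synchronization and communication charge; when that charge is comparable to the per-step compute time, adding processors makes the run slower. The numbers below are illustrative only, not the measured Cray or Alliant costs.

```python
def speedup(t_compute, processors, t_comm_per_step, steps):
    """Toy model: perfectly parallel compute plus a fixed per-step
    synchronization/communication cost paid on every step."""
    t_parallel = t_compute / processors + t_comm_per_step * steps
    return t_compute / t_parallel

# Illustrative numbers only: heavy per-step overhead gives a speedup below 1.
print(speedup(t_compute=10.0, processors=4, t_comm_per_step=0.02, steps=1000))  # ~0.44
```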

Most of the success at optimizing COMMIX was the result of rewriting and reorganizing the most computation-intensive subroutines. This hand-restructuring is labor-intensive and time-consuming. Examination of the routines that construct the algebraic equations for the momentum and energy equations revealed complex logic and indirection that impeded vectorization and autotasking. A single integer array was employed to encode several types of boundary conditions, and its values were used as pointers when negative. The use of arrays as pointers is called indirection. Indirection is often effective, but current vectorizing compilers can optimize only one level of straightforward indirection. Rewriting these routines with separate loops, flags, and pointers for the various types of boundary conditions resulted in a speedup factor of 7.5 on the Cray Y-MP for the construction of the energy equation.
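
The restructuring idea, shown here with placeholder physics rather than the COMMIX equations, is to replace a single branchy loop driven by an encoded boundary-condition array with separate, straightforwardly vectorizable passes over precomputed index sets, one per boundary-condition type.

```python
# Before/after illustration of removing branchy indirection from a loop.
import numpy as np

n = 100_000
bc_type = np.random.randint(0, 3, size=n)    # 0: interior, 1: fixed value, 2: flux
phi = np.random.rand(n)
src = np.random.rand(n)

def build_rhs_branchy(bc_type, phi, src):
    """One loop with a branch per cell -- hard for a vectorizing compiler."""
    rhs = np.empty_like(phi)
    for i in range(len(phi)):
        if bc_type[i] == 0:
            rhs[i] = src[i]
        elif bc_type[i] == 1:
            rhs[i] = phi[i]
        else:
            rhs[i] = src[i] + 0.5 * phi[i]
    return rhs

def build_rhs_split(bc_type, phi, src):
    """Separate passes per boundary-condition type, each a simple vector op."""
    rhs = np.empty_like(phi)
    interior, fixed, flux = (np.where(bc_type == t)[0] for t in (0, 1, 2))
    rhs[interior] = src[interior]
    rhs[fixed] = phi[fixed]
    rhs[flux] = src[flux] + 0.5 * phi[flux]
    return rhs

assert np.allclose(build_rhs_branchy(bc_type, phi, src),
                   build_rhs_split(bc_type, phi, src))
```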

Studies of concurrent performance were made with the partially optimized code. The curves labeled with a superimposed asterisk in Figure 5 show the concurrent performance of the code on the Cray Y-MP/48 and the Alliant FX/80 after the restructuring of the routines that construct the energy equations. The performance improves only slightly on the Cray, and there is a speedup of about 2.2 on the Alliant. The performance was improved on both the Cray and the Alliant when the X-, Y-, and Z-momentum routines were restructured to run concurrently: the speedups for 4 processors on the Cray and 8 processors on the Alliant were 2.0 and 3.0, respectively. It is worth noting that the curves for the Alliant tend to flatten with increased numbers of processors.

In the concurrent mode, the restructured COMMIX spends 30% of the time constructing equations on the Cray and 10% of the time constructing equations on the Alliant; the bulk of the time is spent solving the algebraic equations. This is in contrast to the situation before restructuring, when the bulk of the time was spent constructing the equations.

Much of the effort on COMMIX has concentrated on improving the subroutines that construct equations. These algorithms involve local calculations and nearest-neighbor communications; for that reason they lend themselves well to vectorization and should be scalable on massively parallel machines. In our studies we have refrained from changing the overall program organization of COMMIX and the equation-solving routines. Reorganization of the program and underlying data structures to enable domain decomposition would provide a mechanism to achieve further improvements in performance. New scalable methods are becoming available for solving algebraic systems on massively parallel computer systems [8]. These solvers have exhibited GFLOPS (billions of floating-point operations per second) performance on the 528-processor Intel Delta. The performance of COMMIX running the Commonwealth Edison problem on the Cray Y-MP is about 72 MFLOPS (millions of floating-point operations per second). For this reason it is reasonable to expect a variant of COMMIX to run several hundred times faster on the current generation of massively parallel computer systems.

Conclusions

Over the years, several families of large thermal-hydraulic and neutronic codes have been developed at ANL, providing detailed computer simulation of the complex phenomena that occur in nuclear reactors and nuclear power plant systems. These codes have achieved national and international recognition and have been used extensively in the design and analysis of many types of reactors. Work is currently under way to restructure these large computational tools to allow them to take advantage of the capabilities of distributed computing networks and massively parallel computers. The advent of high-performance parallel computing holds the promise of dramatically increasing the performance of computer simulations, allowing the use of complex thermal-hydraulic and neutronic models in real-time simulations and parametric studies for the design and safety analysis of nuclear power plants.

The adaptation of the existing detailed thermal-hydraulic and neutronic codes to high-performance parallel computational platforms makes possible the ambitious goal of a Numerical Nuclear Power Plant, which would simulate in detail the response of the plant, including the reactor, to specified transients. The Numerical Nuclear Power Plant is a central element of the Nuclear Power Analysis Center currently being organized by a consortium that includes Argonne National Laboratory and a group of universities and utility companies.

References

[1.] Aslam, S., Gallopoulos, E., Ham, M., Canfield, T., Minkoff, M., and Blomquist, R. Experiments in thermal hydraulics simulations: Multiprocessing COMMIX. In Proceedings of the Fifth SIAM Conference on Parallel Processing for Scientific Computing (1992).

[2.] Blomquist, R.N. VIM. In Proceedings of the International Topical Meeting on Advances in Mathematics, Computations, and Reactor Physics (Pittsburgh, Pa., Apr. 28-May 2, 1991).

[3.] Blomquist, R.N., Garner, P.I., and Gelbard, E.M. COMMIX-1ARP: A three-dimensional transient single-phase computer program for thermal-hydraulic analysis of single and multicomponent systems. ANL-90/45, Argonne National Laboratory, Argonne, Ill. (July 1991).

[4.] Domanus, H.M., et al. COMMIX-IC: A three-dimensional transient single-phase computer program for thermal-hydraulic analysis of engineering systems. NUREG/CR-5649, Argonne National Laboratory, Argonne, Ill. (Nov. 1990).

[5.] Dunn, F.E., Prohammer, F.B., Weber, D.P., and Wilim, R.B. The SASSYS-1 LMFBR systems analysis code. In Proceedings of the International Topical Meeting on Fast Reactor Safety (Knoxville, Tenn., Apr. 1985).

[6.] Edenius, M., et al. The physics model of CASMO-4, Advances in mathematics, computations, and reactor physics. In Proceedings of the International Topical Meeting (Pittsburgh, Apr. 28-May 2, 1991).

[7.] Gross, K.C., Hawkins, R.E., and Nickless, W.K. Operator decision aid for breached fuel operation in LMRs. Trans. Am. Nucl. Soc. 63 (1991), 118-119.

[8.] Jones, M.T., and Plassmann, P.E. An efficient parallel iterative solution for large sparse linear systems. In Proceedings of the IMA Workshop on Sparse Matrix Computations: Graph Theory and Algorithms. Univ. of Minnesota, Minneapolis, 1991.

[9.] Liebfritz, D.W., Gross, K.C., and Nickless, W.K. P4: A portable message-passing system for distributed parallel computing in Fortran. In Proceedings of the International Topical Meeting on Advances in Mathematics, Computations, and Reactor Physics, Vol. 4 (Apr. 1991), pp. 20.2 3-1--20.2 3-4.

[10.] Oras, J.J., Kasza, K.E., and Kuzay, T.M. Overview of the ANL Advanced LMR System Thermal-Hydraulic Test Program Supporting GE/PRISM and RI/SAFR. In Proceedings of the Third International Symposium on Refined Flow Modeling and Turbulence Measurements (Tokyo, July 1988), International Association for Hydraulic Research.

[11.] Tentner, A.M., and Wider, H.U. LEVITATE-A mechanistic model for the analysis of fuel and cladding dynamics under LOF conditions. In Proceedings of the International Meeting on Fast Reactor Safety Technology (Seattle, 1979).

[12.] Tentner, A.M., et al. The SAS4A LMFBR Whole Core Accident Analysis Code. In Proceedings of the International Topical Meeting on Fast Reactor Safety (Knoxville, Tenn., Apr. 1985).

[13.] Till, C.E. The IFR: Status and prospects. Trans. Am. Nucl. Soc. 66 (Chicago, Nov. 1992), 351.

[14.] Vujic, J.L. GTRAN2: A general geometry transport theory code in 2D. In Proceedings of the International Topical Meeting on Advances in Mathematics, Computations, and Reactor Physics (Pittsburgh, Apr. 28-May 2, 1991).
