
The Case of the Business Systems Modernization: A Study of a Successful MAIS Partnership

Successful implementation of a major automated information system (MAIS) acquisition program requires different organizations with seemingly distinct needs, expectations, and goals to work together to reach a common goal--namely, a better tool that helps users accomplish their missions. A MAIS acquisition program is an automated information system whose program costs exceed $32 million in any single year, whose total program cost exceeds $126 million, whose total life-cycle cost exceeds $378 million, or that has been designated by the Milestone Decision Authority as a special interest program--with all costs expressed in fiscal year 2000 constant dollars. Although implementing a MAIS involves numerous stakeholders, three organizations in particular--the program management office (PMO), the designated operational test agency (OTA), and the Office of the Director, Operational Test and Evaluation (DOT & E)--must work especially closely to bring the system to operational form.
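
As a rough illustration of how the dollar thresholds above combine, the short Python sketch below expresses the designation rule as a single check. It is offered only as a reading aid, not as an official tool; the function name, parameters, and special-interest flag are hypothetical, and all amounts are assumed to be fiscal year 2000 constant dollars.

# Illustrative sketch (assumption: a program is a MAIS if it exceeds any one
# threshold or is designated special interest; figures are FY 2000 dollars).
SINGLE_YEAR_THRESHOLD = 32_000_000     # program cost in any single year
TOTAL_PROGRAM_THRESHOLD = 126_000_000  # total program cost
LIFE_CYCLE_THRESHOLD = 378_000_000     # total life-cycle cost

def is_mais(yearly_costs, total_program_cost, life_cycle_cost,
            special_interest=False):
    """Return True if a program meets any of the MAIS criteria."""
    return (any(cost > SINGLE_YEAR_THRESHOLD for cost in yearly_costs)
            or total_program_cost > TOTAL_PROGRAM_THRESHOLD
            or life_cycle_cost > LIFE_CYCLE_THRESHOLD
            or special_interest)

# Example: no single year tops $32 million, but the total program cost
# exceeds $126 million, so the program still qualifies as a MAIS.
print(is_mais(yearly_costs=[20e6, 25e6, 30e6],
              total_program_cost=130e6,
              life_cycle_cost=300e6))  # True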


Different Perspectives

Each of these organizations might have a different perspective on how schedule, cost, and performance tradeoffs should be managed, and these differences need to be understood and addressed. From the PMO perspective, the OTA often seems to slow the program down and add time and money because of its desire to perform operational test and evaluation beyond the developmental test and evaluation, which generally is performed only to satisfy developmental requirements. The PMO might view DOT & E as a bureaucratic oversight organization whose sole purpose seems to be prolonging the acquisition process.

On the other hand, the OTA might think the PMO has failed to demand sufficiently robust developmental test and evaluation, so the OTA might find problems during operational test and evaluation that should have been discovered in the developmental test and evaluation stage, making the operational tests last longer. The OTA might feel that DOT & E sometimes dictates too many of the testing details, especially in milestone-related documents like the Test and Evaluation Master Plan.

Finally, DOT & E's perspective on the other organizations might include the belief that the PMO is too willing to sacrifice performance in order to keep cost and schedule in check and, thus, can't be trusted to do things right. As for the OTAs, DOT & E might think that although they try hard, OTAs need firm guidance and assistance to successfully plan and execute operational tests.

What this dynamic usually yields is three organizations with unique motivations and perspectives working together grudgingly because they have to, not because they want to.

But it doesn't have to be this way. These organizations do not need to be natural antagonists. They can be cooperative partners moving toward a common goal--namely to provide better tools for the warfighters. But how can the organizations break down barriers and foster cooperative relationships that best serve the warfighters and their support staffs? We can answer this question using a recent successful acquisition as the model.

The Case of the Business Systems Modernization Tool

In the late 1990s, the Defense Logistics Agency (DLA), headquartered at Fort Belvoir, Va., began an ambitious replacement of its legacy accounting, order processing, and billing systems with a new tool called Business Systems Modernization, or BSM. Because of the costs associated with the implementation of BSM, it was declared a MAIS program and placed under DOT & E oversight. OTA responsibilities were assigned to the Joint Interoperability Test Command (JITC) at Fort Huachuca, Ariz. The Washington Operations Division of JITC was also assigned to perform interoperability analyses and provide recommendations to the Joint Staff regarding interoperability certification of the system.

After successfully completing the developmental test and evaluation as well as performing the necessary business process re-engineering to adopt the business practices provided by the enterprise resource planning (ERP) software, BSM was awarded Milestone C in 2002. The core BSM system was approved for limited fielding to about 400 DLA employee users.

At that time, the DLA program management office was convinced that, since the developmental test and evaluation had indicated no problems with the functionality of the software, operational testing would be a simple verification that all was well. The first increment for BSM was tested by JITC in late 2002. Unfortunately, following the testing, DOT & E determined that BSM was not operationally effective or suitable to support DLA's mission based on the operational performance criteria determined by DLA. Operational effectiveness is the overall degree of mission accomplishment of a system when used by representative personnel in the planned environment. Operational suitability is the degree to which a system can be satisfactorily placed in field use, with consideration given to reliability, availability, maintainability, compatibility, interoperability, information assurance, safety, human factors, manpower supportability, logistics supportability, documentation, and training requirements.

For many programs, DOT & E's negative assessment would have been followed by intense disagreements between the PMO (who would suspect that the operational testing was flawed), the OTA (who would argue that developmental test and evaluation should have caught and fixed the problems discovered in operational testing), and DOT & E (who would feel that more oversight would be needed to make sure the system eventually worked the way it should). Those arguments didn't happen. Instead, the DLA program manager, who observed much of the operational test and evaluation, agreed with both the JITC and DOT & E assessments and immediately devised a plan to correct the deficiencies found during the testing.

An Open, Three-Party Relationship

The next thing the DLA PMO did was to institute a continuous dialogue with JITC regarding the operational test and evaluation schedule and scope. The program manager also instructed the PMO staff to be open with JITC and DOT & E about issues affecting the program, whether the issues were directly related to testing or otherwise. The bottom line was that from that point on, there was total transparency among these organizations regarding the state of the program.

JITC responded to this new relationship by working hand in hand with the PMO to help refine system requirements that were either ill-defined (not testable) or no longer needed because they were holdovers from legacy business processes not applicable to BSM. Because BSM requirements were now stable and the PMO displayed exceptional acquisition discipline, DLA was allowed to make minor changes to the approved operational requirements document without going through a formal and time-consuming change process. This expedited communication among the users, the program office, and the testers and ensured all were on the same page regarding expectations.

The Second Round of Testing

The initial operational test and evaluation of the modified BSM was successfully conducted in late 2004, with the system determined by DOT & E to be operationally effective and potentially suitable. However, there were some issues found in the areas of system usability and training. The PMO and the DLA Program Executive Office embraced the changes recommended by DOT & E in these areas and modified the user interface and training plan accordingly.

Following the initial operational test and evaluation, a major revision to the software was released and operationally tested by JITC in seven separate test events over the course of two years (instead of one large, all-encompassing test after the last release) to ensure that each rollout met user needs and was operationally effective and suitable. The benefit of this testing approach was that it allowed issues to be addressed quickly so the PMO could make course corrections if needed.

The effective communication established after the first test event in 2002 continued through this final round of testing as well. The PMO, DOT & E, users, and JITC engaged in frequent teleconferences during and after each day of testing to ensure that all stakeholder questions were addressed in near real time.

How'd They Do It?

Some obvious questions to ask are "what worked?" and "why?" Let's look at the answers:

* DLA leadership recognized the importance of the operational test and evaluation after BSM did not meet operational performance test criteria in the first test in 2002. Their response was to acknowledge system issues rather than argue with testers, and to institute corrective actions for those issues.

* There was continuity in the personnel involved. The JITC test director had many years of experience with operational testing, and this same person was involved throughout all of the operational testing and evaluation. The original program manager for BSM maintained involvement in the program after being assigned as the DLA program executive officer. The DOT & E action officer originally assigned to monitor BSM provided oversight from the program's beginning to end. This continuity of personnel added stability and constancy to the acquisition and operational test and evaluation processes, and it gave the PMO confidence that they would get the same answer tomorrow as they got today.

* DOT & E provided oversight, not micromanagement. DOT & E recognized it was dealing with professionals who should be treated as such, and who might need advice but not dictation.

* DLA recognized the importance of organizational change management and the need to reorganize to accommodate the business processes that come with the enterprise resource planning solution--the true evidence of business process re-engineering. This change brought the users on board as true partners in the acquisition, not as mere recipients of the software. The authors all agree that implementing an ERP system that crosses an entire organization is daunting and requires not only completely replacing the system but also transforming the business processes and the way the organization operates. Nearly everyone's job is affected, so the users need to be a part of the transformation, not have it imposed on them.

Another question to ask is "how do we bottle the BSM success?" While it is true that some of the success was due to the people who were in various positions at the three organizations, some aspects of the BSM success were independent of the personnel.

* The BSM system was fielded in small, manageable increments with a well-defined rollout plan rather than in large blocks of capability and/or users. This allowed the PMO to better manage the expectations of users (since the users knew when they would get the tool), and to better facilitate test planning, conduct, and reporting.

* The PMO and JITC used a DOT & E policy, "Guidelines for Conducting Operational Test and Evaluation for Software-Intensive System Increments," to determine testing requirements for limited initial system deployments, both before and after initial operational test and evaluation. The policy helped them scope an adequate test that identified operational issues while minimizing test resources and speeding up reporting and feedback.

* The PMO used operational test and evaluation results to make changes to the program acquisition plan rather than ignore the results. This is what testing is supposed to do. It should be a learning tool for all stakeholders to provide a better system for the user.

The authors feel that the success of the BSM acquisition can be replicated in other MAIS programs, especially with ERP acquisitions, if the following basic tenets are incorporated into the program test and acquisition plans:

* DOT & E and the OTA should engage the PMO in the test and evaluation planning of a program early in its development cycle so all parties can work together to devise the most effective test-and-evaluation strategy.

* Whenever possible, the program should be developed and fielded in small increments and provided to a limited number of users for mission accomplishment and for operational assessment purposes. When the functionality provided by these small increments reaches a critical mass (in terms of both user base size and overall system capability), the OTA should conduct initial operational test and evaluation.

* The PMO should use the results of the operational test and evaluation to provide course corrections and system changes to improve the performance of the system in support of the full fielding decision.

* Program managers should be encouraged to adapt to evolving user needs, even if it means schedule adjustments and acquisition program re-baselining. Leadership should reward program managers' decisions to be flexible instead of penalizing them. Moving ahead with an acquisition approach just to stay on schedule or within budget may not deliver what the user needs and will cost more in the long run.

* For ERPs and other programs that require business process re-engineering (BPR) to be successful, user organizations should demonstrate an executable BPR plan before Milestone C is granted. Fielding such systems with only "trust me" as evidence is a recipe for failure.

While some in the acquisition and testing communities might view the early BSM program results as less than successful because of failed tests and cost and schedule adjustments, the lessons learned from those early results were incorporated into the successful program plans that moved forward. In our opinion, the money and time were well spent: the user community ultimately benefited from an operationally effective and suitable system that was implemented during, and continues to operate successfully in, a wartime operations tempo.

The authors welcome questions and comments and can be contacted at david.falvey@dla.mil, austin.huangfu@osd.mil, and dcarlson@ida.org.

Falvey is the program executive officer of the Defense Logistics Agency and is a career logistician and IT program manager. Huangfu is a staff assistant for net-centric systems at the Office of the Director, Operational Test and Evaluation. Carlson supports DOT & E as project leader for major automated information system test and evaluation analysis at the Institute for Defense Analyses.
COPYRIGHT 2008 Defense Acquisition University Press
