DoD legacy systems; reverse engineering data requirements.
Over the years this base of information systems (IS) has been continuously modified to implement changing needs, including functional requirements, business rules, and data architectures. While few currently satisfy DoD-wide needs, some of these systems satisfy many department-wide requirements and most contain invaluable information.
DoD has recognized this problem for a long time. Drawing from DoD's primary mission--war fighting--lessons learned in a joint command post exercise in the early 1980s demonstrated the need to integrate the computer systems used to support future joint exercises and especially joint operations. The Required Operational Capability (ROC) document produced to initiate this project initially called the integrated systems the Joint Operations Planning System (JOPS). When the execution dimension was added in the late 1980s, JOPS became the Joint Operations Planning and Execution System (JOPES).
The importance of the data dimension of JOPS/JOPES integration was recognized from the beginning. All of the efforts to capitalize on data requirements in JOPS/JOPES collective systems and files could be considered "reverse engineering data requirements" from legacy systems: the challenge we have before us today. However, at the time there was no formal methodology or framework to follow.
The first data requirements product of the JOPES ROC was a data requirements document (DRD) report. With the emergence of data modeling, it was determined that the DRD should be converted into a model. This initial modeling effort resulted in a hybrid logical/physical data model called a "logical database design" (LDBD), developed during 1986 and 1987. The LDBD represented approximately 2,000 data elements, all carefully traced from physical systems and files to the model. These traced or mapped data requirements have been and are currently documented in the WISDIM (War Fighting and Information Systems Dictionary for Information Management) data dictionary.
The next step for JOPS/JOPES was to remove the physical (schema) characteristics of the logical database design, producing a technologically independent logical data model (LDM). The first JOPS/JOPES LDM was published in 1990. Each year a refined version has been released. The 1992 version of the JOPES LDM contained 468 entities and 1,994 data elements; the 1993 version had 362 entities and 1,318 data elements. The 1992 and 1993 versions have been used extensively for projects sponsored by the Corporate Information Management Initiative. One of the efforts supported by the JOPES LDM was the development of the "starter set" of DoD data elements (published in December 1993) to "jump start" the DoD data standardization program. The 1993 JOPES LDM also served as the analysis baseline for the development of the Initial Global Command and Control System (GCCS) data architecture (published in January 1994). This architecture will guide the migration of current JOPES systems and files from a mainframe to a client-server environment. It will also guide the integration of additional systems and files designated as a part of the GCCS required functionality.
Data architectures are developed to guide the development and evolution of both data models and databases. IEEE Standard 610.12 defines an architecture as the "structure of components, their interrelationships, and the principles and guidelines governing their design and evolution over time." In order to comprehend the purposes of data architectures, it is important to understand the logical and physical dimensions (see Figure 1). Details of other DoD architectural development projects can be obtained from the authors upon request.
Based on this definition and information engineering principles, the DoD has developed a method of maximizing the results of data reverse engineering projects. The structuring of data components based on their dependency relationships in logical data models produces an optimal order to follow in system and database design. As applied to GCCS and other DoD systems migration projects, the principles and guidelines of this methodology apply to the design/development of:
* data models (conceptual or logical dimensions)
* transformation models (intermediate models between logical and physical dimensions)
* physical design and schemas (physical dimensions)
* database and applications development
* data/system migration plans
* data/system testing plans
* data/system implementation plans
The Center for Information Management/Data Administration Program Management Office (CIM/DAPMO) recognized the need to more efficiently recover useful data requirements embedded in legacy systems and files. DAPMO recognized the potential of applying reverse engineering analysis and technologies to recover and reuse functional requirements, and potentially system components, to develop an information base for data migration planning. CIM/DAPMO initiated a project to develop a reverse engineering framework (consisting of procedures, methods, and a tool set) and to validate the framework in a series of reverse engineering projects using prominent DoD legacy information systems. The project objectives were to:
1. develop a systematic framework for reverse engineering databases and data structures to recover business rules, domain information, and functional requirements, and to use these products to develop normalized logical data models, construct data architectures, and document data requirements;
2. validate and refine the framework in a DoD-specific context by reverse engineering a number of DoD legacy systems;
3. develop a means of scoping and estimating future data reverse engineering project costs; and
4. develop and capture useful metrics to assist decision makers in determining the economic feasibility of future reengineering (reverse and forward engineering) efforts as the DoD begins the process of consolidating and modernizing its inventory of IS.
Figure 2 represents an overview of the DoD reverse engineering concept, showing how we extract logical data models from multiple migration systems, databases, and files to reduce redundancy, maintenance, and inconsistencies in our current systems. This supports data standardization and data migration. As we validate the business rules contained in the system's data management system and applications code, we trace the current multiple data representations (within and among systems) to the single-concept data standard necessary in the DoD to integrate data requirements.
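The tracing described above can be sketched as a mapping from many legacy representations of one concept to a single standard data element. This is a minimal illustration only; the system names, field names, and the standard element below are invented, not drawn from the actual JOPES or WISDIM inventories.

```python
# Hypothetical sketch: tracing multiple legacy representations of one
# concept to a single-concept standard data element. All identifiers
# here are invented for illustration.

# Each legacy system may name and format the same concept differently.
legacy_representations = {
    "SYS_A": {"EMP-SSN": "social security number, 9(9) display"},
    "SYS_B": {"SSAN": "social security number, packed decimal"},
    "SYS_C": {"SOC-SEC-NO": "social security number, char(11) with dashes"},
}

# The single standard element the legacy fields all map to.
standard_element = "PERSON-SOCIAL-SECURITY-NUMBER"

# The mapping itself is the reverse engineering product: every legacy
# field is traced to exactly one standard element.
element_mapping = {
    (system, field): standard_element
    for system, fields in legacy_representations.items()
    for field in fields
}

for (system, field), std in sorted(element_mapping.items()):
    print(f"{system}.{field} -> {std}")
```

In practice, as the article stresses later, establishing such a mapping requires validating business rules and functional dependencies, not just matching names.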
Impact of Legacy Environment Complexity
Operational complexity. Historically, DoD has allowed worldwide subordinate-level organizations to maintain separate, functionally duplicative systems supporting operational requirements for various DoD components such as the Army, Navy, Air Force, Marine Corps, Joint Chiefs of Staff, Unified and Specified Commands, and Agencies. To collect, analyze, or process data at higher organizational levels, management collected the data from lower levels. The collection process depends on subordinate organizations feeding information upward--often manually. The format of the upward feed must be meticulously specified at each level. Outside of limited, specific mission-critical instances, the consistency and accuracy of data flows has been practically impossible to maintain and control. The impact of operational complexity on reverse engineering projects includes the following:
1. The inventory of physical evidence is difficult to collect and analyze. Even if documentation exists, it is often outdated or of poor quality. In addition, personnel with the required knowledge are often no longer available.
2. The existing interface data elements defined for transferring data instances do not represent completely identified, sharable data structure and semantic requirements among systems. To obtain a complete picture of the data sharing requirements, reverse engineering analysis must determine not only how the interface data elements were generated and used among systems, but also identify other noninterface data elements that are synonymous among systems and are therefore sharable.
3. Even when a specific system is designated to replace a group of legacy systems in a functional domain, the implementation will not be successful unless the unique critical requirements of the legacy systems are identified, documented, and addressed by the replacement system.
Technology complexity. The current DoD legacy systems inventory includes obsolete electronics, technology, and system designs--up to 30 years old and poorly documented. Characteristics of the five systems reverse engineered in phase 1 of this project are shown in Table 1. Many of these decades-old designs are pushing the limits of their engineered capabilities, creating reliability and maintainability problems. Moreover, they cannot be readily adapted to open architectures and current technologies. Systems designated as migration systems capable of serving the entire DoD now have added functional and technical requirements.
For instance, one of the systems uses application program-managed memory overlays. This was originally an innovative use of flat-file technology, using a table-driven approach to separate process from data--in the same manner that database technology does today. However, a fixed record length limits the number of fields available to accommodate growing user requirements. Currently this system is using the same field for multiple uses with different meanings, depending on the user group's needs. Allowing individual user discretion rather than enterprise standards permitted a restrictive physical limitation to temporarily serve more customers, but this approach is reaching its design and operational limits.
Administrative complexity. In most areas of the DoD, systems exist at Stage 3, "control," of Nolan's framework describing organizational information processing maturity. Nolan stated, "In the stages of control and integration, the dominant forces have to do with organizational discipline and don't relate very closely to technology." The coordination, negotiation, management approval, and buy-in processes for reverse engineering are currently exceptionally complicated and time consuming, but critical to the success of reverse engineering projects. This is due to the existence of numerous system "stakeholders" (e.g., planners, owners, builders, maintainers, existing and future users). Table 2 illustrates the variety and number of people who required coordination and briefings for a single reverse engineering project. In this example, we were forced to coordinate with points of contact from 11 different organizations. We used these "Client Mazes" to track project communication. The impact of administrative complexity on reverse engineering projects includes:
1. The strategic objectives of the system with respect to each system stakeholder must be identified and prioritized. The reverse engineering objectives and priorities must be synchronized with and agreed to by the stakeholders.
[TABULAR DATA OMITTED]
2. Without coordinated "buy-in," reverse engineering projects cannot be successful. High-level management approval is necessary but not sufficient. Mid-level management and systems personnel must also understand and support the reverse engineering project objectives or else "rice-bowl syndromes" may jeopardize project success.
3. The negotiation, planning, and buy-in processes must be done "before" the project starts.
Figure 3 depicts the DoD's approach [4,5] to eliminate redundant systems. Functional steering committees select "migration" systems from the existing IS inventory. Migration systems should implement the majority of functional and data requirements for a class of legacy systems having the same, similar, or overlapping information and/or domain. The migration systems eventually become functional area "target" systems by:
* adding the remaining essential functional requirements for other legacy systems--before phasing them out
* separating data from process
* incorporating DoD standard structures for data reuse and data sharing
Most of the legacy systems were developed without the process models or data models now needed to support data standardization. The DoD now requires target systems to use logical data models to represent data requirements. Process and data models must be developed to represent the policies, strategies, and tactics of organizational operation. Under the data reverse engineering framework, development of the models includes identification, refinement, validation, and linking of all business functions, policies, rules, and activities to data elements, model content, and physical evidence. This approach ensures all data structures can be identified and linked to supported processes, and it minimizes data impact when processes change. Reverse engineering in this framework results in "engineered data" and directly supports DoD data migration and data standardization efforts.
Using the ANSI three-schema architecture paradigm, Figure 4 shows the relationship of the reverse-engineered logical view (logical) to the physical and user (external) views of the target environment. Reverse engineering is the approach selected to derive logical data models from migration systems--even though this approach is generally used to optimize applications code design and streamline system operations. Using this approach the reverse engineering framework also identifies, extracts, and integrates the unique critical requirements contained in nondesignated legacy systems into the appropriate system migration path. As shown in Figure 5, these critical requirements (indicated by scattered dots in the legacy data "environment") will be incorporated into the migration system data requirements as a part of the data migration efforts.
To evolve a designed migration system to a target system, a migration system must be forward engineered using many reverse engineering products as well as additional requirements identified from the business reengineering activities. In addition, the target system must also support standard open systems architectures. Basing the forward engineering on the data models developed from the reverse engineering analysis makes the migration systems more adaptable and easier to maintain. Adopting an open systems approach should reduce maintenance costs and improve system reliability and maintainability.
Figure 6 depicts the various activities completed in the reverse engineering projects reported here. For each functional area, a set of IS was originally designed to satisfy component-specific operational requirements rather than DoD-wide strategic requirements. Reverse engineering processes recovered "as-is" requirements. Business reengineering activities conducted by functional area working groups have defined and will continue to define "to-be" global business process and logical data models (i.e., the operational requirements of "to-be" DoD-wide IS). Selected migration system data assets will be migrated (some enhanced) into these "to-be" systems.
Currently, there is no direct mapping between data elements and organizational business rules, business domain information, system functional requirements, functional dependencies, and organizational data distribution architectures. "As-is" data elements and their embedded business requirements are often in conflict with or insufficient to satisfy the "to-be" business requirements. Reverse engineering the "as-is" requirements of the migration systems is essential to recovering the associated business requirements at the operational, tactical, and strategic levels.
Recovered "as-is" requirements are then compared against the "to-be" business requirements to identify business requirement gaps during forward engineering. In addition to identifying business requirement gaps, technological gaps are also crucial to determining if "as-is" migration systems satisfy operational system performance requirements and maintenance cost constraints. The migration system architectures are evaluated against technical requirements to identify technical requirements gaps. As part of forward engineering, the business requirements gaps, technical requirements gaps, and data element quality are evaluated to determine the "migratability" and "integrability" of migration systems forming the basis of economic justification to forward engineer specific systems.
The reverse engineering program supports the integrated analysis and redesign/development activities required to modernize the selected migration systems. Due to the massive and complex nature of the DoD systems inventory, modernization must be conducted in multiple phases. The project validated and refined the reverse engineering framework using five selected DoD IS in three functional areas: personnel, pay, and health affairs. The overall thrust of the effort was to identify cross-functional "human being"-related data elements within DoD for integration purposes.
As shown in Figure 7, the Defense Civilian Personnel Data System (DCPDS) is in Personnel, the Defense Civilian Payroll System (DCPS) is in Pay, the Marine Corps Total Force System (MCTFS) is cross-functional between Personnel and Pay. The Composite Health Care System (CHCS), Coordinated Care Performance (CCP), and Medical Expense and Performance Reporting System (MEPRS) are all in Health Affairs.
Before starting the project, quantitative measures of system size and complexity were unknown. This was one of the reasons for running this project as a prototype effort before beginning full-scale reengineering efforts. During the initial phase of the project we developed a cost model for reverse engineering data requirements. We also recorded performance metrics and system complexity measures for each system. We used the metrics to continuously tune our cost model. The tuned cost model can now be used to estimate a full-scale data reengineering effort. The metric-based results become input to economic analyses to determine selection and economic evaluation criteria for future projects in other functional areas.
Project focus was on derivation of normalized, logical data models and standardization of data element names for generic elements (attributes) and prime elements (entities). The logical data models to support standardization of data elements were used to represent organizational business rules, business domain information, system functional requirements, functional dependencies, and system data architectures of the selected system. We also performed a limited cross-functional integration for a selected business domain (calculate pay) relevant to civilian personnel and civilian pay functional areas. In addition to the cross-system integration, specific objectives were:
* requirements analyses, reengineering analysis and design, rapid prototyping and testing of systems and subsystems, and consideration of data migration and integration issues
* an extendible inventory of migration system data assets
* a realistic and extendible approach for reverse engineering the remainder of DoD migration systems
* identification of automated tool requirements for use in other reverse engineering projects
The project also supported the extension of the DoD Enterprise Model that will feed the DoD Data Repository, in a format consistent with DoD Data Standardization procedures [5].
One of the challenges for cross-functional integration for a selected business function (e.g., calculate pay) is to identify what data elements are relevant to integration. We discovered that interface data elements are not the only relevant data elements for an integrated data model for personnel and pay systems. Figure 8 illustrates our approach to isolate the "calculate pay" data elements in order to construct a logical data model for shared information between civilian pay (DCPS) and civilian personnel (DCPDS) systems.
Since the decision of what to pay a person is represented in personnel policy and how to pay is a pay function, we started with DCPDS. The first step was to identify the system functions and data elements relevant to paying a person. These data elements were collectively called the "personnel pay-related domain" and included staffing, affirmative employment, job classification, and employee management relations. The second step was to identify the system functions and relevant data elements for calculating pay within DCPS. The third step was to compare the resulting data elements from the first two steps to identify shared data elements (i.e., the overlapping elements between DCPDS and DCPS for the "calculate pay" business function). For cross-functional integration among three or more systems, data elements were compared pair-wise and iteratively using the same technique illustrated in Figure 8 to identify shared data elements. In multiple-system comparisons, more extensive analysis is required because business domains, business rules, functional requirements, and functional dependencies must be cross-compared among systems at each stage to isolate not merely "shared" but "identical" data elements and structures.
Reverse Engineering Data Requirements Framework
As stated previously, the complexity of the legacy environment has great impact on the feasibility and success of a reverse engineering project. Management of these complexities is shown in the first four steps of the framework process depicted in Figure 9. Steps 5 through 10 outline the process for discovering data requirements and recovering normalized data models from the collected physical evidence. The recovered logical data model associates business rules, business domain information, system functional requirements, functional dependencies, and organizational data distribution architectures with data elements.
One of the essential outcomes of reverse engineering is a traceability matrix linking the data model components to the physical evidence supporting their existence. The traceability matrix is critical for validating the correctness of derived logical models from physical evidence. Later the traceability matrix can be used to identify the impacts of process change and to plan data migration from legacy to migration to target systems. Using CASE tools, the logical data models can be used to automatically create table structures for the selected DBMS, while the traceability matrix is used to develop the mapping required to migrate data from the legacy systems to the appropriate tables in the new system.
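A traceability matrix can be represented minimally as a mapping from each logical model component to the physical evidence supporting it; looking up the mapping in reverse gives the impact analysis described above. The component names and evidence items below are invented for illustration.

```python
# Minimal sketch of a traceability matrix: each logical model component
# is linked to the physical evidence supporting its existence.
# Identifiers are invented for illustration.

traceability = {
    # logical component -> physical evidence items supporting it
    "EMPLOYEE.GRADE": ["DCPDS record layout p.12", "COBOL copybook EMPREC"],
    "EMPLOYEE.STEP":  ["DCPDS record layout p.12"],
}

def impacted_components(evidence_item):
    """Impact analysis: which model components rest on this evidence?"""
    return sorted(c for c, ev in traceability.items() if evidence_item in ev)

print(impacted_components("DCPDS record layout p.12"))
```

In migration planning, the same matrix drives the reverse direction: given a legacy field, it identifies the target table and column its data should migrate to.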
Based on the characteristics of the systems we encountered, the key aspects of the technical approach included: 1) divide-and-conquer, 2) extraction of business rules from software and data structures, 3) model management, 4) configuration management, and 5) schema integration. Although software was divided into software modules used to implement various functional requirements, business functions were often implemented as part of several different software functions.
As part of the physical implementation, data structures also have multiple roles. Data structures directly extracted from software were often transient data variables used to store processing results from stored data elements. Processing would be an implementation of business rules or low-level mathematical algorithms (e.g., sort). Data structures are also used to hold data presented as online screens or reports and can be used to map to data elements stored in databases or files. Hence, data structures defined in software and data dictionaries, if they exist, may represent only some of the conceptual, external, or physical views of data elements.
The most difficult aspect of reverse engineering is discovering business rules and data entities from software and data structures. Massive quantities of data structures and the associated code force a divide-and-conquer approach to discover data elements and organize them into categories. Our approach is top-down, then bottom-up. During the top-down step we analyzed material relevant to the conceptual view (e.g., user screens, reports, policy statements). This helped to establish draft versions of high-level "as-is" business process and data model frameworks. These were used to quickly identify a set of conceptual buckets for "holding" relevant categories of data structures for a specific domain. We used data buckets (entities) derived from the business process model to partition the business data model into views.
Each data model view corresponds to a business process. The functional dependencies between views are inherited from the relationships between processes in the high-level process model. We also identified the nontransient data structures access (create, read, update, and delete) aspects of processes. We then derived more detailed logical data models and linked these data structures to the derived data entities using the traceability matrix. All data structures and data entities were linked to their associated operational, tactical, and/or strategic requirements. For transient data structures computed from nontransient data elements and later used for updating other nontransient data, we defined structure entities to capture the business rules associating these nontransient data elements.
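The create/read/update/delete (CRUD) analysis mentioned above can be pictured as a matrix relating business processes to nontransient data entities. This is a small invented example, not a fragment of the actual models.

```python
# Sketch of the CRUD analysis linking processes to nontransient data
# entities; process and entity names are invented for illustration.

crud_matrix = {
    # (business process, data entity) -> access modes
    ("Hire Employee",     "EMPLOYEE"): "C",
    ("Calculate Pay",     "EMPLOYEE"): "R",
    ("Calculate Pay",     "PAY-RECORD"): "CRU",
    ("Separate Employee", "EMPLOYEE"): "UD",
}

def processes_touching(entity):
    """Which processes access a given entity, and how."""
    return sorted((p, modes) for (p, e), modes in crud_matrix.items() if e == entity)

print(processes_touching("EMPLOYEE"))
```

Partitioning the data model into views by the entities each process creates or owns is what yields the "conceptual buckets" used in the top-down step.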
Reverse engineering analyses must also carefully examine and analyze a system so that technology insertion and improvement ideas are surfaced. Step 11 in Figure 9 identifies this opportunity. This step should not be overlooked. When the objective for reverse engineering also includes cross-functional integration, steps 12 through 15 in Figure 10 are used for integrating the data models for the selected systems.
As shown in Figure 11, each version of a reverse-engineered data model is associated with an encyclopedia. Our model management approach defined standardized policies and procedures making it feasible for reverse engineering team members to review one another's work and understand information in other project encyclopedia/directory structures. The encyclopedias store information in three separate dictionaries. The plan dictionary contains planning information; the data dictionary contains the logical data model and associated information; and the design dictionary contains physical structures and related information.
To enhance traceability, physical evidence obtained is linked in an Information Resource Catalog (IRC) database. The IRC contains sources of information relevant to each reverse-engineered system, including, for example, systems manuals, source code, directives, and interview results. It provides an electronic index for the information resources gathered during the reverse engineering life cycle. The information resources are physically stored in filing cabinets. The traceability matrix is used to identify and/or trace the correlation of items contained in the various models and to document the satisfaction of business requirements and rules. The data models developed using the IE: Advantage CASE tool may be imported to the IRC. The traceability matrices stored in the encyclopedia are also loaded into the IRC. The IRC also permits users to link physical evidence to the data model through an interface for querying the contents of the IRC, the data modeling status, and the linkage between physical elements, logical entities, and business requirements and rules.
Lessons Learned: Management Issues
Getting formal commitment and authorization from the system stakeholders (especially administrative and technical management) is a crucial factor in the success of reverse engineering projects. One primary problem was associated with lack of access to key system personnel because they were simultaneously required in forward development efforts. In addition, the costs to properly reverse engineer systems and the value of the reverse engineering products are consistently underestimated by management. A common misperception is that we just "run the code through a CASE tool and the tool will produce new systems." It has been challenging to convince management that reverse engineering is a substantially broader and more complex task than just "restructuring the code" [1-3].
Restructuring the code is useful to improve the system maintainability, but it does not facilitate data migration, evolution of a component-based (e.g., Air Force, Army) system to satisfy DoD-wide system requirements, data sharing among functional areas (e.g., health affairs, personnel, pay), and incorporation of new or changed data requirements resulting from business process improvement activities.
The current crop of CASE tools touted as reverse engineering solutions focuses on associating physical data structures and variables with segments of code. This is useful in identifying how physical schemas are created, updated, read, and deleted, but it is not sufficient to construct the conceptual and external views of data requirements. Even if the functional and technical experts help the reverse engineers clean up the analysis products, the products do not include, for instance, the links to physical evidence provided in the traceability matrix and the model management support described here as services offered by the framework. More important, using such tools in isolation may be as damaging as performing inadequate, inaccurate, or incomplete systems or software requirements engineering. Recovering a normalized logical data model together with the associated business rules, policies, and physical data structures is difficult, human-performed analysis even with this systematic reverse engineering framework (including model management and an appropriate modeling approach). CASE tools may augment the analysis to provide an initial understanding of the physical implementation of the system, but CASE tools will not provide "the" solution for recovering the conceptual and logical data requirements.
Currently no single CASE tool is capable of handling this diversified and complex environment. In order to discover and recover data requirements implemented in these systems, major analysis has been performed by humans and supplemented with commercial and custom software in this approach.
Reverse Engineering Products
Throughout the initial phase of this project, we have been continuously asked:
* What reverse engineering products are produced?
* How can these products be used?
* How is reverse engineering related to other system development activities?
* When will the reverse engineering products be ready?
* Why do they need reverse engineering in order to obtain standard data elements?
We constructed a matrix, Table 3, to help answer some of these questions. Logical data models and standard data elements are not the only reverse engineering products. The other outputs of the reverse engineering framework include:
* High-level model view decomposition hierarchies. These may be used to size the system, scope the project, and define the logical data model views.
* Traceability matrix. The matrix, maintained in the IRC (described previously), substantiates the requirements for specific data model components by linking them to specific authorizing statutes, regulations, and policy guidance. The matrix can be used to perform impact analysis when subsequent requirements changes occur.
* Technology insertion recommendations. As long as these well-qualified teams are analyzing legacy IS, it seems entirely appropriate to take note of major areas in which technology insertion recommendations would be useful components of the resulting migration plans. Areas such as advanced database and communications network technologies are typical recommendations.
* Framework, methodology, and tool usage reports. As stated earlier the language, data management (or handling) systems, implementation complexity, and strategic plan for each system are different. The generic framework is followed for reverse engineering all systems, but we identified the detailed approaches for deriving logical code and data models from structured analysis. During each phase we refined and revalidated the reverse engineering framework, methodology, and tool usage for each class of system.
* System and data migration plans. The systems migration plan prescribes the necessary steps to make the existing legacy system compliant with IE concepts based on guidance from existing directives [4, 5]. It is a plan for bridging the gap between the "as-is" legacy IS serving a narrow purpose and the "to-be" integrated departmentwide IS.
* Reusable software requirements. Another byproduct will be reusable software requirements in the form of data models, IRC domain-specific rule sets, and perhaps, eventually, the software constructed to support the requirements.
* DoD enterprise data model, DoD standard data elements, integrated DoD enterprise database. These are all goals of the CIM/DAPMO data standardization effort. All outputs from the reverse engineering projects are integrated with these departmentwide efforts.
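As a concrete illustration of the traceability matrix product listed above, the sketch below shows how linking data model components to authorizing documents supports impact analysis when requirements change. All component and authority names here are invented for illustration; the actual matrix is maintained in the IRC.

```python
# Hypothetical sketch of a traceability matrix (all component and
# authority names are invented for illustration).  Each data model
# component maps to the documents that authorize it.
TRACEABILITY = {
    "employee-pay-grade":   ["Statute-1", "Reg-A"],
    "contract-award-date":  ["Reg-A", "Policy-7"],
    "position-series-code": ["Statute-1"],
}

def impact_of_change(authority):
    """List the model components affected when an authorizing
    statute, regulation, or policy document changes."""
    return sorted(component
                  for component, sources in TRACEABILITY.items()
                  if authority in sources)

print(impact_of_change("Reg-A"))
# -> ['contract-award-date', 'employee-pay-grade']
```

The reverse lookup is the essential operation: when a regulation changes, the matrix answers "which parts of the model must be re-examined" without a full manual review.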
Data Standardization Misconceptions
A dangerous misperception is that data standardization simply means identifying related data elements in each system and making their names consistent. Standardization implemented this way produces unforeseen, disastrous outcomes that are difficult to identify and isolate later. To use the information a data element carries correctly, the business rules, policies, and functional dependencies among elements must be identified and represented in a data model for each system. Before standardization can be achieved, model integration is an essential step for identifying and resolving synonyms and homonyms based on the rules, policies, and functional dependencies represented in the models. Without the integration step, incorrect information continues to propagate throughout the enterprise.
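A minimal sketch, with invented element names, of why name matching alone is insufficient: a homonym check must compare the functional dependencies recorded in each system's data model, not merely the element names.

```python
# Invented example: two systems each carry a COMPLETION_DATE element,
# but the functional dependencies recorded in their data models reveal
# that the two elements mean different things (a homonym).
model_a = {"COMPLETION_DATE": {"determined_by": ("CONTRACT_ID",),
                               "meaning": "date contract work ends"}}
model_b = {"COMPLETION_DATE": {"determined_by": ("PATIENT_ID", "VISIT_ID"),
                               "meaning": "date treatment was completed"}}

def is_homonym(name, a, b):
    """Same name in both models but different functional dependencies:
    merging the elements on name alone would corrupt the integrated model."""
    return (name in a and name in b and
            a[name]["determined_by"] != b[name]["determined_by"])

print(is_homonym("COMPLETION_DATE", model_a, model_b))  # -> True
```

A name-only comparison would declare these two elements "standardized" and silently mix contract dates with treatment dates, which is exactly the propagation of incorrect information described above.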
Lack Of Cost Estimation Approach for Reverse Engineering
Costs of reengineering efforts are difficult to estimate. One vendor we know estimates a flat $1.00 per line of code in the system. We were unable even to estimate the lines of code in the medical portion of the project because of the unstructured nature of the MUMPS programming language. One measure of prowess among MUMPS programmers is how complex a program can be written in a single line of code. Perhaps this accounts for the varying estimates of the number of lines of MUMPS code in CHCS, which range from 1.3 million to 2.5 million depending on whom you ask.
Implications Relevant to the Extension of the System Life Cycle Process Model
Today's global economy imposes new requirements on legacy systems. Even though existing systems do not meet current needs, failure statistics for new software systems development indicate that new systems may not meet user requirements either. Integration, modernization, restructuring, and/or augmentation of the existing information infrastructure are the current trends in solving enterprise information integration problems. Regardless of the type of development life cycle (e.g., waterfall, spiral, JAD, RAD), the reverse engineering data requirements framework can contribute significantly throughout the life cycle.
Figure 12 depicts how the framework contributes to the traditional ("waterfall") IS development life cycle. To achieve enterprise information integration, a set of activities (such as process engineering, data modeling, reverse engineering) must be in concert with the IS design and development phases.
Finally, we believe that current reverse engineering CASE tools, and current CASE tool development efforts, concentrate on code analysis. While it is true that most organizations have a more homogeneous IS base than the DoD, it is also true that much of the rest of the federal government has configurations similar to the DoD's and will likewise be unable to prescribe a single CASE tool solution for reverse engineering. A further criticism of reverse engineering CASE tools is that their focus on code analysis provides little or no assistance for activities such as extracting business rules. We anticipate that our work will define a rudimentary set of requirements for reverse engineering CASE tools operating in a heterogeneous environment.
The project team from Hughes Information Technology Company played a key role in developing and implementing the reverse engineering framework. The authors thank the following for their dedication: C. Haley, D. Heller, D. Hobson, K. Kawakawi, K. Janes, S. Lem, B. Nguyen, T. Pace, C. Ramiller, C. Szymanski, C. Tholen. In addition, none of these reverse engineering projects could have achieved the real, tangible results they have without the cooperation of the respective DoD functional and technical communities and in particular: Civilian Personnel, Civilian Pay, Health Affairs, and Procurement. We acknowledge their participation in making Phase I of the reverse engineering program a success. Finally, this article also benefited from comments and suggestions made by the Defense Finance and Accounting Data Administration Group.
[1.] Aiken, P. Data Reverse Engineering. McGraw-Hill, New York. To be published.
[2.] Basili, V.R. and Mills, H.D. Understanding and documenting programs. IEEE Trans. Softw. Eng. SE-8, 3 (May 1982), 270-283.
[3.] Chikofski, E. and Cross II, J.H. Reverse engineering and design recovery: A taxonomy. IEEE Softw. 7, 1 (Jan. 1990), 13-17.
[4.] DoD Directive 8020.1-M Functional Process Improvement for Implementing the Information Management Program of the Department of Defense, Aug. 1992 (Draft).
[5.] DoD Directive 8320.1-M Data Standardization, Aug. 1992 (Draft).
[6.] Fairley, R.E. Software Engineering Concepts. McGraw-Hill, New York, 1985.
[7.] Finkelstein, C. Information Engineering: Strategic Systems Development. Addison-Wesley, Reading, Mass., 1993.
[8.] Gause, D.C. and Weinberg, G.M. Exploring Requirements: Quality Before Design. Dorset House, New York, 1989.
[9.] Nolan, R. Managing the crisis in data processing. Harvard Bus. Rev., (1979).
[10.] Sharon, D. A reverse engineering and reengineering tool classification scheme. Reverse Engineering Newsletter 4 (Jan. 1993).
[11.] Staiti, C. and Pinella, P. DoD's Strassmann: The politics of downsizing. Datamation (Oct. 15, 1992), 107-110.
Title Annotation: Excerpt from a paper presented at the May 1993 ACM/IEEE Computer Society Working Conference on Reverse Engineering.
Authors: Aiken, Peter; Muntz, Alice; Richards, Russ
Publication: Communications of the ACM
Date: May 1, 1994