
Optimizing iron quality through artificial intelligence.

Through 'inductive learning' of cooling curves, foundries can make alloy behavior predictions and further extend their quality assurance efforts.

While various alloying elements can alter the properties of cast iron, those properties are determined primarily by the crystallization of dissolved carbon into graphite. The alloys are complex, however, and several of the mechanisms behind austenite growth and precipitation of graphite are only partially understood. The manufacturing process, therefore, isn't fully predictable.

The practical foundryman experiences this daily in the form of casting defects and low yields. As described in this article, however, a new method can predict the behavior of an alloy and optimize the casting process.

The Problem

Graphite is the essential component in cast iron alloys such as gray iron, compacted graphite iron and ductile iron.

The gradual change of the graphite shape from flakes in gray iron to a wormlike shape in compacted graphite iron reduces the notch effect inside the iron. This results in increased strength and elongation, but also a reduction in thermal conductivity.

In ductile iron, where the carbon (C) is precipitated as spheres, the effect is more dramatic. Controlling the graphite shape is essential, as it influences not only physical properties but also casting properties and the risk for defects such as shrinkage, chill, etc.

Traditional chemical analysis has been found to be insufficient as a means of process control. Chemistry tells us only which elements are present in the alloy and in what quantity. Most foundries have efficient control over chemistry yet still experience variations in their process - metallurgically induced scrap is often around 30% of total scrap. The variations that many foundries have taken for granted not only mean unnecessary scrap but also force the use of high safety margins in gating and risering systems, resulting in low yield.

Another consequence is variation in physical properties and a risk of "hidden" defects that might discourage engineers and product developers from using cast iron as a construction material. The reason for the variations is that the mechanisms behind solidification are only partially understood and that chemistry doesn't provide enough information to analyze and predict the process. Without the ability to measure essential variables and obtain data to analyze, it's impossible to understand what happens - let alone to control it.

For controlling gray iron, an established method is to make a wedge test and measure the chill depth. This is an informative test that goes beyond chemistry. However, it tells only part of the story: the tendency for chill, or in other words, the difference between the lowest eutectic temperature and the "white" eutectic temperature.

Traditional Thermal Analysis

Another method that captures what happens during solidification is thermal analysis. By casting a standard sample and recording temperature vs. time, information about solidification is gained from the specific and latent heat released as the sample cools. When the liquidus temperature is reached and austenite is precipitated, heat of fusion starts evolving and increases further when the eutectic temperature is reached. The evolution of heat can be detected on the cooling curve as a change in its downhill slope. Thus, the cooling curve is a good source of information, provided that the information can be interpreted and understood.
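As a rough illustration of how an arrest shows up, the sketch below builds a synthetic cooling curve, takes its first derivative, and locates the point where the slope comes closest to zero. All numbers (sampling rate, cooling rates, the arrest window) are invented for illustration, not measured values:

```python
# Sketch: detecting a thermal arrest on a synthetic cooling curve.
# Assumed: 1 sample/sec; cooling nearly stops between t=30 and t=60 s,
# mimicking latent-heat release at an arrest.

def first_derivative(temps, dt=1.0):
    """Central-difference first derivative of the cooling curve (C/s)."""
    return [(temps[i + 1] - temps[i - 1]) / (2 * dt)
            for i in range(1, len(temps) - 1)]

# Build the synthetic curve: steady cooling, a near-flat arrest, more cooling.
temps = []
T = 1250.0
for t in range(120):
    rate = -0.05 if 30 <= t < 60 else -2.0   # arrest: cooling almost stops
    T += rate
    temps.append(T)

deriv = first_derivative(temps)
# The arrest shows up where the derivative is closest to zero.
arrest_index = max(range(len(deriv)), key=lambda i: deriv[i]) + 1
print(arrest_index)  # falls inside the 30-60 s arrest window
```

On real data the derivative is noisy and is usually smoothed first; the principle of reading arrests from the derivative is the same.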

The most common use of thermal analysis, however, hasn't fully exploited the information in cooling curves. Thermal analysis is normally used to determine carbon equivalent (CE), C and silicon (Si), or in other words, to act as a replacement for chemistry. To obtain stable readings, especially of solidus, tellurium is added to the sample so that the iron is forced to solidify according to the metastable system, in which C is precipitated as cementite. Thus, the essential information, namely about the precipitation of C into graphite, isn't available and only the liquidus and solidus temperatures are recorded.

Typical formulas created by correlating chemical analysis with liquidus and solidus are:

CE = 14.05 - 0.0089 * liquidus

C = -6.51 - 0.0084 * liquidus + 0.0175 * solidus

Si = 78.411 - 4.28 * P - 0.0683 * solidus

One should, however, be aware that variation in dissolved oxygen, which can't be measured by chemical analysis, influences the C activity. The effect can be differences in the liquidus temperature of up to 18F (10C) for the same chemistry. In reality, the important parameter is the actual liquidus temperature, not the chemistry. Behavior determines the properties, not the composition.
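The correlation formulas above can be put directly into code. Two assumptions in this sketch: the temperatures are taken to be in degrees Celsius (the coefficients only yield plausible iron values then), and the example inputs (liquidus 1200C, solidus 1110C, 0.05% P) are illustrative, not from the article:

```python
# Sketch: the correlation formulas for CE, C and Si, temperatures in C.

def ce_from_liquidus(tl_c):
    """Carbon equivalent from the liquidus arrest temperature."""
    return 14.05 - 0.0089 * tl_c

def carbon(tl_c, ts_c):
    """Carbon content from liquidus and solidus temperatures."""
    return -6.51 - 0.0084 * tl_c + 0.0175 * ts_c

def silicon(p_pct, ts_c):
    """Silicon content from phosphorus content and solidus temperature."""
    return 78.411 - 4.28 * p_pct - 0.0683 * ts_c

# Illustrative hypoeutectic gray iron: liquidus 1200C, solidus 1110C, 0.05% P
print(round(ce_from_liquidus(1200), 2))   # a CE around 3.4
print(round(carbon(1200, 1110), 2))       # a C content around 2.8
print(round(silicon(0.05, 1110), 2))      # an Si content around 2.4
```

A quick sanity check like this (do the formulas give plausible percentages for typical arrest temperatures?) is also a useful guard against transcription errors in the coefficients.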

Several researchers have used a "top-down" approach and statistical methods in analyzing cooling curves. One approach is the calorimetric method as described by Wlodaver. The cooling curve is transformed into its first derivative, where changes in the cooling curve are more easily detected. The first part of the curve, when the sample is still liquid, is used to calculate a formula for a "zero-transformation-curve" (a hypothetical curve if no latent heat of fusion were released).

By comparing the actual cooling curve with the "zero curve," it is possible to get a quantitative measure of the amount of the precipitated phases. The specific heat of iron at temperatures around 2192F (1200C) is about 0.84 J/g C. Heat of fusion is about 193 J/g for austenite and about 3658 J/g for graphite. As can be seen, the method is sensitive to variations in precipitation, especially of graphite.
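The calorimetric idea can be sketched in a few lines: the latent heat released per gram is the specific heat times the integrated gap between the actual first derivative and the zero-curve derivative. The derivative values below are illustrative constants, not real measurements; cp is the 0.84 J/g C figure quoted above:

```python
# Sketch: quantifying latent heat from the gap between the actual cooling
# rate and the "zero-transformation-curve" cooling rate.

CP = 0.84          # specific heat of iron near 1200C, J/(g C), from the text
DT = 1.0           # sampling interval, s

# Illustrative first derivatives over 100 s of solidification (C/s):
zero_curve = [-2.0] * 100   # hypothetical cooling if no latent heat evolved
actual = [-0.2] * 100       # latent-heat release slows the real cooling

# Latent heat per gram = cp * integral of (actual - zero) dt
latent = sum(CP * (a - z) * DT for a, z in zip(actual, zero_curve))
print(latent)  # J/g released during this window
```

With the heats of fusion quoted above (about 193 J/g for austenite, about 3658 J/g for graphite), such an integrated figure can be apportioned into estimates of how much of each phase precipitated, which is exactly why the method is so sensitive to graphite variations.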

One practical problem is obtaining a correct formula for the "zero-transformation-curve." It is influenced by the pouring temperature and the number of observations before the liquidus arrest temperature occurs. It is also influenced by the nucleation status of the melt, as some solidification already occurs at the walls of the test cup before the liquidus arrest is visible. Therefore, unreliable "zero-transformation-curves" might be a problem. The main problem, however, is interpreting the information and making predictions about the behavior of the alloy.

A New Method

To verify and optimize melting and treatment processes, especially for gray and ductile iron, a system was developed based on thermal analysis combined with artificial intelligence methods. Known as ATAS (adaptive thermal analysis software), its purpose is to analyze samples solidifying according to the stable system and to predict the risk for various casting defects, as well as to estimate physical properties. The hardware consists of an industrial computer, an A/D converter and a twin stand for test samples.

In the development, research planning methods were used to cover the search space (all possible combinations) with as few tests as possible. Several multivariate tests were made in which chemistry, charge sequence, and the time and temperature in the melting furnace were varied. Real castings were poured at the same time as the test cups (modulus 0.75 cm, the same as normal test bars) and the results were recorded.

Liquidus Arrest Temperature: It Isn't Liquidus

During development, the complexity of interpreting cooling curves was obvious. The first arrest temperature on the cooling curve for a hypoeutectic composition is normally referred to as liquidus. However, a solidification simulation of the test cup revealed that when the arrest temperature was reached, some of the metal had already solidified at the walls of the cup and the metal in the middle around the thermocouple was still fully liquid.

The explanation is that what is observed on the cooling curve is the balance between heat losses from the cup and heat released from the sample. Thus, the first arrest temperature reveals that at this point, the total heat released per time unit (specific heat plus latent heat) is equal to the heat losses through radiation from the top surface and through conduction and convection from the walls of the test cup. The same reasoning can be applied to the other arrest points in the cooling curve, the low and high eutectic temperature. This means that the interpretation of a cooling curve isn't as straightforward as one might think.
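The heat-balance reasoning above can be illustrated with a tiny lumped model of the test cup: the temperature is flat exactly when latent-heat release per unit time equals the heat losses. Every number here (loss rate, window, initial temperature) is an illustrative assumption, not a calibrated model:

```python
# Sketch: a lumped heat balance per gram of sample. An "arrest" appears
# whenever latent-heat release matches the heat losses.
# dT/dt = (q_latent - q_loss) / cp; all rates are illustrative.

CP = 0.84        # J/(g C), specific heat from the text
Q_LOSS = 1.7     # assumed constant heat-loss rate, J/(g s)

def simulate(latent_rate_during_freeze, steps=200, t0=1250.0):
    """Euler-step the cooling curve; freezing assumed between steps 50-120."""
    temps, T = [], t0
    for t in range(steps):
        q_latent = latent_rate_during_freeze if 50 <= t < 120 else 0.0
        T += (q_latent - Q_LOSS) / CP
        temps.append(T)
    return temps

curve = simulate(1.7)   # latent release exactly balances the losses
# During the freezing window the temperature is flat: a horizontal arrest.
print(curve[60] - curve[110])
```

If the latent rate passed to simulate() is below Q_LOSS, the "arrest" is merely a change of slope rather than a plateau, which is why reading arrest points is a statement about the heat balance, not directly about phase temperatures.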

Artificial Intelligence and Rule Induction

Rule induction is the artificial intelligence method used for knowledge acquisition in interpreting the curves. A database of cooling curve examples and their associated results is used. The automated rule induction process uses information theory to produce general statements, or rules, from the examples.

The derived rules are expressed in a symbolic description, semantically and structurally similar to those a human expert might produce after observing the same examples. The aim of induction is to discover a set of rules that reveals relations between the variables consisting of cooling curve attributes. The rules are presented as graphical decision trees. The induction method allowed a "bottom-up" approach to be used to develop rules capable of interpreting the information from the cooling curves and making predictions. A partial decision tree created by the rule induction software appears in Fig. 1, in this case for predicting the risk of microshrinkage in ductile iron.

The risks are classified as high, some and none. A total of 96 samples were evaluated and entered as examples into the database. A rule induction tool known as the "analyzer" created the rules.

Of the 10 attributes (variables) available, the induced tree uses only four: GRF_TWO (graphite factor 2), liquidus, GRF_ONE (graphite factor 1) and FD_TS (first derivative at solidus). Seven rules were produced. In order to avoid shrinkage, rule number 1 should be used. Note that the rule induction system automatically selected GRF_ONE, GRF_TWO and FD_TS as major attributes. All of these attributes are related to eutectic graphite and graphite shape, which are important factors for controlling the microshrinkage that occurs at the late stage of solidification.
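An induced decision tree of this kind boils down to nested threshold tests. The sketch below shows the shape of such a rule set using the attribute names from the article; every threshold and branch is hypothetical, not the actual tree of Fig. 1:

```python
# Sketch: the kind of rule set rule induction yields. Attribute names
# follow the article (GRF_TWO, liquidus, GRF_ONE, FD_TS); all thresholds
# below are invented for illustration.

def shrinkage_risk(grf_two, liquidus, grf_one, fd_ts):
    """Classify microshrinkage risk as 'high', 'some' or 'none'."""
    if grf_two > 40:                  # hypothetical: poor graphite shape
        return "high"
    if fd_ts < -3.5:                  # hypothetical: steep solidus derivative
        return "some" if grf_one > 60 else "high"
    if liquidus > 1210:               # hypothetical: strongly hypoeutectic
        return "some"
    return "none"

print(shrinkage_risk(grf_two=25, liquidus=1180, grf_one=80, fd_ts=-2.0))
```

The appeal of this representation is exactly what the article describes: each path from root to leaf is a human-readable rule that a metallurgist can inspect, challenge and refine.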

The rule induction method is powerful for research and development and has been used to create the knowledge base for the rule-based expert system. The knowledge base can predict the probability of these types of scrap:

* macroshrinkage;

* microshrinkage and porosity;

* chill and inverse chill;

* slag and gasblow defects.

The system can also predict nodule count in ductile iron. Provided that the sample is allowed to cool below the eutectoid transformation point [below 1292F (700C)], the system can also predict pearlite content and Brinell hardness.

Traditional multiple regression analysis was also tried but gave high standard deviations and low correlations, since several of the relations in the data were valid only under certain conditions and the data contained interrelations between the variables that weren't known in advance. The problem is too complex to be described with one model or equation. Rule induction has the advantage of separating populations of data and constructing a set of rules for each such population.

Adaptive Learning

A cooling curve can be considered an alloy's fingerprint. If a cooling curve identical to one from a previous occasion appears, the alloy will behave the same way it did then. From that perspective, cooling curve analysis can be looked upon as a pattern recognition task. However, the behavior might differ from foundry to foundry depending on the types of materials and methods used. Therefore, some mechanism is needed that allows the system to learn and adapt.

Two possibilities are available in this new method. One is to adjust the maximum and minimum limits for the thermal parameters for the different alloys. These limits are stored in a database for alloys and are used in the condition part of the rule-based expert system. By gradually adjusting the limits, the system will improve its ability to recognize conditions that identify risks for casting defects.

The other method is called case-based reasoning. Known cases or examples of cooling curves with a known outcome are stored in a database. When a new test is made, the cooling curve is compared with the stored cases in the database. A similarity index is calculated for each case as well as for the different outcomes. The case that shows the highest similarity with the current cooling curve is selected and its associated outcome is presented as the likely outcome or prediction. This method makes it easy for a foundry to gradually accumulate its experience and to use it online.
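A minimal case-based lookup can be sketched as follows. The similarity index here is one minus the mean normalized absolute difference over a few thermal parameters; the real system's index, its parameter set and all the stored cases below are assumptions for illustration:

```python
# Sketch: case-based reasoning over stored cooling-curve cases.
# RANGES gives an assumed normalization scale per attribute (degrees C).

RANGES = {"TL": 50.0, "TElow": 30.0, "recalescence": 5.0}

# Hypothetical stored cases: (thermal parameters, known outcome).
CASES = [
    ({"TL": 1195, "TElow": 1148, "recalescence": 3.0}, "OK"),
    ({"TL": 1215, "TElow": 1138, "recalescence": 6.5}, "microshrinkage"),
]

def similarity(a, b):
    """1.0 for identical parameter sets, lower as they diverge."""
    diffs = [abs(a[k] - b[k]) / RANGES[k] for k in RANGES]
    return 1.0 - sum(diffs) / len(diffs)

def predict(sample):
    """Return (best similarity, outcome of the most similar stored case)."""
    return max((similarity(sample, case), outcome) for case, outcome in CASES)

sim, outcome = predict({"TL": 1196, "TElow": 1147, "recalescence": 3.2})
print(outcome)  # the nearest stored case is the "OK" one
```

Adding a new verified test to CASES is all the "learning" this scheme needs, which is what makes it practical for a foundry to accumulate its own experience online.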

The result of a typical analysis and prediction is presented in Fig. 2.

If the sample meets all the criteria, the "OK" message appears. The message from the rule-based expert system is displayed in the mid-part of the screen. At the lower part is the result of the case-based system. Here, the actual test showed a similarity of 100% with case 9. Therefore, the foundry can expect the same outcome from this alloy. As seen from this example, the system works like a pattern recognition system.

Optimal Cooling Curves

The optimal cooling curve for an alloy depends on the casting (due to its configuration) and various types of mold materials (due to mold stability, heat transfer, etc.). Therefore, the optimal values for an alloy must be correlated to the practice and requirements in each foundry. This allows the foundry to gradually fine-tune the limits in the alloy database and to use the case-based learning method to recognize both optimum conditions, as well as the situations when casting defects can occur. However, some general guidelines can be stated. As an example, an optimal cooling curve for unalloyed gray iron is shown in Fig. 3.

The upper diagram shows the basic cooling curve. The lower diagram displays the first derivative of the curve. The horizontal line represents the balance between the released heat and the heat losses; a point on that line is thus equal to zero solidification rate (0C/sec). A point above that line indicates that the released heat is higher than the heat losses, and vice versa for points below the line.

For hypoeutectic alloys, liquidus (TL) should have a well-defined plateau indicating dendritic growth. Start of eutectic freezing (TES) shouldn't be too deep and should occur between TL and the low eutectic temperature (TElow). TElow should be at least 27F (15C) above the metastable eutectic temperature (TEwhite). The maximum recalescence rate (TEM) shouldn't be too high, and the recalescence should be between 4-9F (2-5C). Graphite factor 2 (GF2) represents graphite shape and heat transfer and should be maximized, up to a value of 25. The depth of the first derivative at solidus (FDTS) should be less than -3.
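These guidelines translate naturally into a limit checker like the one a foundry's alloy database would drive. The sketch below encodes the numeric limits stated above (in degrees C for temperatures and differences); the sample values, and the interpretation of GF2 and FDTS as simple thresholds, are assumptions:

```python
# Sketch: checking a gray-iron cooling curve against the guideline limits.
# In practice the limits would come from a per-alloy database, not constants.

def check_gray_iron(params):
    """Return a list of violated guidelines (an empty list means 'OK')."""
    problems = []
    if params["TElow"] - params["TEwhite"] < 15:     # at least 15C above
        problems.append("TElow too close to TEwhite: chill risk")
    if not (2 <= params["recalescence"] <= 5):       # 2-5C window
        problems.append("recalescence outside the 2-5C window")
    if params["GF2"] > 25:                           # assumed upper bound
        problems.append("GF2 above 25")
    if params["FDTS"] >= -3:                         # text: less than -3
        problems.append("FDTS not below -3")
    return problems

sample = {"TElow": 1150, "TEwhite": 1120, "recalescence": 3.5,
          "GF2": 22, "FDTS": -3.4}
print(check_gray_iron(sample))  # no violations for this illustrative sample
```

Gradually tightening or loosening these constants per alloy is exactly the limit-adjustment learning mechanism described earlier.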

Optimizing Melting/Treatment

The system measures about 20 different parameters relevant to the behavior of the cooling curve. The information from a cooling curve can be used to optimize various steps in the process. A zero-defects approach should be used, meaning that one should try to find the optimal method in every step of the process and thereby reduce variations. By testing various charge materials, you can find the combination that gives the best results on the cooling curve. Charge sequence, as well as time at temperature during melting, can be optimized. Excessively long holding times above the boiling temperature reduce oxygen to low levels that can be fatal for nucleation. Additions of ferrosilicon vs. silicon carbide and other materials can also be tested.

Type and amount of inoculant can be optimized, for example, by studying the effect on the recalescence and the high eutectic temperature. Once the methods have been defined, they should be added to the quality control procedures. The effect is much less variation in properties, which manifests itself in less scrap due to metallurgical reasons and a possibility to reduce safety margins in gating and risering.

An Alloy Profile

Chemical analysis is limited because it doesn't reflect what is essential - namely the solidification behavior of the alloy.

In a test, cooling curves were taken from a gray iron foundry with tight chemical control. The samples were taken at random intervals and analyzed by spectrometer and by thermal analysis. Chemistry was perfect during the day with extremely small variations. However, the thermal data showed large variations. And so did the casting process, where unexpected scrap occurred intermittently. Thus, chemistry alone isn't enough to verify an alloy.

A successful way of verifying an alloy is to make a test using artificial intelligence and compare the results with specified limits. This can be visualized as an alloy profile diagram as shown in Fig. 4.

The limits in the diagram are collected from the alloy database, which is regularly updated. Using an alloy profile to specify an alloy for a casting provides enhanced reliability. In the future, it is likely that engineers and casting buyers will specify both chemistry and essential thermal properties.
COPYRIGHT 1996 American Foundry Society, Inc.

Author: Rudolf V. Sillen
Publication: Modern Casting
Date: Nov 1, 1996