Physics and Engineering.
Vice-chair: T.M. Parchure, US Army Engineers
8:30 Divisional Poster Session
Chair: Atef Z. Elsherbeni, The University of Mississippi, University, MS 38677
APPLICATION OF THE FAST FOURIER TRANSFORM FOR PERFORMANCE CHARACTERIZATION OF AN IF-DIGITAL CONVERTER
Andy Harrison, Raytheon Electronic Systems, Forest, MS 39074
In this work, the Fast Fourier Transform (FFT) is used to characterize the performance of a TPQ47 radar IF-Digital Converter (IFDC). The IF-to-digital converter consists of two channels, each of which provides down-conversion and filtering of the IF signal, attenuation control for gain leveling and radar gain control, and 12-bit A/D conversion. Features of the FFT-based spectrum include harmonic content, spurious content, and noise floor level. These combined effects are reflected in the IFDC's RMS signal-to-noise ratio, which can be derived from the FFT magnitude spectrum. Ideally, the frequency spectrum of the output signal would be a single line that represents a pure sine wave input and is free from distortion generated by the circuitry of the IFDC. Since the FFT assumes that the record repeats with a certain period, sharp discontinuities at the points where the start of one record joins the end of the preceding record cause the spectral components to be spread, or smeared. This smearing, called leakage, can be reduced by multiplying the data in the record by a windowing function that weights the points in the center of the record heavily while smoothly suppressing the points near the ends. Many different windowing functions were studied that offer various tradeoffs of amplitude resolution versus frequency resolution.
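The leakage mechanism and the windowing remedy described above can be illustrated with a short sketch (in Python/NumPy rather than the instrumentation environment used in the work; the record length and tone frequency are arbitrary choices, and the non-integer number of cycles per record is deliberate so that leakage occurs):

```python
import numpy as np

def spectrum_db(x, window=None):
    """Normalized magnitude spectrum in dB; an optional window function
    (e.g. np.hanning) is applied to the record to reduce leakage."""
    n = len(x)
    w = np.ones(n) if window is None else window(n)
    mag = np.abs(np.fft.rfft(x * w))
    mag = mag / mag.max()
    return 20 * np.log10(mag + 1e-12)  # small floor avoids log(0)

n = 1024
t = np.arange(n)
# A non-integer number of cycles per record means the record does not
# repeat smoothly, so the FFT's periodicity assumption causes leakage.
x = np.sin(2 * np.pi * 10.37 * t / n)

rect = spectrum_db(x)              # rectangular (no) window
hann = spectrum_db(x, np.hanning)  # Hann window suppresses the record edges
```

Inspecting the two spectra far from the tone shows the windowed record's leakage skirt falling well below the rectangular one, at the cost of a slightly wider main lobe, which is exactly the amplitude-versus-frequency-resolution tradeoff mentioned above.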
COPLANAR WAVEGUIDE FED BOW-TIE APERTURE ANTENNA
Guiping Zheng*, Atef Z. Elsherbeni, and Charles E. Smith, The University of Mississippi, University, MS 38677
The coplanar waveguide (CPW) fed aperture antenna consists of an aperture with a bow-tie shape that is etched from a ground plane, and a CPW feed line that protrudes through the aperture. This antenna exhibits characteristics similar to a dipole antenna. A prototype of the CPW aperture antenna is designed for a center frequency around 10 GHz, and the computed input impedance is approximately 50 Ω, close to the characteristic impedance of the feed transmission line. A finite difference time domain (FDTD) simulation is performed using a developed MATLAB program, and verification is obtained using the Ansoft High Frequency Structure Simulation (HFSS) program. With the approximately matched CPW aperture antenna, the return loss at the design frequency is below -20 dB. The gain and the radiation efficiency are also improved by taking advantage of this load matching property. The FDTD simulation results demonstrate that the CPW aperture antenna behaves more like a dipole antenna than a microstrip patch antenna, especially when the width of the aperture is comparable to a half-wavelength dipole length. The characteristics of this bow-tie aperture antenna, which include small return losses, appropriate resonant frequencies, matched loading, and moderate gain, indicate that it has many features required to build a phased antenna array in a variety of practical applications.
CHARACTERISTICS OF COPLANAR BOW-TIE PATCH ANTENNA
P.L. Chin*, Atef Z. Elsherbeni, and Charles E. Smith, The University of Mississippi, University, MS 38677
This paper introduces a new concept of a coplanar patch antenna (CPA), which consists of a bow-tie shaped patch surrounded by a closely spaced ground conductor and a coplanar waveguide (CPW) feed line. Solid and wire bow-tie antenna configurations have been used in many applications over the years because of their broadband characteristics. In this case, the characteristics of the small aperture between the bow-tie patch and the surrounding ground plane are similar to those of the dual of a wire-type bow-tie antenna, yet the structure possesses some features characteristic of a patch antenna. To study the structure, the finite difference time domain (FDTD) technique is employed to design and simulate this type of CPA and bow-tie patch for operation at 10 GHz with 50 Ω input impedance. Numerical results for return loss, radiation pattern, and gain are presented and validated using the Ansoft High Frequency Structure Simulation (HFSS) program. Although the bow-tie patch antenna exhibits a somewhat narrow bandwidth for small aperture widths as compared to the solid bow-tie antenna, more broadband operation can be obtained with wider aperture widths. In addition, the characteristics of this configuration can be further extended through loading, owing to the ease of adding impedance-type loads between the aperture/patch and the ground plane. This type of antenna is proposed for use as an element of a phased array antenna specifically designed for transmit/receive (TR) modules for radar systems.
COPLANAR PATCH ANTENNAS WITH ENHANCED BANDWIDTH
Brad N. Baker*, Atef Z. Elsherbeni, and Charles E. Smith, University of Mississippi, University, MS 38677
Since the inception of the idea of coplanar patch antennas, their popularity has been growing due to their ease of construction as well as their simple design. One potential problem with coplanar antenna geometries, however, is their relatively limited bandwidth. The coplanar patch antenna, like the microstrip patch antenna from which it was derived, suffers from a narrow bandwidth of approximately 3 percent. In this paper, the finite difference time domain (FDTD) method is used to parametrically study a coplanar patch antenna operating at 10 GHz. The effect of the coplanar slot width on the resonant frequency of the antenna is investigated. Two techniques to broaden the functional bandwidth of the 10 GHz coplanar patch antenna are then investigated. Slots are added to the coplanar patch antenna with the aim of combining the resonances of the slots and the main patch to broaden the bandwidth. Parasitic patches are also used experimentally to broaden the bandwidth and to provide a better match to the feeding network. The results of these experiments are analyzed to suggest optimum configurations for wideband coplanar patch antennas.
FINITE ELEMENT ANALYSIS OF THE PRESSURE BEHAVIOR DURING THE PULTRUSION OF COMPOSITES
Tabious Hayes* and Tyrus McCarty, University of Mississippi, University, MS 38677
A common problem associated with the manufacture of composites is the formation of voids in the final product, which adversely affect its strength. A high pressure rise in the die inlet region can eliminate these voids, leading to a better quality final product. The purpose of this research is to determine the effect that various process control parameters have on the pressure rise during the pultrusion process. A numerical approach, the finite element method, is employed in this study. Finite element analysis is used to investigate the effects of pull speed, fiber diameter, and fiber volume fraction on the pressure rise in the pultrusion die region.
COMPUTER SIMULATION OF EARTH/SATELLITE(S) FOR REMOTE SENSING APPLICATIONS
Edward Woo*, Atef Z. Elsherbeni, and Charles E. Smith, The University of Mississippi, University, MS 38677
A software package for the simulation of earth/satellite relative positions and land coverage for remote sensing applications is developed. This package provides a visualization tool to help improve the analysis and design of radar systems and the techniques for collecting data for synthetic aperture radar (SAR) systems. The simulation also allows users to gain a better understanding of radar technology, global positioning systems, and basic remote sensing principles. Users can input orbital parameters (shape and position) and satellite parameters (number, position, and speed). The distance between the satellites (while moving around the earth) and the earth spot coverage area are computed and displayed to the user in a window with a 3-dimensional dynamic view of the earth/satellite movements. A better understanding of the earth/satellite dynamic relationship will assist in solving many of the technical problems of today's satellite global coverage systems.
Divisional Talks Begin; Engineering Session Chair: Ahmed A. Kishk, University of Mississippi, University, MS 38677
10:00 INVESTIGATION OF THE RF PERFORMANCE OF A HYBRID ACTIVE ARRAY ANTENNA SUBARRAY MODULE
Andy Harrison* and Rick Rollenhagen, Raytheon Electronic Systems, Forest, MS 39074
In this work, the RF performance of a TPQ47 radar hybrid active array antenna subarray module (SAM) is analyzed. Transitions and discontinuities in the RF path were investigated in both the time and frequency domains. The SAM provides RF transmit and receive signal paths from the feed network to free space, transmit and receive beam steering control, receive signal amplification and element gain leveling, as well as array temperature reporting. The SAM consists of two microwave integrated circuits (MICs), two digitally controlled ferrite phase shifters, three microcontrollers, one RF circuit board, and associated connectors. All electrical components are mounted on one side of the printed wiring board (PWB), and the RF trace is attached to the opposite side. The RF trace is comprised of six dipole elements. RF transmit and receive functions operate on two three-element in-line arrays (3-packs). In transmit, the SAM receives a single RF input that is split to drive both 3-packs. A ferrite phase shifter provides phase control of the transmit signal for each path. In receive, the SAM provides two RF outputs, each fed by a 3-pack. Each receive path is driven by a MIC that provides amplitude and phase control.
10:20 BROADBAND SPATIAL POWER COMBINERS: FULL-WAVE ANALYSIS AND MODELING TECHNIQUES
Milan V. Lukich*, Alexander B. Yakovlev, Atef Z. Elsherbeni, and Charles E. Smith, University of Mississippi, University, MS 38677
Spatial power combiners are used for power amplification at microwave and millimeter-wave frequencies from an array of solid-state devices. In contrast to traditional power combining techniques, which utilize waveguide and transmission line junctions, free-space spatial power combining makes it possible to achieve increased power output levels and power combining efficiencies. In this paper we present a waveguide-based spatial power combining system for operation at millimeter-wave frequencies. The system consists of several interacting antenna arrays placed at the dielectric interfaces of an oversized multilayered waveguide. Uniform amplitude and phase excitation of the antennas is provided by a hard horn with dielectric sidewall loading. The signal collected by the antenna arrays is coupled to the amplifier array through a ground plane with slots, and the amplified signal is then reradiated into free space through the slots of another ground plane. A generalized scattering matrix approach is adopted to model the entire amplifier system by decomposing it into smaller modules and cascading the modules using these matrices. A method of moments integral equation formulation is presented for the full-wave analysis of the multilayered waveguide with embedded antennas. In order to increase the frequency band and efficiency of the system and provide operation in multiple band regimes, resonant U-slot patch, microstrip loop, tapered meander line, and triangular slot antennas, and their modifications, are used. Numerical results for several representative antennas are given to illustrate the advantages of their utilization in a power combining system.
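The scattering-matrix cascading idea can be sketched for the simplest two-port case. The conversions below are the textbook S-to-T (transfer matrix) relations, not the authors' actual generalized multimode implementation, which carries many modes per port; the example networks are arbitrary illustrations.

```python
import numpy as np

def s_to_t(S):
    """Convert a 2-port S-matrix to a transfer (T) matrix for cascading."""
    S11, S12, S21, S22 = S[0, 0], S[0, 1], S[1, 0], S[1, 1]
    return np.array([[1 / S21, -S22 / S21],
                     [S11 / S21, (S12 * S21 - S11 * S22) / S21]])

def t_to_s(T):
    """Inverse conversion, back to the S-parameters of the cascade."""
    T11, T12, T21, T22 = T[0, 0], T[0, 1], T[1, 0], T[1, 1]
    return np.array([[T21 / T11, T22 - T21 * T12 / T11],
                     [1 / T11, -T12 / T11]])

def cascade(*s_matrices):
    """Cascade 2-port modules: multiply their T-matrices, convert back to S."""
    T = np.eye(2, dtype=complex)
    for S in s_matrices:
        T = T @ s_to_t(np.asarray(S, dtype=complex))
    return t_to_s(T)

line = np.array([[0, 1], [1, 0]], dtype=complex)           # matched thru section
atten = np.array([[0.1, 0.5], [0.5, 0.1]], dtype=complex)  # illustrative 2-port
S_total = cascade(line, atten)
```

Cascading a matched thru with any module returns that module's S-matrix unchanged, which is a quick sanity check on the conversions; the generalized version used in the paper does the same bookkeeping with block matrices over waveguide modes.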
10:40 IMPLEMENTATION OF THE ASYMPTOTIC BOUNDARY CONDITION IN THE FDTD METHOD
Andrew Simon* and Ahmed A. Kishk, University of Mississippi, University, MS 38677
Geometry description in the FDTD method is a somewhat tedious task, especially when the geometry contains fine details. If the FDTD code is based on the use of cubic cells, this adds further constraints on the cell size, and we may be forced to use an excessive number of cells to simulate the required geometry. This increases both the memory requirements and the processing time of the problem. For example, if a surface loaded with conducting strips needs to be modeled in an FDTD code, its description will be tedious, as each strip has to be described individually. As an alternative, we propose the use of the asymptotic boundary condition (ABC) as a way to avoid dealing with such a problem. The ABC is an isotropic, averaging type of boundary condition that does not require a detailed description of the strips. This boundary condition can be implemented easily, in the same way that perfectly conducting surfaces and dielectric materials are implemented in the FDTD method. It has to be mentioned that the ABC is accurate when dealing with conducting strips as long as the number of strips per wavelength is large, typically more than 10 strips per wavelength. However, experimental data have shown that the ABC still holds for certain cases with as few as 4 strips per wavelength. Another type of structure that can be modeled using the ABC is the corrugated structure, where the corrugations can be modeled easily without the need for their detailed description. With the ability to classify the corrugated region as a homogeneous region, the division into cubic cells can be employed fully for the problem at hand. The method of implementing these boundary conditions in the FDTD method will be presented together with some applications.
11:00 ANALYSIS OF TRANSMISSION AND REFLECTION LOSSES FROM A CLASS OF COPLANAR WAVEGUIDE STRUCTURES
Abdelnasser A. Eldek*, Atef Z. Elsherbeni, and Charles E. Smith, The University of Mississippi, University, MS 38677
In this paper, several geometries of a class of grounded coplanar waveguide (GCPW) are investigated using the finite difference time domain (FDTD) method, and their losses are computed. A uniform GCPW structure is used as a reference case for the other, non-uniform geometries. Starting from this reference case, four geometries are proposed to study the transmission and loss effects of replacing parts of the dielectric substrate with free space. Afterwards, two new geometries are simulated to study the effect of reducing the feed line width in a limited section (a step), and of introducing a gap in the microstrip feeding line, with and without a bridge that connects the two parts of the microstrip feeding line separated by the gap. The effect of adding a perfect electric conductor (PEC) cap above the microstrip feeding line, connecting the two side ground planes, is also studied. The conductor attenuation, power losses, and the input and output impedances are studied for the proposed geometries. The results show that adding more free space in the substrate improves transmission, decreases power losses, and increases both input and output impedances. It is also found that the relative power losses and conductor attenuation increase with frequency, while a PEC cap improves the transmission, and adding a PEC bridge over a gapped feed line improves both the transmission and return loss coefficients.
11:20 ANALYSIS OF DUAL TAPERED MEANDER SLOT MICROSTRIP ANTENNA
Cuthbert M. Allen*, Atef Z. Elsherbeni, and Charles E. Smith, University of Mississippi, University, MS 38677
The objective of this paper is to examine the performance of a dual tapered meander slot microstrip antenna using the FDTD technique. The antenna is to be designed to work at three pre-determined frequencies in the X-band or over the entire X-band region of frequencies. A computer code is developed to automatically create the antenna geometry for any angle, number of turns, width of slot, and spacing between slots. Such flexibility in the geometry parameters is essential for easily analyzing different configurations of the antenna. In order to have a complete analysis, certain characteristics of the antenna must be studied, among them the return loss, the input impedance, and the far field pattern. There has been no known published work on such an antenna structure. However, numerous papers have been written on other forms of spiral or meander line antennas. These antennas have been very useful in broadband applications since they are largely frequency independent, which motivates the development of the dual tapered meander slot microstrip antenna. In the analysis of this antenna, the return loss is first computed and compared with an independent solution. The antenna is shown to work over a very large portion of the entire X-band.
1:00 ANALYSIS OF MODIFIED MICROSTRIP LOOP ANTENNAS
Matthew J. Inman*, Atef Z. Elsherbeni, and Charles E. Smith, The University of Mississippi, University, MS 38677
The characteristics of ordinary loop antennas are well known and documented in the literature. This paper explores methods to reduce the physical size of rectangular printed loop antennas by introducing a ground plane into the structure, and studies the effects of this modification on the antenna parameters. With the introduction of the ground plane into the antenna structure, the relationships among the dimensions of the modified loop are also explored to achieve the design goals. This type of antenna is analyzed using the finite difference time domain (FDTD) technique, and the results are then verified with other numerical simulation packages. Differences in the radiation characteristics obtained from the full (unmodified) loop antennas and the modified antennas are examined. Optimization of bandwidth, gain, directivity, and operational bands is also investigated. By manipulating the dimensions of the modified antenna it is possible to adjust its parameters to achieve maximal operation in a specific frequency band, or in some cases in several different bands concurrently. Reducing the physical size of the antenna allows for more commercial uses in mobile transceiving platforms. Design examples of this type of antenna for radar applications and personal communication devices are presented.
1:20 BANDWIDTH ENHANCEMENT OF THE DIELECTRIC RESONATOR ANTENNA BY ADDITION OF MAGNETIC MATERIALS
Swee H. Ong* and Ahmed A. Kishk, University of Mississippi, University, MS 38677
The dielectric resonator antenna has been widely investigated in recent years for its high efficiency and mechanical flexibility. Here, a monopole antenna is loaded with a multilayer dielectric material acting as a dielectric resonator. The dielectric loading has two significant effects: first, it reduces the size of the antenna, and second, it improves the antenna matching bandwidth significantly. Last year, results for this antenna with dielectric loading were presented. This year, homogeneous magnetic materials with small permeability constants are added to the dielectric materials. It is believed that these new materials can be obtained by mixing the magnetic materials with the dielectrics, in powder form, in a specific ratio to achieve the required permittivity and permeability. It is observed that the radiation patterns of the monopole are not affected by the loading. Careful selection of the resonator material can result in wider bandwidths. Numerical results presenting the effects of different permittivities and permeabilities on an antenna for a cellular communication system will be shown. The results will show that a significant increase in bandwidth, reaching 40%, can be obtained. This is achieved by the combination of different materials, including the use of homogeneous magnetic materials, which lowers the resonant frequency as compared with dielectric loading alone.
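The frequency-lowering effect of magnetic loading follows from the fact that the wavelength inside a material shrinks by a factor of the square root of the permittivity-permeability product, so a fixed-size resonator's resonant frequency scales the same way. A minimal sketch of this standard relation (the material values below are hypothetical, not those of the paper):

```python
import math

def resonance_scale(eps_r, mu_r=1.0):
    """Resonant-frequency scale factor for a fixed-size resonator filled with
    a material of relative permittivity eps_r and permeability mu_r:
    f is proportional to 1 / sqrt(eps_r * mu_r)."""
    return 1.0 / math.sqrt(eps_r * mu_r)

# Even a modest permeability lowers the resonance beyond what the same
# permittivity achieves alone (illustrative values only).
dielectric_only = resonance_scale(9.0)         # eps_r = 9, mu_r = 1
magneto_dielectric = resonance_scale(9.0, 2.0) # same eps_r, mu_r = 2
```

This is why, for a target frequency, a magneto-dielectric filling allows a smaller resonator than a purely dielectric one of the same permittivity.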
1:40 DESIGN, CONSTRUCTION, AND VERIFICATION OF AN AUTOMATED MOVEABLE INDOOR ANECHOIC CHAMBER FOR ANTENNA MEASUREMENTS
Brian T. McDaniel*, Ahmed A. Kishk, and Charles E. Smith, The University of Mississippi, University, MS 38677
This paper presents the design and construction of a moveable anechoic chamber for use in RF, wireless, and microwave education and research applications. The instrumentation used for anechoic chamber measurements is a system consisting of an Agilent/HP 8530 Microwave Receiver, 8530B Sweep Generator, 8714B S-Parameter Test Set, and a computer-controlled rotator for antenna pattern measurements, designed at The University of Mississippi. This computer-controlled system allows for single frequency measurements as well as swept frequency techniques. The operation of the antenna rotator is automated, using a stepper motor, with user-selected rotation angles and signal sampling intervals. Furthermore, the measured antenna pattern and antenna position data are acquired using a PC, providing real-time pattern display of raw or processed data and simple data storage. The design of the anechoic chamber is presented, including necessary parts, setup, and calibration methods (for single or swept frequency measurements), and the construction is described. The verification of the useable quiet zone for measurement is studied, and a detailed presentation of two anechoic chamber figure-of-merit methods (Antenna Pattern Comparison and VSWR) is given, along with examples of antenna pattern measurements collected in the University of Mississippi Indoor Anechoic Chamber. Using these techniques, The University of Mississippi has implemented an efficient way to measure, evaluate, and characterize experimental antennas and scattering systems in an anechoic chamber whose electromagnetic properties are known.
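Since VSWR is one of the chamber figure-of-merit quantities mentioned, the standard textbook conversions among reflection-coefficient magnitude, return loss, and VSWR are worth recording; these are general relations, not specifics of the chamber described.

```python
import math

def vswr_from_gamma(gamma_mag):
    """VSWR from reflection-coefficient magnitude |Gamma| (0 <= |Gamma| < 1)."""
    return (1 + gamma_mag) / (1 - gamma_mag)

def return_loss_db(gamma_mag):
    """Return loss in dB, expressed as a positive number."""
    return -20.0 * math.log10(gamma_mag)

# Example: a 20 dB return loss corresponds to |Gamma| = 0.1
gamma = 10 ** (-20.0 / 20.0)
vswr = vswr_from_gamma(gamma)
```

A 20 dB return loss thus maps to a VSWR of about 1.22, which is the kind of conversion used when quoting quiet-zone reflectivity in either figure of merit.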
2:00 HYDRAULIC SCALE MODELS
T.M. Parchure, US Army Engineer Research and Development Center, Vicksburg, MS 39180
Hydraulic scale models have been extensively and successfully used over the past century for solving a variety of engineering problems for which analytical solutions were not available. The objective of a scale model study is to physically simulate the relevant natural conditions and to operate the model to predict the effect of proposed changes. The theory of modeling is based on similitude criteria, and the construction and operation of scale models require expertise and experience. Two types of models are used: (a) geometrically similar, in which the horizontal and vertical scales are the same, and (b) vertically exaggerated, in which the two scales are different. Once the two scales are carefully selected, the other scales are fixed by mathematical relationships. Scale models may be two-dimensional or three-dimensional. Almost every hydraulic model needs field data for verification. After a model is verified for a certain set of parameters, it is assumed that it will also behave consistently for a new set of values; hence models are used as predictive tools. Hydraulic models offer not only qualitative but also quantitative answers to a large number of problems. Numerous examples of past projects reveal that large projects constructed without adequate prediction of their effects have proved to be not only expensive mistakes but have, in some cases, also caused irreversible ecological damage. Such mistakes can be easily avoided through advance modeling, in which several alternatives and options can be investigated at a fraction of the cost of the project. In conclusion, hydraulic models offer a valuable tool for achieving success and economy in engineering projects. The theory of scale modeling and practical examples of problems will be presented.
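For the geometrically similar (undistorted) case, the similitude criterion most often used in free-surface hydraulics is Froude similarity, which fixes all the derived scale ratios once the length scale is chosen. A minimal sketch of these standard relationships (the 1:100 ratio is an arbitrary example, not from the abstract):

```python
import math

def froude_scales(length_ratio):
    """Derived prototype-to-model ratios for an undistorted Froude-similar
    hydraulic model, where length_ratio = L_prototype / L_model."""
    return {
        "velocity": math.sqrt(length_ratio),   # from Fr = V / sqrt(g * L)
        "time": math.sqrt(length_ratio),       # T = L / V
        "discharge": length_ratio ** 2.5,      # Q = V * L^2
    }

scales = froude_scales(100.0)  # e.g. a 1:100 undistorted model
```

So a current of 0.1 m/s measured in a 1:100 model corresponds to 1 m/s in the prototype, and one model hour represents ten prototype hours; vertically exaggerated models use the vertical scale in the Froude relations, which is one reason their operation demands the expertise the abstract mentions.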
2:20 THE ACCURACY OF KEYSTROKE INTERVAL MEASUREMENTS IN A WINDOWS ENVIRONMENT
Marcus Parks* and Mark Tew, University of Mississippi, University, MS 38677
Psychological experiments were once conducted in isolated and controlled environments, but now thousands of researchers in fields such as cognitive psychology conduct experiments over the web. Timing accuracy is of perennial concern to researchers conducting reaction time studies, especially for those using computers that are not dedicated to attaining millisecond accuracy. The following equipment was used during the research: a desktop computer, a laptop computer, a keystroke generator, a National Instruments DAQPad, and a key interface box. Two programs, "Key Interval" and "Digital Waveform," were also used. The research was conducted under four environments: default, experiment only, experiment with web traffic, and experiment with a time-killer process. These four conditions were tested through an executable file and over the web through a web browser. The results from the executable file were compared to the results of the web-delivered file. Next, the web browser results were compared between Internet Explorer and Netscape. Finally, the program was executed through an E-Prime player, and these results were compared with the earlier results obtained through an Authorware plug-in. The research is ongoing, and much more data will be needed before a final conclusion can be drawn. Preliminarily, however, Authorware and E-Prime were found to be capable of acceptable accuracy in measuring keystroke intervals. This held true in the default environment, the experiment-only environment, and with light web traffic in the background.
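The millisecond-accuracy concern can be probed on any machine: software timestamps can be no finer than the step of the clock the runtime exposes. The sketch below (ordinary Python, not the Authorware/E-Prime tooling used in the study) estimates the granularity of the high-resolution performance counter, which bounds how finely keystroke intervals can be timestamped in software, separately from OS scheduling and input-queue latency.

```python
import time

def clock_step(samples=100_000):
    """Smallest observed nonzero difference between successive reads of the
    high-resolution performance counter; an upper bound on its granularity."""
    smallest = float("inf")
    for _ in range(samples):
        t0 = time.perf_counter()
        t1 = time.perf_counter()
        if t1 > t0:
            smallest = min(smallest, t1 - t0)
    return smallest

step = clock_step()
```

On modern systems this step is far below a millisecond, which is why the dominant error sources in studies like this one are event delivery and background load rather than the clock itself.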
2:40 Divisional Business Meeting
Physics and Engineering Session
Chair: T.M. Parchure, US Army ERDC, Vicksburg, MS 39180
8:30 EXPLANATION OF THE BIRTH OF THE UNIVERSE
Amin Haque, Alcorn State University, Alcorn State, MS 39096
Four important observations, namely the expansion of the Universe, the cosmic microwave background radiation, the formation of light nuclides, and the formation of galaxies and large-scale structure, prove that the universe has a starting point. According to the general theory of relativity, the universe is expanding because space-time itself expands. Edwin Hubble's observations show a linear relationship between the distance to galaxies and their recessional velocities (redshifts), and the age of the universe is estimated at about 1.5 × 10^10 years. The 3 K primordial microwave radiation, predicted by George Gamow using quantum physics, has been discovered and recently measured accurately. The lumps detected in the cosmic microwave radiation confirm that the primordial lumpiness, caused by slight temperature fluctuations at a very early age, has been carried over into modern times. The galaxies were formed due to fluctuations in matter density. Particle physicists speculate that hot dark matter, consisting of high-energy particles like neutrinos, tends to form very large structures, while cold dark matter seems to form smaller structures such as galaxies and clusters of galaxies. Inflation could have amplified small fluctuations into gravitational fields powerful enough to form galaxies and clusters. Using nuclear and sub-nuclear physics, the makeup of the universe is estimated as: protons, neutrons, and electrons 5%; dark matter 30%; "dark energy" 65%. We have reason to believe that physics, cosmology, astronomy, and technology will take us 1.5 × 10^10 years back in time to see the birth of our universe.
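The roughly 1.5 × 10^10-year figure quoted above is essentially the Hubble time 1/H0: if recession obeys the linear law v = H0 d, every galaxy was co-located a time d/v = 1/H0 ago. A quick check of the arithmetic (the Hubble-constant value here is an assumption chosen only to match the era's estimates):

```python
KM_PER_MPC = 3.0857e19    # kilometers in one megaparsec
SECONDS_PER_YEAR = 3.156e7

def hubble_time_years(H0_km_s_per_mpc):
    """Age estimate 1/H0 in years, with H0 given in km/s/Mpc."""
    return KM_PER_MPC / H0_km_s_per_mpc / SECONDS_PER_YEAR

age = hubble_time_years(65.0)  # H0 ~ 65 km/s/Mpc, an assumed value
```

With H0 near 65 km/s/Mpc the Hubble time comes out close to 1.5 × 10^10 years; the true age differs from 1/H0 by a factor of order one that depends on how the expansion rate has changed over time.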
8:50 UNDERSTANDING THE FATE OF THE UNIVERSE
Amin Haque, Alcorn State University, Alcorn State, MS 39096
According to the inflation theory and the general theory of relativity, the Universe is flat and expanding at a decelerating rate. In 1998, two international teams of astronomers working independently announced that distant supernovae are dimmer than they should be in a decelerating universe, indicating that the expansion of the Universe is accelerating. It is believed that the universe is filled with "antigravity," a concept originally proposed by Einstein. This force, or "dark energy," dominates over gravity and causes the universe not only to expand but also to accelerate its expansion. The equations of quantum physics independently suggest that the apparently empty space in the universe is filled with a form of energy that would act just like Einstein's antigravity. Dark energy is a function of space: as the distance between galaxies increases, more and more dark energy is created to fill the space, becoming stronger than gravity. The recent detection of primordial lumpiness in the cosmic background radiation supports the idea that dark energy is real. As the universe continues its accelerated expansion, stars will die out, and the universe will become cold and black. Astronomers and physicists caution that the discoveries about dark matter, dark energy, accelerated expansion, and the flatness of space-time must be confirmed before being accepted without reservation. We might expect surprises in the future. The cosmological constant could even change sign to reinforce gravity; therefore a closed universe, and hence a Big Crunch, is possible.
9:10 UNDERSTANDING THE CREATION OF MATTER
Amin Haque, Alcorn State University, Alcorn State, MS 39096
According to the Standard Model, the behavior of the fundamental particles, quarks and leptons--the building blocks of matter--and their interactions can be described through the strong, weak, and electromagnetic forces. The basic forces are transmitted between the quarks and leptons by a third family of fundamental particles, called gauge bosons. Each force is carried by a different type of gauge boson: photons carry the electromagnetic force, gluons carry the strong force, and the charged W and neutral Z bosons carry the weak force. (A particle called the graviton is believed to carry the force of gravity.) The quarks, leptons, and gauge bosons acquire their masses through interaction with a member of another family of fundamental particles, known as Higgs bosons, and it is the strength of this interaction that gives the particles their masses. The experimental results from the Large Electron-Positron Collider and the predictions of the Standard Model agree only if there are three types of neutrinos. As the electron-positron energy is expected to be doubled, the experiment will be able to study in detail the charged bosons W- and W+. In the Large Hadron Collider, the proton energies will be ten times greater, providing an opportunity to glimpse the early Universe. The interactions of the hadrons will produce the Higgs boson, or will reveal whatever mechanism Nature employs to create matter.
9:30 I FELT LIKE GILBRETH!
David Loflin* and S. Kant Vajpayee, The University of Southern Mississippi, Hattiesburg, MS 39406
Frank and Lillian Gilbreth made significant contributions to industrial engineering in its infancy. It is interesting to know how Frank applied what can only be called common sense to improve a bricklayer's productivity threefold. Even today, the practice of industrial engineering--unlike other disciplines such as civil engineering, mechanical engineering, or electrical engineering--draws a great deal on common sense. Or shall we say uncommon sense, for if it were common it could not be engineering. Recently, the first author went through a realization probably similar to that of Frank Gilbreth while working part-time as a student at a local golf course. What he did as part of his job was simply applying common sense. Later, when he began his studies at The University of Southern Mississippi as an industrial engineering technology major, he began to interact with the second author and slowly learned what industrial engineering is all about. The presentation will describe his feeling of Eureka when he realized that the common-sense approach to the task of distributing golf carts had the seeds of industrial engineering. We wonder what Gilbreth, the industrial engineer, might have felt later in his life about the spark generated by his now famous common-sense-based improvements in the bricklaying process. In the present case, the golf carting was completed efficiently, thoroughly, error-free, and on time, and these are the measures for evaluating industrial engineering projects.
9:50 THE ISO 9000 BEGAN IT ALL!
S. Kant Vajpayee, The University of Southern Mississippi, Hattiesburg, MS 39406
The worldwide quality revolution that began in the 1970s culminated in ISO 9000. Following this, and in response to a growing concern for the environment, the ISO developed ISO 14000--a family of environmental management standards--in 1996. The structure and the concept behind these two standards are the same; both aim at improvement--the former of product and service quality, and the latter of the environment. Several sectors of industry have developed their own standards based on ISO 9000 structures and concepts to suit their specific needs. The Big Three US automakers established QS 9000 in 1994 as requirements for their 13,000 first-tier suppliers. As a supplement, they also developed TE 9000 for their 50,000 suppliers of production equipment and tooling. The aerospace industry, led by Boeing, has adopted a similar standard called AS 9000. Another important industry impacted by these global standards is telecommunications. Its TL 9000 serves the same purpose as QS 9000 does for the auto industry. Other issues in the "pipeline" to undergo a similar impact are occupational health and safety, handling of customer complaints, worker welfare, and personal finance planning. Thus, ISO 9000 has proven to be "revolutionary." The US accepted ISO 9000 rather hesitantly, as is obvious from an ex-Ford chief's statement: "I told people we can either adopt the ISO standard and build on it, or we would spend the rest of our lives telling people why we didn't. We figured it just made sense." But as the wisdom set in, we moved fast and are today one of its strong proponents.
10:30 COMPUTER SIMULATION OF THE THERMAL MANAGEMENT
Nichalos L. [Jeffries.sup.*] and Tyrus McCarty, University of Mississippi, University, MS 38677
The trend in packaging electronic systems has been to reduce the size of devices by placing more functions in smaller packages to increase their performance. This has contributed to higher heat densities, requiring that thermal management be given a high priority in the design cycle in order to maintain system performance and reliability. In this study, the analysis involved the thermal management of an electronically packaged system associated with a radar subsystem. A computer simulation of the heat transfer in the radar subsystem was performed to test the effectiveness of several specified design conditions. A computational numerical technique was employed to solve the basic equations that governed the physical processes occurring in the radar subsystem. This study focused on the utilization of heat pipe/heat sink technology for cooling the electronic components associated with this subsystem. The results of this study are vital because they provide several effective ways of ensuring that the electronic components are operated in a failure-safe environment.
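The abstract does not give the authors' governing equations or solver, but the kind of computational technique it describes typically discretizes the heat conduction equation. As a hedged, minimal sketch (not the authors' model), the following solves 1D transient conduction dT/dt = alpha * d2T/dx2 with an explicit finite-difference scheme; all geometry, material, and boundary values are illustrative assumptions:

```python
import numpy as np

def solve_heat_1d(n=51, length=0.1, alpha=1e-4, t_end=10.0,
                  t_hot=100.0, t_cold=25.0):
    """Explicit (FTCS) finite-difference sketch of 1D transient conduction.

    Hypothetical setup: a bar of `length` meters with a fixed hot face
    (e.g., a component junction) at one end and a fixed cold face
    (e.g., a heat sink) at the other.
    """
    dx = length / (n - 1)
    dt = 0.4 * dx**2 / alpha          # respects FTCS stability: dt <= dx^2 / (2*alpha)
    temp = np.full(n, t_cold)
    temp[0] = t_hot
    for _ in range(int(t_end / dt)):
        lap = temp[:-2] - 2.0 * temp[1:-1] + temp[2:]   # discrete second derivative
        temp[1:-1] += alpha * dt / dx**2 * lap
        temp[0], temp[-1] = t_hot, t_cold               # Dirichlet boundaries
    return temp

profile = solve_heat_1d()  # temperature falls monotonically from hot to cold face
```

A production thermal-management simulation would of course be 3D, include convection and heat-pipe models, and use an implicit or commercial solver; this only illustrates the discretization idea.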
10:50 ACOUSTICAL INVESTIGATION OF SOIL LAYERS
Wheeler B. [Howard.sup.*], Craig J. Hickey, and James M. Sabatier, University of Mississippi, University, MS 38677
In soils containing a fragipan horizon, the rate of soil erosion and crop yield are directly related to the depth and condition of the fragipan layer. The objective of this study is to develop a technique for delineation of the fragipan horizon through the use of acoustic-to-seismic (a/s) coupling. This method utilizes the seismic energy coupled into soil by an impinging airborne acoustic wave to determine the layer depth and velocities in the soil. A site in North Mississippi with a known fragipan horizon was chosen as the test site for our experiment. Two locations with physical depths to the fragipan of about 30 cm and 1 m were selected. Conventional techniques, including a shallow refraction survey and a Rayleigh-wave survey, were utilized to determine the "seismic" depth to the fragipan and the seismic speeds of the soil layers. The data obtained from the a/s coupling survey are compared to the conventional seismic data and soil cores to establish the accuracy of the acoustic survey.
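The abstract does not spell out the refraction interpretation used; as a hedged illustration of the standard two-layer method behind such "seismic depth" estimates, the depth to a faster layer can be recovered from the crossover distance where refracted arrivals overtake direct arrivals on the travel-time plot. The velocities and crossover distance below are illustrative, not survey data:

```python
import math

def refractor_depth(v1, v2, x_cross):
    """Two-layer refraction depth estimate (textbook formula):
    z = (x_cross / 2) * sqrt((v2 - v1) / (v2 + v1)),
    where v1, v2 are upper- and lower-layer wave speeds (v2 > v1)
    and x_cross is the crossover distance.
    """
    if v2 <= v1:
        raise ValueError("refraction interpretation requires v2 > v1")
    return 0.5 * x_cross * math.sqrt((v2 - v1) / (v2 + v1))

# Hypothetical numbers: slow soil over a stiffer, fragipan-like layer.
depth = refractor_depth(v1=250.0, v2=600.0, x_cross=1.2)  # speeds in m/s, distance in m
```

With these assumed values the estimate is roughly 0.39 m, consistent in scale with the shallow (~30 cm to 1 m) fragipan depths described in the abstract.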
11:10 COHERENT BACKSCATTERING FROM COMPLEX LIQUIDS
Letemeskel [Asfaw.sup.*] and Joe Whitehead, University of Southern Mississippi, Hattiesburg, MS 39406
An experimental study of coherent backscattering of light from an aqueous suspension of micron-sized polystyrene spheres and a bulk sample of nematic liquid crystal (NLC) is presented. Both of these materials enhance multiple scattering, which is a universal phenomenon occurring in almost every branch of physics. Coherent backscattering is caused by the constructive interference of two waves traveling in opposite directions. A sharp peak in intensity, in contrast with the background, appears within a narrow angular cone in the direction opposite the incident beam. This peak is called the coherent backscattering cone. The angular width at half maximum of the enhanced backscattering peak depends on the mean free path and the wavelength of the laser light used in the experiment. We found the enhancement ratio for the polystyrene samples to be very close to 2.0, but for the NLC it is 1.67, a discrepancy due to the anisotropic nature of nematic liquid crystals.
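The abstract notes that the cone width depends on the mean free path and the wavelength but gives no numbers. As a hedged order-of-magnitude sketch (using the commonly quoted scaling FWHM ≈ 0.7 λ / (2π ℓ*), with ℓ* the transport mean free path; prefactor and all input values below are assumptions, not measured data):

```python
import math

def cone_fwhm_mrad(wavelength_nm, l_star_um):
    """Estimate the coherent-backscattering cone width (FWHM) in milliradians.

    Assumes the standard diffusion-theory scaling FWHM ~ 0.7 * lam / (2*pi*l_star);
    the exact prefactor depends on the sample and polarization channel.
    """
    lam = wavelength_nm * 1e-9      # wavelength in meters
    l_star = l_star_um * 1e-6       # transport mean free path in meters
    return 0.7 * lam / (2.0 * math.pi * l_star) * 1e3

# Hypothetical inputs: a HeNe laser (632.8 nm) and l* = 20 um.
width = cone_fwhm_mrad(632.8, 20.0)  # a few milliradians: the cone is narrow
```

This illustrates why the cone is confined to a narrow angular range: for mean free paths of tens of microns the width is only a few milliradians, which is why such measurements require fine angular resolution.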
11:30 DEBONDING CHARACTERISTICS IN BONDED CONCRETE OVERLAYS
Hak-Chul [Shin.sup.*] and David A. Lange, Jackson State University, Jackson, MS 39217, and University of Illinois at Urbana-Champaign, IL
To rehabilitate old concrete pavements for long-term performance with only minor repairs, bonded concrete overlays are considered a cost-effective strategy. The superior performance of bonded concrete overlays is sometimes compromised by early-age surface cracking and/or debonding at the interface between old and new concrete. This early-age failure is mainly due to volume changes of the overlay concrete caused by shrinkage and thermal changes. To understand the early-age behavior of bonded concrete overlays, extensive experimental measurements and numerical analyses were carried out. Laboratory overlay specimens were fabricated to measure opening displacement at the interface. Debonding profiles at the interface were determined by a dye technique. A finite element model was developed to compare debonding behavior due to volume changes. From the experimental measurements and numerical analysis, it was found that bonded concrete overlays with HPC (high-performance concrete) mixtures have a strong tendency to debond at early ages. The main reasons for this tendency are the high shrinkage gradient in the HPC mixtures and the low bond strength at the interface.
11:50 ARE INDUSTRIAL ENGINEERS REAL ENGINEERS IN THE CLASSICAL SENSE?
S. Kant Vajpayee, The University of Southern Mississippi, Hattiesburg, MS 39406
Unlike others such as civil engineers, mechanical engineers, or electrical engineers, industrial engineers are less well known for what they do. There are two primary reasons for this. First, they are a relatively new breed. Second, they are not real engineers in the classical sense of the term. They are half engineers (professionals dealing with engineering and technical matters) and half managers (professionals focusing on optimally utilizing resources, including people). Industrial engineers are unique as an interface between technical professionals and management. The former have limited knowledge of what managers do, and the latter are busy mostly with non-engineering/non-technical functions. If industrial engineers are fortunate, both the managers and the other engineers of the company "love" them. On the other hand, where they are not appreciated for what they do, both may attempt to "kick" them around. The adjective industrial truly describes them, because they are found in all types of industries--not just manufacturing--from health care to wealth care, from doughnut franchises to defense contractors, from shop floors to space programs. Known earlier as productivity engineers, and then as systems engineers, now they are called integrators. They have eagle eyes for the forest rather than the trees. Other engineers, on the other hand, are skilled at focusing on the trees. As technology gets more sophisticated and globalization continues to make industries more complex, industrial engineers are finding themselves needed more than ever.