
Mathematics, Computer Science and Statistics.

Co-Chair: Constance Bland, Mississippi Valley State University

Co-Chair: Jason Hale, University of Mississippi

Vice-chair: Joseph Kolibal, University of Southern Mississippi

THURSDAY MORNING

Dogwood

O9.01

9:00 IMPLEMENTING VARIOUS CHECKLISTS IN ELECTRONIC FORM USING THE XML AND PYTHON PROGRAMMING LANGUAGES

Marcus Winn

Mississippi Valley State University

The Electronic Logbook (E-Log) is an information storage and retrieval system that records the daily activities of the particle detectors located in the D0 Experimental Hall. The detector records collisions of protons and antiprotons at Fermi National Accelerator Laboratory. E-Log was developed using XML, HTML, and the Python programming language. The problem with E-Log was that data had to be retrieved from an external database manually and keyed into a form for the E-Log database. My project was to create scripts that automatically retrieve data from the external database and web pages and insert the data into the E-Log database. This automation eliminated a tedious task for the crew members, allowing them to perform their jobs more efficiently.
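
As a rough sketch of the kind of script involved, assuming a hypothetical XML status page and a hypothetical E-Log table (the real endpoints and schema are not described in the abstract):

    # Hypothetical sketch of an automated retrieve-and-insert script.
    # The URL, element names, and table are illustrative, not the real E-Log schema.
    import sqlite3
    import urllib.request
    import xml.etree.ElementTree as ET

    EXTERNAL_URL = "http://example.fnal.gov/status.xml"   # placeholder endpoint

    def fetch_readings(url=EXTERNAL_URL):
        """Download an XML status page and extract (name, value) pairs."""
        with urllib.request.urlopen(url) as response:
            root = ET.fromstring(response.read())
        return [(node.get("name"), node.text) for node in root.iter("reading")]

    def insert_readings(readings, db_path="elog.db"):
        """Insert the fetched values into the (hypothetical) E-Log table."""
        conn = sqlite3.connect(db_path)
        conn.executemany(
            "INSERT INTO shift_checklist (item, value) VALUES (?, ?)", readings)
        conn.commit()
        conn.close()

    if __name__ == "__main__":
        insert_readings(fetch_readings())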

O9.02

9:30 IMPLEMENTING E-MAIL ALERTS TO THE GCC TEMPERATURE MONITORING PROGRAM

DeMarcus Thomas

Mississippi Valley State University

This document outlines efforts to determine an efficient means of monitoring a high-density computing center remotely through a web browser. To accomplish this, a web-based temperature monitoring program was created and implemented at Fermi National Accelerator Laboratory's Grid Computing Center (GCC). The Python program was designed to read data from thermocouple heat sensors and to superimpose their values onto a floor map of GCC. The software created a virtual floor map that placed the temperature readings in the proper locations relative to the sensors in the actual room. Additions to the program included the creation of a temperature threshold: when the threshold is exceeded (temperature > 95.0 °F), e-mail is sent to the appropriate personnel via a mailing list.
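
A minimal sketch of such threshold-alert logic in Python, assuming readings arrive as a dictionary of sensor names to temperatures (the SMTP host and addresses are placeholders):

    import smtplib
    from email.message import EmailMessage

    THRESHOLD_F = 95.0
    MAILING_LIST = ["facilities@example.gov"]   # placeholder mailing list

    def check_and_alert(readings, smtp_host="localhost"):
        """Send one e-mail listing every sensor above the 95.0 °F threshold."""
        hot = {name: t for name, t in readings.items() if t > THRESHOLD_F}
        if not hot:
            return
        msg = EmailMessage()
        msg["Subject"] = "GCC temperature alert: %d sensor(s) over threshold" % len(hot)
        msg["From"] = "gcc-monitor@example.gov"
        msg["To"] = ", ".join(MAILING_LIST)
        msg.set_content("\n".join("%s: %.1f F" % (n, t) for n, t in sorted(hot.items())))
        with smtplib.SMTP(smtp_host) as server:
            server.send_message(msg)

    check_and_alert({"rack-12-top": 97.3, "rack-07-mid": 88.1})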

O9.03

10:00 EMWASTE DATABASE TRANSFER AND FUNCTIONALITY ASSESSMENT

Alvin McClerkin

Mississippi Valley State University

I conducted an assessment of the data transfer from the now inactive Waste Inventory Tracking System (WITS) to the active, operable eM Waste[R] System. The assessment involved two separate verification phases. The first phase entailed a 100% review of the WITS inventory used for the data port to eM Waste[R]. This involved a one-to-one comparison of containers on the WITS inventory list against the operable eM Waste[R] production database. The second phase involved random selection of containers within the eM Waste[R] inventory system for field comparison of storage location, container size, type, and weight. Waste containers were also randomly selected at field locations and cross-referenced against the eM Waste[R] inventory database. In addition, a small subset of the containers selected in the second phase was cross-referenced against the Facility Acceptance Testing-Container Analysis Tool (FATCAT) system to verify that critical radiological parameters were accurately and adequately tracked, so that waste management storage facility categorization levels were maintained.
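
The core of the first, one-to-one phase can be pictured as a set comparison; a toy Python sketch with invented container IDs:

    def compare_inventories(wits_ids, emwaste_ids):
        """Return containers missing from eM Waste and extras not in WITS."""
        wits, emwaste = set(wits_ids), set(emwaste_ids)
        return {"missing_from_emwaste": sorted(wits - emwaste),
                "not_in_wits": sorted(emwaste - wits)}

    report = compare_inventories(["C-001", "C-002", "C-003"],
                                 ["C-001", "C-003", "C-004"])
    print(report)  # {'missing_from_emwaste': ['C-002'], 'not_in_wits': ['C-004']}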

O9.04

10:30 A DEM CONVERSION TOOL

Sufian Ahmad (1), Pamela Lawhead (1)

(1) University of Mississippi, (2) Institute of Advanced Education for Geospatial Sciences

This is a program that converts Digital Elevation Models (DEM) into 3D polygon files of type PLY. The research opens possibilities for creating Binary Space Partitioning (BSP) files and for using E00 files from the Mississippi Automated Resource Information System (MARIS).
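
A minimal sketch of such a conversion, under the simplifying assumption that the DEM has already been read into a regular grid of elevations (the real tool's DEM parsing and options are not reproduced):

    import numpy as np

    def dem_to_ply(elev, path="terrain.ply", spacing=1.0):
        """Write a regular elevation grid as an ASCII PLY triangle mesh."""
        rows, cols = elev.shape
        faces = []
        for r in range(rows - 1):
            for c in range(cols - 1):
                i = r * cols + c
                faces.append((i, i + 1, i + cols))           # two triangles per cell
                faces.append((i + 1, i + cols + 1, i + cols))
        with open(path, "w") as f:
            f.write("ply\nformat ascii 1.0\n")
            f.write("element vertex %d\n" % (rows * cols))
            f.write("property float x\nproperty float y\nproperty float z\n")
            f.write("element face %d\n" % len(faces))
            f.write("property list uchar int vertex_indices\nend_header\n")
            for r in range(rows):
                for c in range(cols):
                    f.write("%f %f %f\n" % (c * spacing, r * spacing, elev[r, c]))
            for a, b, c in faces:
                f.write("3 %d %d %d\n" % (a, b, c))

    dem_to_ply(np.random.rand(4, 4) * 10.0)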

O9.05

11:00 RESEARCH ON HYBRID ARIMA AND SUPPORT VECTOR MACHINE MODEL IN FORECASTING ENVIRONMENTAL SUSTAINABILITY

Benjamin Harvey, Joshua Fu

Mississippi Valley State University

Because environmental sustainability is complex and its historical data are affected by many uncertain, random influencing factors, the observed historical data show both linear and nonlinear characteristics. Currently, the autoregressive integrated moving average (ARIMA) model is one of the popular linear models in time series forecasting, and the support vector machine (SVM), which is closely related to neural network models, has recently been used successfully to solve nonlinear regression and time series problems. This research presents a hybrid methodology that combines the ARIMA and SVM models to take advantage of their unique strengths in linear and nonlinear modeling to help forecast environmental sustainability. The ARIMA model is used to capture the linear pattern of the sustainability data, and the SVM model is used to address the nonlinear component. The effectiveness of the model has been tested using nonlinear patterns in several research studies to forecast environmental pollutants, which play a major role in forecasting environmental sustainability. The test results of the models vary according to the changes in each of the variables. The experimental results showed that the hybrid model can effectively improve forecasts of environmental sustainability beyond what either model achieved used separately.
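
A minimal sketch of one common form of such a hybrid (ARIMA for the linear part, an SVM trained on the ARIMA residuals for the nonlinear part); the data, model order, and lag count below are illustrative, not the study's:

    import numpy as np
    from statsmodels.tsa.arima.model import ARIMA
    from sklearn.svm import SVR

    def hybrid_forecast(y, order=(1, 1, 1), lags=3):
        """One-step-ahead forecast = ARIMA forecast + SVR residual correction."""
        arima = ARIMA(y, order=order).fit()
        resid = arima.resid
        # Lagged residual features: predict resid[t] from the previous `lags` values.
        X = np.column_stack([resid[i:len(resid) - lags + i] for i in range(lags)])
        t = resid[lags:]
        svr = SVR(kernel="rbf").fit(X, t)
        resid_next = svr.predict(resid[-lags:].reshape(1, -1))[0]
        return arima.forecast(steps=1)[0] + resid_next

    y = np.sin(np.linspace(0, 20, 120)) + np.random.normal(0, 0.1, 120)
    print(hybrid_forecast(y))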

O9.06

11:30 CREATING INTER-PLATFORM COMMUNICATION BETWEEN LEGO'S RCX AND NXT AND THE IR TOWER FOR A NETWORKING SYSTEM

Aik Min Choong, Pamela Lawhead

University of Mississippi

This paper presents the results of a study that explored multi-platform communication between LEGO Mindstorms' RCX and NXT microcontrollers and a computer. The primary purpose of the study was to develop a method that allows the two microcontrollers (RCX and NXT) to communicate with each other and with a single controlling computer. The RCX can communicate with other RCX controllers and with a computer using infrared, while the NXT communicates with other NXT controllers and a computer using either Bluetooth or USB. The two microcontrollers thus share no common communication technology, so one of the two methods had to be chosen and the other controller equipped with a device for it. Because Bluetooth is an emerging technology with known complications, infrared was chosen as the communication method for this study. Our goal was to equip the NXT with an infrared transceiver and provide the necessary API (Java source code) for the NXT so it can communicate with the RCX and the computer using infrared. This allows for a networked environment using all three platforms. Once infrared was available for the NXT, a universal remote control was also programmed to control both the RCX and the NXT, allowing commands to be sent via the remote control to both controllers.
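
For a flavor of what speaking the RCX's language entails, the following sketch frames a packet the way the publicly documented RCX IR protocol expects (preamble, byte/complement pairs, trailing checksum); it is illustrative only, and the study's actual NXT-side API is Java, not Python:

    def frame_rcx_packet(opcode, data=b""):
        """Frame an opcode and payload per the documented RCX IR protocol."""
        body = bytes([opcode]) + bytes(data)
        out = bytearray([0x55, 0xFF, 0x00])        # packet preamble
        for b in body:
            out += bytes([b, b ^ 0xFF])            # each byte, then its complement
        checksum = sum(body) & 0xFF
        out += bytes([checksum, checksum ^ 0xFF])  # checksum pair
        return bytes(out)

    # 0x10 is the RCX "alive" ping opcode.
    print(frame_rcx_packet(0x10).hex())            # 55ff0010ef10ef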

THURSDAY AFTERNOON

Dogwood

O9.07

1:30 CHARACTERIZING ELECTROMAGNETIC DATA SPECTRA USING TRANSFORMS AND WAVELETS

Andrew Harrell

Geotechnical Laboratory, Engineer Research and Development Center

This talk will describe an application of mathematical wavelet theory to target detection by analyzing geophysical underground electromagnetic induction data. It uses wavelet analysis for the spatial series analysis of a sampling instrument in which the sample rate/target scale is changing. Analysis of the data is carried out using spectrograms (short-distance Fast Fourier Transforms, FFTs) in preference to periodograms, and continuous wavelet transforms (CWTs) where possible. Both the in-phase and quadrature data of the instrument are considered. Tables of the percentage of spectral power in the data are computed using several different approaches to thresholding the first n Fourier coefficients. This is proposed as a measure of effectiveness for comparing the wavelet and Fourier transform approaches to analyzing the data.
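
A small sketch of the percent-of-spectral-power tabulation, assuming power is measured as squared FFT magnitude (the synthetic signal and the choices of n are invented):

    import numpy as np

    def percent_power_first_n(signal, n_values=(4, 8, 16, 32)):
        """Fraction of total spectral power captured by the first n coefficients."""
        coeffs = np.fft.rfft(signal - np.mean(signal))
        power = np.abs(coeffs) ** 2
        total = power.sum()
        return {n: float(power[:n].sum() / total) for n in n_values}

    x = np.linspace(0, 1, 512)
    signal = np.sin(2 * np.pi * 5 * x) + 0.2 * np.random.randn(512)
    for n, pct in percent_power_first_n(signal).items():
        print("first %2d coefficients: %5.1f%% of power" % (n, 100 * pct))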

O9.08

2:00 A SOFTWARE TOOL DEVELOPED FOR PATTERN RECOGNITION AND DATA MINING APPLICATIONS

Abdullah Faruque

Southern Polytechnic State University

This paper describes the development and implementation of PRT (Pattern Recognition Tool), a software tool for various pattern recognition and data mining applications. The tool can be used by researchers to select the best pattern recognition techniques for a particular data set, and by educators in teaching pattern recognition and neural networks courses. Its development takes advantage of the high-performance computational and visualization routines of the MATLAB programming environment. PRT provides an integrated environment for data analysis, data visualization, and pattern recognition. The data analysis component of PRT includes principal component analysis, Fisher and variance weight calculations, and feature selection. The data visualization tool permits visual assessment of the data patterns and their relationships and allows the user to identify outliers in the data set. Several classification methods have been implemented in PRT using both neural network and statistical pattern recognition techniques. The neural network methods include the Back-Propagation Network (BPN) and the Radial Basis Function (RBF) neural network. The statistical pattern recognition component of PRT includes Linear Discriminant Analysis (LDA), Quadratic Discriminant Analysis (QDA), Regularized Discriminant Analysis (RDA), Soft Independent Modeling of Class Analogy (SIMCA), and Discriminant Analysis with Shrunken Covariance (DASCO). PRT provides two ways to estimate the relative performance of different pattern recognition techniques applied to a particular data set by calculating the classification error rate. The PRT software tool has been used successfully to classify different classes of remote sensing and gas chromatographic spectral data.
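
PRT itself is MATLAB; as a rough Python analogue of the error-rate comparison it performs, using two of the same statistical classifiers on a stand-in data set:

    from sklearn.datasets import load_iris
    from sklearn.discriminant_analysis import (LinearDiscriminantAnalysis,
                                               QuadraticDiscriminantAnalysis)
    from sklearn.model_selection import cross_val_score

    X, y = load_iris(return_X_y=True)
    for name, clf in [("LDA", LinearDiscriminantAnalysis()),
                      ("QDA", QuadraticDiscriminantAnalysis())]:
        acc = cross_val_score(clf, X, y, cv=5).mean()
        print("%s cross-validated error rate: %.3f" % (name, 1 - acc))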

O9.09

2:30 A COMPLETE DATA ANALYSIS USING MINITAB STATISTICAL SOFTWARE

Elgenaid Hamadain

School of Health Related Professions, University of Mississippi Medical Center

Minitab was originally developed in 1972 at Pennsylvania State University and is now very commonly used in colleges worldwide. It is an easy-to-use package, used interactively to provide organization, manipulation, analysis, and reporting of statistical data. Minitab accepts data in many file formats and keeps work tidy in a single organized project file. This presentation begins by illustrating how to import data from Excel or SPSS into Minitab. Then it explains how to use Minitab for power and sample size analysis to calculate the sample size needed to achieve a given power value, and shows how power curves in Minitab can be used to visualize the relationship between power and sample size. Additionally, it shows how to use Minitab to obtain random samples, generate random data, and calculate probabilities for a variety of distributions. Finally, the use of Minitab to generate descriptive statistics, run a two-independent-samples t-test, perform a paired t-test, run one-way ANOVA, perform factorial experiments, and conduct regression analysis will be elucidated.
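
Minitab's power analysis is point-and-click; as a rough scripted analogue of the same calculation (the effect size, alpha, and power values below are illustrative):

    from statsmodels.stats.power import TTestIndPower

    analysis = TTestIndPower()
    n = analysis.solve_power(effect_size=0.5, alpha=0.05, power=0.80)
    print("Sample size per group for d=0.5, alpha=0.05, power=0.80: %.1f" % n)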

6:00 Dodgen Reception and Poster Session (Please set up between 4:00 and 4:30 p.m.) Location: Grand Ballroom

P9.01

Parallel Compression on the Cell Broadband Engine with CZIP

David Thigpen, Andy Anderson

University of Mississippi

The goal of the study was to parallelize the ZLIB compression library for the Cell Broadband Engine. The basis for the parallel algorithm came from PIGZ, a parallel version of the compression algorithm written by Mark Adler for use on symmetric multiprocessors. Using Adler's code as a basis, Andy Anderson and I were able to make a parallel version of ZLIB, which we called CZIP, that compresses files in GZIP format on the Cell Broadband Engine. CZIP still stands as the only parallel compression tool built solely for the Cell Broadband Engine. During the course of the work on CZIP, we determined the smallest file size that could be parallelized using the Cell's SPEs without performance falling below that of the serial algorithm. We were also able to compare the speedup on the Cell against the serial versions across all of the SDKs released for the Cell, measuring any gains from compiler optimizations for the Cell architecture.

This work was not without its problems, which were many and varied, including errors in documentation and problems with the compiler, among other things. These will be discussed in detail at the meeting, along with the solutions we found to work around them and advice on how to avoid them in future projects.
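
CZIP targets the Cell's SPEs, but the pigz-style idea it builds on can be sketched portably: compress fixed-size chunks in parallel and concatenate the results, since a stream of gzip members is itself a valid gzip file (file names and chunk size below are placeholders):

    import gzip
    from multiprocessing import Pool

    CHUNK = 1 << 20  # 1 MiB chunks; below some size, parallelism stops paying off

    def compress_file(src, dst, workers=4):
        """pigz-style parallel gzip: each chunk becomes an independent member."""
        with open(src, "rb") as f:
            chunks = iter(lambda: f.read(CHUNK), b"")
            with Pool(workers) as pool, open(dst, "wb") as out:
                for member in pool.imap(gzip.compress, chunks):
                    out.write(member)

    if __name__ == "__main__":
        compress_file("input.dat", "input.dat.gz")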

P9.02

A MULTI-RESOLUTION MODEL FOR RENDERING TERRAIN DATA USING JAVA AND OPENGL

J. Sam Testa

University of Mississippi

The goal of this study is to implement a multi-resolution model for rendering terrain data using Java and JOGL, the OpenGL Java extension. The terrain data consist of an elevation map and an optional texture map. The terrain data are compiled into a 3-dimensional mesh, and a multi-resolution technique is then applied to the mesh. The result is rendered on-screen using Java, with the aim of providing high rendering rates regardless of the complexity of the mesh.
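
The study's renderer is Java/JOGL; the simplest form of the multi-resolution idea can be sketched in a few lines of Python, coarsening the elevation grid by a stride chosen from the viewer's distance (the distance thresholds here are invented):

    import numpy as np

    def lod_mesh(elev, distance):
        """Return a coarsened copy of the height grid based on viewer distance."""
        if distance < 100:
            stride = 1   # full resolution up close
        elif distance < 400:
            stride = 2
        else:
            stride = 4   # far away: keep every 4th sample
        return elev[::stride, ::stride]

    elev = np.random.rand(257, 257).astype(np.float32)
    for d in (50, 200, 1000):
        print("distance %4d -> grid %s" % (d, lod_mesh(elev, d).shape))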

P9.03

PERFORMANCE ANALYSIS OF OPENMP PROGRAMMING

Liang Huang, Dawn Wilkins

University of Mississippi

OpenMP (Open Multi-Processing) is an application programming interface (API) that supports shared-memory multiprocessing. It runs on multiple platforms, including Unix and Windows, using C, C++, or Fortran. In this article, we analyze the benefits of OpenMP as a parallel programming model, such as incremental parallelism, elimination of message passing, and simpler programming and compiling. Furthermore, we consider a series of programming factors that influence the performance of OpenMP programs, such as the number of fork/join steps, thread synchronization, the choice between private and shared variables, and the cache hit rate. Extensive tests of various OpenMP programs were conducted to see how, and to what degree, these factors influence the performance of parallel programs. Finally, a general OpenMP programming style is suggested for building more efficient, high-quality OpenMP programs.

P9.04

ASSESSMENT OF A NON-LINEAR OPTIMIZATION ALGORITHM FOR IMAGERY CLASSIFICATION

Henrique Momm, Greg Easson

The University of Mississippi

The need for information extraction from remotely sensed data has grown significantly in recent decades. Automated and semi-automated techniques have been developed as alternatives that minimize human interaction and thus save time and cost. However, custom information extraction techniques rely on one's ability to design procedures from scratch, either by using tools available in commercial software packages or by developing new computer programs; either way, a steep learning curve is involved. Evolutionary computation provides an alternative that reduces the overhead of developing tailored solutions, because it can solve (or partially solve) a problem without being explicitly programmed to do so, deriving the solution from experience instead. Genetic programming was integrated with standard unsupervised clustering algorithms to develop custom-tailored solutions, in the form of nonlinear functions, that spectrally separate the target spectral feature from the remaining image background. Using a Monte Carlo simulation procedure, two distinct aspects of the proposed framework were investigated: (1) the uncertainty involved in the random creation of the initial set of candidate solutions from which the algorithm begins, and (2) its sensitivity to the number of variables. Results suggested that the overall variability increased with the introduction of additional variables, despite the higher accuracy values. Conversely, the use of techniques such as population restarting significantly reduced the variability caused by the initial randomness, and its incorporation into the framework is therefore recommended.

P9.05

AN EXPERIMENT WITH CONSTANT MIGRATION OF VIRTUAL MACHINES

Anthony Nocentino, Paul Ruth

University of Mississippi

This study explores the feasibility of constantly migrating a virtual machine between nodes. A constantly migrating virtual machine is one that is almost always ready to be committed to the destination node, allowing the administrator to commit the VM almost instantaneously. This technique shows great promise for high-availability systems, allowing administrators to provision redundant systems without special hardware. We have modified the virtual machine migration process, conducted experiments, and produced results on the performance of this solution. Our current results indicate that we need to explore the migration source code further and make modifications that allow control of the resources dedicated to the migration.
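
A toy sketch of the iterative pre-copy loop that constant migration keeps running; the page counts and the dirtying model are invented for illustration:

    def precopy_rounds(total_pages=10000, dirty_per_copied=0.2, threshold=50):
        """Copy pages repeatedly until the dirty set is small enough to commit."""
        dirty = total_pages                  # round 0: everything is dirty
        rounds = 0
        while dirty > threshold:
            rounds += 1
            # While `dirty` pages are copied, the guest dirties a fraction more;
            # as long as that fraction is < 1 the dirty set shrinks geometrically.
            dirty = int(dirty * dirty_per_copied)
        print("ready to commit after %d rounds; %d pages left to stop-and-copy"
              % (rounds, dirty))

    precopy_rounds()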

P9.06

GAME PROGRAMMING UNDER MICROSOFT XNA GAME STUDIO EXPRESS

Jianqing Sun, Pamela Lawhead

Department of Computer and Information Science, University of Mississippi

XNA Game Studio Express is an integrated development environment (IDE) developed by Microsoft to enable hobbyists, independent game developers, and professional companies to develop games. It is based on the .NET Framework and provides a number of class libraries, covering areas such as 2D and 3D space, time control, first-person shooters, and matrix transformations, for designing game programs. On this platform, Microsoft Visual C# 2005 and the Xbox 360 were used as the programming language and hardware environment, respectively. A 2D plane game was designed, and grayscale 2D BMP terrain images were displayed as a 3D world on a TV. First, in the 2D game, I designed classes to represent the objects in the game and successfully controlled their behavior and their response to the player's key presses along the game timeline. Then, for the 3D terrain display, a basic terrain algorithm was used to create a triangle mesh from height data. To provide various projections, the program rotates the camera position with matrix transformations, so users feel they are in a three-dimensional world. XNA proved to be a good game programming tool.

P9.07

A NEW METRIC FOR DISTRIBUTED ALGORITHMS: ACCUMULATED FAILURE LOCALITY

Sungbum Hong

Jackson State University

One of the major roles of distributed algorithms is to reduce the impact of a crash: an algorithm that keeps the impacted area smaller than others do yields more robust distributed systems. To examine this robustness, Failure Locality, the distance between a crashed process and the farthest starved process, has been used as an important metric. A graph G(V, E) can represent a distributed system, where V is a set of vertices and E is a set of edges: each vertex v represents a process, and each edge e represents a communication channel between two processes. A subgraph G'(V', E'), where V' is the set of crashed or starved vertices and E' is the set of edges associated with them, satisfies G' ⊆ G. Some processes within a distance d of a crash may themselves crash or starve while others at the same distance keep working, and Failure Locality cannot describe how many processes at that distance are starved or working. To capture these details within a distance, we propose a new metric, Accumulated Failure Locality (AFL), which shows how many of the processes at a given distance are alive or crashed. Together with other metrics, AFL was used to evaluate three recent distributed algorithms for the dynamic channel allocation problem, and the theoretical analysis was verified through extensive simulation.
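
One way to picture tabulating AFL from these definitions: breadth-first search outward from the crashed process, then count affected versus working processes at each distance (a sketch with an invented adjacency-list graph; the paper's exact formulation may differ):

    from collections import deque

    def afl_table(adj, crashed, starved):
        """Map distance from the crash -> (affected, alive) counts at that ring."""
        dist = {crashed: 0}
        queue = deque([crashed])
        while queue:                       # plain BFS over the process graph
            u = queue.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    queue.append(v)
        table = {}
        for node, d in dist.items():
            affected, alive = table.get(d, (0, 0))
            if node == crashed or node in starved:
                table[d] = (affected + 1, alive)
            else:
                table[d] = (affected, alive + 1)
        return table

    adj = {0: [1, 2], 1: [0, 3], 2: [0, 3], 3: [1, 2]}
    print(afl_table(adj, crashed=0, starved={1}))  # {0: (1, 0), 1: (1, 1), 2: (0, 1)}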

P9.08

Mathematica as a Supplement for Teaching Beginning Calculus at Mississippi Valley State University

Raymond Williams, Lee Redmond, Sandra Course

Mississippi Valley State University

Beginning calculus at MVSU has a very high attrition rate and is considered a gatekeeper course for all STEM majors. Mathematica is a premier computer algebra system (CAS) in widespread use in the reform of mathematics and engineering courses aimed at reducing the high failure rates of students in these departments. The CAS has a variety of capabilities, ranging from simple to complex calculations and from graphing elementary functions to displaying regions in 3-space. The purpose of this presentation is to show how MVSU is using Mathematica to supplement lectures in Calculus I and Calculus II. The software is used at MVSU mainly for showing complex limits involving indeterminate forms and asymptotes and for showing relationships and patterns, especially between functions and their derivatives. The software is also used to verify complex integrals, especially those involving transcendental functions. Students expressed better understanding of concepts when graphics were used as opposed to just plain text. Statistical testing of student improvement is yet to be completed.
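
In class the examples are entered in Mathematica; as a rough free-software analogue, the same kinds of indeterminate forms and asymptotes can be checked with SymPy in Python:

    import sympy as sp

    x = sp.symbols("x")
    print(sp.limit(sp.sin(x) / x, x, 0))              # classic 0/0 form -> 1
    print(sp.limit((1 + 1 / x) ** x, x, sp.oo))       # 1^oo form -> E
    print(sp.limit(x / sp.sqrt(x**2 + 1), x, sp.oo))  # horizontal asymptote -> 1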

FRIDAY MORNING

Dogwood

8:00-9:00 Poster Session: The Math/CIS/Stats posters from the Thursday evening poster session will be displayed again this morning in the Grand Ballroom

Oral Presentations

O9.10

9:00 AN ACCURATE IMAGE MOSAICING APPROACH USING AFFINE INVARIANT CORNERS

Qiang He, Benjamin Harvey

Mississippi Valley State University

Image mosaicing constructs a single panoramic image by merging a collection of overlapping images from different viewpoints through a series of coordinate transformations, and it provides an efficient route to video detection and analysis. The image mosaicing process consists of two major steps: (1) image registration, to transform the images into one coordinate system, and (2) seam removal, to eliminate seams in the image mosaic. In practice, accurate image registration on a subpixel grid is the single most important step for the accuracy of image mosaicing. Traditional image registration is based on classic features such as Harris corners and scale-invariant feature transform (SIFT) corners, both of which are weak under affine transformations. In our research, we introduce affine invariant features to improve subpixel image registration, which considerably reduces the number of mismatched points and hence makes traditional image registration more efficient and more accurate. Affine invariant interest points are corners that are invariant under affine transformations, including scale, rotation, and translation; they are extracted from the second moment matrix through the integration and differentiation covariance matrices. Our tests are based on two sets of real video. The experimental results show that affine invariant interest points are more robust to perspective distortion and give more accurate matching than traditional Harris/SIFT corners. In addition, for the same image mosaicing problem, far fewer affine invariant points than Harris/SIFT corners are needed to obtain good mosaicing results.
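
A condensed sketch of the register-then-warp pipeline, using OpenCV's stock SIFT detector rather than the authors' affine invariant detector (file names are placeholders):

    import cv2
    import numpy as np

    def register_pair(img1, img2):
        """Estimate the homography mapping img2 onto img1 from matched corners."""
        sift = cv2.SIFT_create()
        k1, d1 = sift.detectAndCompute(img1, None)
        k2, d2 = sift.detectAndCompute(img2, None)
        matches = cv2.BFMatcher(cv2.NORM_L2, crossCheck=True).match(d2, d1)
        src = np.float32([k2[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
        dst = np.float32([k1[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
        # RANSAC discards the mismatched points the abstract is concerned with.
        H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)
        return H

    img1 = cv2.imread("frame1.png", cv2.IMREAD_GRAYSCALE)
    img2 = cv2.imread("frame2.png", cv2.IMREAD_GRAYSCALE)
    mosaic = cv2.warpPerspective(img2, register_pair(img1, img2),
                                 (img1.shape[1] * 2, img1.shape[0]))
    mosaic[:, :img1.shape[1]] = img1
    print("mosaic size:", mosaic.shape)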

O9.11

9:30 IMPLEMENTATION OF CHINESE SLIDING BLOCK PUZZLE USING BALANCED BINARY SEARCH TREE

Xiaofei Nan

University of Mississippi

The Chinese Sliding Block Puzzle is a traditional game that has long fascinated mathematics aficionados. It is similar to the 8-tile puzzle, except for the number of blocks and their varied shapes. A Chinese Sliding Block Puzzle project was designed that can solve all opening states of the game: it gives the shortest solution if one exists, or reports that no solution exists. First, an effective indexing method was designed to represent the blocks and, from them, the states of the board. Second, moves are generated by shifting the blank block instead of finding the potentially movable blocks. Then, a breadth-first search strategy is used to traverse the whole game tree, which is feasible given the limited computational complexity of this game. To eliminate repeated nodes, an AVL tree (a balanced binary search tree) is introduced; its use shrinks the time complexity of lookup and insertion operations from O(n) to O(log n). In addition, a fully operational user interface was built as a Win32 graphical application that implements user commands such as selecting opening states, changing the background, and dynamically demonstrating the search results. This implementation is a good example in Artificial Intelligence.
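
A skeletal sketch of the search core: breadth-first search with duplicate-state elimination. Python's built-in set (a hash table) stands in here for the project's AVL tree, and the state encoding and move generator are stubbed with a toy example:

    from collections import deque

    def solve(start, is_goal, moves):
        """Breadth-first search returning the shortest move sequence, or None."""
        visited = {start}                      # duplicate-state elimination
        queue = deque([(start, [])])
        while queue:
            state, path = queue.popleft()
            if is_goal(state):
                return path
            for move, nxt in moves(state):     # e.g. generated by sliding the blank
                if nxt not in visited:
                    visited.add(nxt)
                    queue.append((nxt, path + [move]))
        return None                            # exhausted: no solution exists

    # Toy demo: reach 3 from 0 using +1/+2 steps -> shortest path ['+1', '+2']
    print(solve(0, lambda s: s == 3, lambda s: [("+1", s + 1), ("+2", s + 2)]))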

O9.12

10:00 Toward Automatic Parallelization of Spatial Computation for Computing Clusters

Baoqiang Yan, Philip Rhodes

University of Mississippi

In recent years, cluster computing resources have become freely available to a wide variety of scientific researchers. However, scientists are not necessarily skilled in writing efficient parallel code, and the processing of spatial datasets is one area in which this problem is particularly acute. To address this issue, we are developing an API that helps the scientific user easily write code that performs I/O efficiently and either performs efficient inter-node communication or avoids it entirely. In previous work we devised a method of distributing data among compute nodes that takes into account the manner in which the data are stored on disk and also aggregates cluster I/O. That work was done in the context of ray casting, but the view direction was constrained to one of the major axes. This presentation will describe how we have extended our previous work to allow arbitrary rotation of the view direction around a major axis, meaning that spatial dependencies are present along two axes for a given view direction, making the problem much harder.

O9.13

10:30 SIMULATION-BASED ANALYSIS OF THE BANDWIDTH USAGE AND NUMBER OF HOPS PER RECEIVER FOR MOBILE AD HOC NETWORK MULTICAST ROUTING PROTOCOLS

Natarajan Meghanathan, Kasi Mudunuri

Jackson State University

The objective of this work is to explore the tradeoff between the efficiency of bandwidth usage and the number of hops per receiver for multicast trees in mobile ad hoc networks. In this context, we implement and compare the performance of the bandwidth-efficient multicast routing protocol (BEMRP) and the multicast ad hoc on-demand distance vector (MAODV) routing protocol through extensive simulations. The simulations were conducted for diverse conditions of network density (50 and 100 nodes in a 760m x 760m network), multicast group size (2 to 24), node mobility (5 to 50 m/s), and offered traffic load (512 bytes per packet and 4 packets/second). The performance metrics studied include the hop count per path, the number of links in the multicast tree, the number of multicast tree transitions, the energy consumed per data packet, and the energy consumed per node. Simulation results indicate that the average number of hops per receiver and the energy consumed per packet incurred with MAODV are less than those incurred with BEMRP. On the other hand, the average number of edges per tree and the energy consumed per node are less for BEMRP than for MAODV. We also observed that BEMRP trees, which have fewer edges, are relatively more stable than MAODV trees, which have more edges. These results indicate that efficiency in bandwidth usage and the number of hops per receiver in a multicast tree cannot be optimized simultaneously.

11:00 Presentation of awards for Best Student and Faculty Oral Presentations, Best Posters, and Best HPC Presentation

11:30-12:30 All Math/CS/Stats division participants are welcome to attend the following open session:

O9.14

MISSISSIPPI CENTER FOR SUPERCOMPUTING RESEARCH (MCSR) USER ADVISORY GROUP MEETING, POSTER SESSION, AND SPECIAL SUBSESSION ON SUPERCOMPUTING

David Roach

University of Mississippi

The Mississippi Center for Supercomputing Research was established in 1987 by the Mississippi Legislature and the Institutions of Higher Learning (IHL) to provide high-performance computing (HPC) support for research and instruction at all state universities. The Mississippi Supercomputer User Advisory Group (MSUAG) was established by the IHL Research Consortium to provide user input and advice to MCSR management and technical staff on policies and procedures for the Center's operations. It includes member representatives from all IHL institutions. The Advisory Group will meet at this MAS conference; Mr. David G. Roach, Director of the MCSR, will conduct the meeting. The agenda includes an update on MCSR HPC facilities and services, introduction of new MCSR staff members, and site reports and ongoing research updates by MSUAG representatives. A poster session showcasing research projects that utilize MCSR facilities and services will follow the Advisory Group meeting.

A Special HPC Subsession of the Mathematics, Computer Science, and Statistics Division, sponsored by the MCSR, will also be held to serve as a forum on supercomputing in which faculty and graduate student researchers will have the opportunity to describe their research projects involving HPC, Internet2, Grid Computing, Visualization, Network Security, Computer Systems Administration, and the use of MCSR resources. IHL faculty and graduate students with an interest in HPC and/or MCSR facilities and services are also invited to attend and participate.
