Making biology teaching more "graphic".
Since my students come from a wide range of academic backgrounds, I do not assume they all have a firm understanding of graphs. Thus, I introduce using graphs by describing how we can graphically illustrate a simple, linear relationship between two variables. In statistics, a correlation is a measure of how closely two variables relate to one another in a linear fashion (Snedecor & Cochran, 1980). While a correlation addresses only how one variable relates to another and does not imply a cause-and-effect relationship, a regression assumes that a cause-and-effect relationship exists (Gould & Gould, 2002). My proposed graphs are much simpler than those presented in a statistics course, and the goal in using them is not to teach statistical methods but merely to help students think about relationships. Since these graphs include minimal labeling and reveal only general information about a relationship, and to avoid conflicts with guidelines for graphing statistical correlations and regressions, I will refer to these new graphs as "relational graphs." The novelty of relational graphs was noted and supported by colleagues in the Department of Statistical Science at Baylor University (T. L. Bratcher and D. L. Young, Professors of Statistical Science, pers. comm.).
Before the instructor demonstrates how to draw a relational graph, the students may need a quick review of variables: a variable is anything whose value can change, as opposed to a constant, whose value does not. Most students in biology quickly pick up on the idea that everything seems to be related to everything and that virtually anything is a variable. When relating two variables, the independent variable is thought to have an effect on a second variable, which we call the dependent variable. Stated another way, the value of the dependent variable is thought to depend on, or be influenced by, the value of the independent variable. When drawing the relational graph, we follow standard protocol by placing the independent variable on the x-axis and the dependent variable on the y-axis (Neter et al., 1983).
As an example of graphing a correlation between two related variables, we consider a hypothetical experiment investigating the effect of light intensity on plant growth rate. First, I ask the students to identify the independent and dependent variables. In this case, light intensity is the independent variable because we expect it to have an effect on how fast a plant might grow, assuming that all other growth factors are optimal. The hypothetical experimenter plants 12 "identical" seeds (emphasizing the importance of replication and control in the scientific method) and allows them to germinate. All sprouted plants receive the same treatment except for the amount of light shining on different groups of plants. The groups of plants are as follows: three plants are kept in the dark (as controls), while three grow under low light intensity, three under medium light intensity, and three under bright light. After 1 week, the height of each plant is measured in millimeters; dividing by 7 yields growth data in mm/day. Hypothetical data are presented in Table 1.
Given this data set, some students may assume that the most important aspect of the experiment is to memorize numbers. If so, they have overlooked the overriding concept revealed by the data: brighter light seems to cause faster plant growth. The students soon learn that these data suggest a positive correlation between light intensity and plant growth. The crucial initial step is to label the graph's axes properly. The independent and dependent variables should have their lowest values nearest the intersection of the two axes, and the values should increase as we move horizontally along the x-axis and vertically along the y-axis. We then plot the data as points (Figure 1).
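Instructors who want to verify the pattern in Table 1 numerically can do so with a short script. The following is a minimal sketch in Python (the article itself uses no code, and the variable names are my own) computing the Pearson correlation coefficient for the hypothetical data:

```python
# Table 1 data: number of lights above each plant (x)
# and measured growth rate in mm/day (y)
lights = [0, 0, 0, 1, 1, 1, 2, 2, 2, 3, 3, 3]
growth = [0.2, 0.3, 0.3, 0.8, 1.0, 0.8, 1.5, 1.3, 1.4, 1.8, 1.9, 2.1]

n = len(lights)
mean_x = sum(lights) / n
mean_y = sum(growth) / n

# Pearson's r measures the strength and direction of the
# linear relationship between the two variables
sxy = sum((x - mean_x) * (y - mean_y) for x, y in zip(lights, growth))
sxx = sum((x - mean_x) ** 2 for x in lights)
syy = sum((y - mean_y) ** 2 for y in growth)
r = sxy / (sxx * syy) ** 0.5

print(f"r = {r:.3f}")  # close to +1: a strong positive correlation
```

A value of r near +1 confirms what the students should see by eye: the points rise together, suggesting a strong positive correlation.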
The next step is to describe a line, called the "line of best fit," drawn through the data. When analyzing real data, this line is computed precisely using a statistical method, most often least-squares regression. At this point, the students need not worry about how we derive the line, but they should interpret the line as the best predictor of how the independent variable affects the dependent variable. We first read along the x-axis to find a particular value of interest for the independent variable, and then move vertically from that point to reach the line of best fit. Then, we read horizontally from the line of best fit to the y-axis to see the expected value for the dependent variable (follow the dotted line in Figure 1).
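For readers curious about where the line of best fit comes from, the standard least-squares slope and intercept for the Table 1 data can be computed in a few lines. This is a sketch in Python, not part of the article's classroom method, and the `predict` helper is a name I invented to mimic reading the graph:

```python
lights = [0, 0, 0, 1, 1, 1, 2, 2, 2, 3, 3, 3]
growth = [0.2, 0.3, 0.3, 0.8, 1.0, 0.8, 1.5, 1.3, 1.4, 1.8, 1.9, 2.1]

n = len(lights)
mean_x = sum(lights) / n
mean_y = sum(growth) / n

# least-squares slope and intercept define the line of best fit
slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(lights, growth))
         / sum((x - mean_x) ** 2 for x in lights))
intercept = mean_y - slope * mean_x

def predict(x):
    """Mimic reading the graph: from x up to the line, across to y."""
    return intercept + slope * x

print(f"expected growth under 2 lights: {predict(2):.2f} mm/day")
```

The prediction step mirrors the dotted-line procedure in Figure 1: choose an x value, move up to the line, then across to the y-axis.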
[FIGURE 1 OMITTED]
When interpreting a relational graph, the students must decide whether the correlation between the two variables is positive or negative (inverse). We say that a positive correlation exists when an increase in the value of x results in an increase in the value of y. We can also say that a positive correlation exists when a decrease in the value of x results in a decrease in the value of y. The point is that for a positive correlation, changing the value of x results in a change in the value of y in the same direction. On the basis of our experiment, we conclude that a positive correlation exists between light intensity and growth rate. We can state this relationship in two ways: (1) light intensity has a positive effect on plant growth rate or (2) plant growth rate is positively affected by light intensity. Therefore, we conclude that increased light intensity increases growth rate or that decreased light intensity decreases growth rate. This might be a good time to point out that sometimes graphed data may suggest a very strong correlation when, in reality, the two variables are not closely related. An example is a data set that shows a strong positive correlation between the number of storks in Germany and the number of human births in that country over a given time (Wirth, 2003).
After the students understand a positive correlation, we move on to describe a negative correlation. Although the word "negative" often has a pessimistic connotation, when reading graphs, we say that a negative (or inverse) correlation exists when an increase in the value of x results in a decrease in the value of y. Likewise, a negative correlation exists when a decrease in the value of x results in an increase in the value of y. Thus, a negative correlation occurs when the value of the dependent variable moves in a direction opposite to that of the independent variable. As an example, we consider the effect of insulin on blood glucose concentration, where the concentration of glucose is written with brackets: [glucose]. Insulin is a hormone that causes most cells in the body to absorb glucose from the blood and use it as a source of energy. Consequently, when insulin levels in the blood increase, there is a corresponding decrease in blood [glucose]. We show this relationship in Figure 2B and say that insulin has a negative (inverse) effect on the amount of glucose in the blood, or we can say that blood [glucose] is negatively (inversely) correlated to insulin levels.
For depicting general relationships between variables, it is not necessary to plot precise data points on the graph. Instead, a graph with only two labels can effectively convey general information about the relationship between any two variables in question, as shown in Figure 2. Nevertheless, I usually must remind students a few times in the beginning that we assume that the axis values are increasing from left to right on the x-axis and from bottom to top on the y-axis.
[FIGURE 2 OMITTED]
While many students readily comprehend the correlation that a simple relational graph might depict, others struggle with how to label the axes properly to show an accurate relationship. For example, some students who understand that insulin decreases blood [glucose] may draw the relationship on a graph incorrectly by labeling blood [glucose] on the x-axis and insulin on the y-axis, as shown in Figure 3A. Adding to the confusion is the fact that blood [glucose] levels indeed affect how much insulin the pancreas releases, but this correlation is positive (see Figure 3B). The students must learn that the axis labels determine the question being addressed and that correct labeling is thus crucial for illustrating a proper correlation between related variables. For this reason, on the first few graphs, I may include the phrase "in response" as part of the y-axis label to reinforce the idea that a given value for the dependent variable occurs in response to a given value for the independent variable.
After the students learn how to illustrate positive and negative correlations, the next step is to demonstrate how a graph can illustrate a relationship through time. This is important because biology teachers must frequently expound on processes related to homeostasis and its disruption, and these processes are related to mechanisms that involve positive and negative feedback. So, just when the students feel they have a good understanding of positive and negative correlations, they now hear explanations of phenomena labeled with the same descriptors used earlier: positive and negative. Not surprisingly, blank stares and wrinkled brows often accompany this transition in topics. At this point, I introduce the concept of feedback and state that the body regulates blood [glucose] through negative feedback. Our definition of negative feedback is simple: "Reaction negates (counteracts) stress." I prefer to use the word "negate" because of its shared root with "negative." A stress is any movement in the value of a variable away from the optimal value. If the body's reaction to the stress causes the variable's value to move back toward the optimal value, we say that the reaction negated the stress. We graphically illustrate the negative feedback control of blood [glucose] in Figure 4.
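The "reaction negates stress" definition can be made concrete with a toy simulation. The following Python sketch is purely illustrative: the set point, starting value, and response strength are invented numbers, not physiological data, and the loop is a deliberately crude model of the insulin response:

```python
set_point = 90.0   # hypothetical optimal blood [glucose], mg/dL
glucose = 140.0    # stressed value, e.g. shortly after a meal
response = 0.3     # invented strength of the insulin reaction

for step in range(60):
    stress = glucose - set_point        # deviation from the optimum
    reaction = response * stress        # reaction scales with stress
    glucose -= reaction                 # the reaction negates the stress

print(f"blood [glucose] after feedback: {glucose:.1f} mg/dL")
```

Each pass through the loop shrinks the deviation, so the variable is driven back toward the optimal value, which is exactly what we mean by negative feedback.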
Not only can a single graph show how one variable might affect another, but stacking additional graphs onto the first can help the students integrate information and tie concepts together. In these cases, the dependent variable in one graph becomes the independent variable for the next graph. When introducing this concept, I ask the students to draw two separate graphs: one showing the effect of blood [glucose] on insulin secretion, and the other showing the effect of insulin levels on blood [glucose]. We then "stack" these two graphs. Graph 1 shows the positive effect that blood [glucose] has on the amount of insulin secreted in response, and graph 2, using the dependent variable from graph 1 as the new independent variable, shows the amount of insulin secreted having a negative effect on blood [glucose] (see Figure 5).
[FIGURE 3 OMITTED]
After the students become comfortable with stacking two graphs, we add a few more related variables, emphasizing that real-world relationships rarely involve only two variables. For example, I ask the students to predict how the duration of exercise might affect one's blood [glucose]. They conclude that exercising causes a decrease in blood [glucose]. We expound on this idea by saying that working muscles have a greater demand for energy than resting muscles and that glucose is a major source of that energy. Consequently, muscle activity level has a positive effect on the muscle's rate of glucose absorption. Based on this information, we can integrate six graphs using the following variables: (1) muscle activity, (2) muscle energy requirements, (3) rate of glucose absorption by muscles, (4) blood [glucose] in response to glucose absorption, (5) insulin levels in response to blood [glucose], and (6) blood [glucose] in response to insulin levels. The stacked graphs appear in Figure 6.
When studying the integrated information in Figure 6, it is possible to relate the independent variable in the bottom graph directly to the dependent variable in any of the other graphs. For example, we might wonder how the duration of exercise affects the amount of insulin secreted, which relates information in graph 1 to information in graph 5. At first glance, one might think that since these two graphs both show positive correlations, the duration of exercise has a positive effect on the amount of insulin secreted, when in fact the duration of exercise has a negative effect on the amount of insulin secreted. So, how do we arrive at that fact using these graphs? In the same way that a person must use individual steps when climbing to the top of a staircase, we must use information in the lower graphs to make our way to the top graph. I recommend the following approach.
1. Move to the right end of the x-axis on graph 1 (for longer duration of exercise) and then move up to the line of best fit. The resulting point correlates to a large amount of muscle energy required (the dependent variable).
2. Staying with the large value for muscle energy from graph 1, we look to the right on graph 2 (muscle energy now being the independent variable), and then move up to the line of best fit. The resulting point correlates to a high level of glucose uptake by the muscles.
3. Staying with the high level of glucose uptake from graph 2, we look to the right end of graph 3, then move up to the line of best fit. Thus, we see that a high rate of glucose uptake by muscles results in a lower blood [glucose].
4. Staying with the lower blood [glucose] from graph 3, we look to the left end of graph 4 and then move up to the line of best fit, seeing that a lower blood [glucose] causes a decrease in insulin secretion.
[FIGURE 4 OMITTED]
[FIGURE 5 OMITTED]
[FIGURE 6 OMITTED]
Considering all the data presented in the stacked graphs, we summarize as follows: increasing muscle activity increases energy demand in the muscles, which causes the muscles to absorb more glucose from the blood, which causes a decrease in blood [glucose], which finally causes the pancreas to secrete less insulin. The answer to our question is that increasing muscle activity ultimately causes a decrease in insulin secretion.
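The stair-step reasoning above amounts to multiplying the signs of the individual correlations along the chain. A short Python sketch (the relationship labels are paraphrased from steps 1 through 4; the sign-product shortcut is my own framing, not the article's) makes the bookkeeping explicit:

```python
# sign of each correlation climbed in steps 1-4:
# +1 for a positive correlation, -1 for a negative (inverse) one
chain = [
    ("exercise duration -> muscle energy demand", +1),
    ("muscle energy demand -> glucose uptake",    +1),
    ("glucose uptake -> blood [glucose]",         -1),
    ("blood [glucose] -> insulin secretion",      +1),
]

net = 1
for relationship, sign in chain:
    net *= sign  # one negative link flips the overall direction

print("net effect of exercise on insulin secretion:",
      "negative" if net < 0 else "positive")
```

Because the chain contains exactly one negative link, the product is negative, matching the conclusion that increasing muscle activity ultimately decreases insulin secretion.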
With practice, the students become more proficient in drawing, stacking, and reading these simple graphs to show multiple relationships. While simple relational graphs have been very effective in showing physiological relationships in my human anatomy and physiology classes, they have also been beneficial when showing relationships in other areas of biology, especially ecology. For example, after covering an aquatic ecology module in an introductory biology class, students can illustrate the positive effect that nutrient runoff into a lake has on phytoplankton production, and then relate the phytoplankton production to zooplankton production and, finally, to fish production in the lake. Considering the many variables discussed in the biology classroom, the possibilities for applying these relational graphs are immense. I believe that the reason why these relational graphs have worked so well in the classroom is that they reinforce in students' minds that everything we discuss in biology really is related to everything else.
Gould, J.L. & Gould, G.F. (2002). BioStats Basics: A Student Handbook. New York, NY: W.H. Freeman.
Neter, J., Wasserman, W. & Kutner, M.H. (1983). Applied Linear Regression Models. Homewood, IL: Richard D. Irwin.
Snedecor, G.W. & Cochran, W.G. (1980). Statistical Methods. Ames, IA: Iowa State University Press.
Wirth, S. (2003). King Kong, storks, and birth rates. Teaching Statistics, 25, 29-32.
MARK F. TAYLOR is Associate Professor of Biology at Baylor University, One Bear Place, Box 97388, Waco, TX 76798; e-mail: email@example.com
Table 1. Hypothetical data for light intensity and plant growth.

Plant no.   Number of Lights   Plant Growth Rate
            above Plant        (mm/day)
 1          0                  0.2
 2          0                  0.3
 3          0                  0.3
 4          1                  0.8
 5          1                  1.0
 6          1                  0.8
 7          2                  1.5
 8          2                  1.3
 9          2                  1.4
10          3                  1.8
11          3                  1.9
12          3                  2.1
Author: Taylor, Mark F.
Publication: The American Biology Teacher
Date: Nov 1, 2010