Earnings and Ratings at Google Answers

I. INTRODUCTION

Most research on Internet transactions considers online markets as forums to facilitate the sale of physical goods (Ellison and Fisher Ellison 2009; Goolsbee and Chevalier 2003). Sometimes the Internet is analyzed as a market for matching workers and firms for an ordinary real-world employment relationship (Kuhn 2004). But certain Web sites can also supplement or even replace in-person employment relationships. This is the approach at Google Answers, a web-based service that facilitates paid matches between "answerers" (who have answers or research skills) and "askers" (who offer payment for answers to their respective questions).

I analyze all questions and answers from the inception of the Google Answers service through November 2003, and I find notable trends in answerer behavior: More experienced answerers provide answers with the characteristics askers most value, receiving higher ratings as a result. Answerer earnings increase with experience, consistent with learning on the job. Answerers who focus on particular question categories provide answers of higher quality but earn lower pay per hour (perhaps reflecting a lack of versatility). Answers provided during the business day receive higher payments per hour (a compensating differential for working when outside options are most attractive), but more experienced answerers tend to forego these opportunities.

II. LITERATURE AND CONTEXT

Google Answers is distinctive in that its entire service and entire employment relationship occur online. (In contrast, most online sales processes anticipate offline functions such as packing and sending merchandise.) In some respects, this online-only environment eases analysis: A researcher may see the answerer's entire service, letting the researcher fully assess quality. (In contrast, an eBay seller's quality generally is unobservable or only partially observable to a researcher.) Nonetheless, Google Answers does not provide all data researchers might seek. For example, although Google Answers receives answerers' resumes and geographic locations as part of the application process, Google Answers does not share these data with the general public. Chen, Ho, and Kim (2008) address these constraints through field experiments that pay Google Answers answerers to address questions posed by the authors, allowing measurement of the effects of varying price and gratuity parameters on answerers' responses.

Beyond Chen, Ho, and Kim, others have also examined Google Answers. Rafaeli, Raban, and Ravid (2005) present summary statistics of 2002-2004 answers. Regner (2005) finds that social preferences and reputation influence askers' choice to provide optional gratuities to answerers. Adamic et al. (2008) study answer quality and reputation at the competing (though unpaid) service Yahoo! Answers.

III. METHODOLOGY AND DATA SET

All data for my analysis comes from the Google Answers Web site, http://answers.google.com, as it stood in November 2003. I wrote software to extract questions, answers, and profiles from the Answers site, forming a database of more than 40,000 questions and answers. With only a few exceptions, (1) I observe all Google Answers questions and answers posted through November 2003. (2)

For each question asked, I observe the question itself (text and title), the question's categorization within Google Answers' taxonomy, the time at which the question was asked, the payment amount offered by asker to answerer, and the asker's username. For answered questions, I observe the time at which the question was answered, the answer (including length in characters, and number of included URLs), and the answerer's username. When the asker rated the answer, I observe the rating; when the asker offered a gratuity to the answerer, I observe the amount of the gratuity.
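
For concreteness, one observation can be represented as follows. This is a minimal Python sketch; the field names are my own labels for the quantities described above, not an official Google Answers schema.

    from dataclasses import dataclass
    from datetime import datetime
    from typing import Optional

    @dataclass
    class QuestionRecord:
        # Observed for every question
        question_id: str
        title: str
        text: str
        category: str                       # placement in Google Answers' taxonomy
        asked_at: datetime
        price: float                        # payment offered by the asker ($2-$200)
        asker: str                          # asker's username
        # Observed only for answered questions
        answered_at: Optional[datetime] = None
        answer_chars: Optional[int] = None  # answer length in characters
        answer_urls: Optional[int] = None   # number of URL references in the answer
        answerer: Optional[str] = None      # answerer's username
        rating: Optional[float] = None      # 1 to 5, half points, if the asker rated
        gratuity: Optional[float] = None    # optional tip amount, if offered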

Google Answers allows an answerer to "lock" a question--obtaining the temporary exclusive right to answer it for the following 4-8 hours (depending on question price). However, I do not observe the time when an answerer locked a question. (3)

Occasionally an asker is dissatisfied with an answer and requests a refund from Google. If Google staff deem an answer unacceptably poor under Google Answers rules, the payment to the answerer may be reversed. I do not observe the disposition of refunded questions, but I do observe the total number of refunded answers submitted by each answerer.

Google Answers receives two kinds of payments for its efforts in facilitating matches between askers and answerers. First, Google Answers receives a $0.50 listing fee for each question, whether answered or not. Second, Google Answers receives a 25% commission on the answer price of each answered question. However, Google Answers takes no commission on gratuities.
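
As a worked example of this fee structure (a sketch of the arithmetic described above, not Google's actual billing logic):

    def split_payment(price, answered, gratuity=0.0):
        """Return (revenue to Google, revenue to the answerer) for one question."""
        listing_fee = 0.50                     # charged whether or not the question is answered
        commission = 0.25 * price if answered else 0.0
        google = listing_fee + commission
        answerer = (0.75 * price + gratuity) if answered else 0.0   # no commission on tips
        return google, answerer

    # A $20 question that is answered and tipped $5: Google receives
    # 0.50 + 5.00 = $5.50; the answerer receives 15.00 + 5.00 = $20.00.
    print(split_payment(20.0, answered=True, gratuity=5.0))   # (5.5, 20.0)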

Google Answers questions may range in price from $2 to $200.

IV. SUMMARY STATISTICS

Tables 1 and 2 and Figures 1 through 6 offer selected summary statistics to give a general sense of this little-studied market. More than 78% of questions have a value of $20 or less, but there are notable clumps of questions at the focal points of $50, $100, $150, and $200. Answerer earnings include a few outliers, including one answerer who earned some $17,000 from Google Answers for providing more than 900 answers. Answers tend to be provided quickly, with half of answered questions answered within 3 hours. Ratings are clustered at high values, with ratings below 4 assigned to less than 3% of rated answers.

V. WHAT DO ASKERS VALUE?

Available data offer two distinct measurements of answer quality as perceived by askers. First, some askers choose to rate the answers they receive, providing numeric assessments of subjective answer quality (values of 1 to 5, with half-points permitted). Second, some askers offer gratuities to their answerers--additional payments in no way required by Google Answers rules, for which askers receive no direct benefit. (4)

[FIGURE 1 OMITTED]

[FIGURE 2 OMITTED]

[FIGURE 3 OMITTED]

In modeling what characteristics askers value in answers, I use three objective measures of answer characteristics likely of interest: answer length in characters, number of URL references in answer, and time in minutes between asking a question and receiving an answer. (5)
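
The following sketch shows how these three measures can be computed once the extracted questions and answers are assembled into a data frame. The file name and column names (answers.csv, answer_text, asked_at, answered_at) are illustrative assumptions, not the paper's actual code.

    import re
    import pandas as pd

    df = pd.read_csv("answers.csv", parse_dates=["asked_at", "answered_at"])

    # Answer length in characters
    df["answer_chars"] = df["answer_text"].fillna("").str.len()

    # Number of URL references in the answer
    df["answer_urls"] = df["answer_text"].fillna("").apply(
        lambda text: len(re.findall(r"https?://\S+", text)))

    # Minutes between asking and answering (NaN for unanswered questions)
    df["minutes_to_answer"] = (
        df["answered_at"] - df["asked_at"]).dt.total_seconds() / 60.0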

Regressions of rating and of gratuity on length and/or URL count yield insight into askers' preferences. In multiple regression specifications, (6) answer length has a statistically significant positive coefficient when predicting rating. This finding suggests that whatever weight askers might place on brevity, the value of a concise answer is not sufficient to dominate the shortfalls of incomplete answers. Although the effect of length on ratings is statistically significantly different from zero, the effect is small: An additional 1,000 characters of response increases the likelihood of receiving a rating of 4 or higher by 0.08% (Table 3, column (4)). This small effect may result from lack of granularity in ratings; per Table 2, 97% of answers receive ratings of 4 or higher. But a longer response does not yield a statistically significant increase in the likelihood of a rating of 4.5 or higher (Table 3, column (7)). The effect of length on rating is largest, but still modest, for answers achieving the rating of 5 (just 1.4% of answers); an additional 1,000 characters of response increases the likelihood of a 5 by 0.10% (Table 3, column (8)). As to gratuities, the effect of answer length is more pronounced: On average, an additional 1,000 characters of answer length is associated with approximately $0.12 of additional gratuity.
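
A rough rendering of such specifications--in the spirit of the probits of Table 3, columns (4)-(6), and the gratuity regressions of columns (9)-(11)--might look as follows, continuing the data frame sketched above. This is a sketch using statsmodels; the paper's exact samples and controls may differ.

    import statsmodels.api as sm

    rated = df.dropna(subset=["rating"]).copy()
    rated["rating_ge4"] = (rated["rating"] >= 4).astype(int)

    # Probit of an indicator for rating >= 4 on answer characteristics,
    # reported as marginal effects so coefficients read as probability changes
    X = sm.add_constant(rated[["answer_chars", "answer_urls"]])
    probit_fit = sm.Probit(rated["rating_ge4"], X).fit(disp=False)
    print(probit_fit.get_margeff().summary())

    # Gratuity regressions are OLS; here answers without a gratuity count as $0,
    # one possible convention
    answered = df.dropna(subset=["answered_at"]).copy()
    Xg = sm.add_constant(answered[["answer_chars", "answer_urls"]])
    ols_fit = sm.OLS(answered["gratuity"].fillna(0.0), Xg).fit()
    print(ols_fit.params)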

[FIGURE 4 OMITTED]

[FIGURE 5 OMITTED]

The inclusion of URL references also garners a positive response. In regressions of answer rating, the number of URL references is insignificant when answer length is also included as a regressor (Table 3, column (6)). But this result seems to reflect the high correlation between answer length and URL references--not surprising because many long answers earn their length via extended quotes from referenced URLs. When answer length is excluded from a regression predicting ratings, URL reference count takes a significant positive coefficient, although here too the economic significance is limited: One additional reference yields a 0.03% increase in the likelihood of a rating of 4 or higher (Table 3, column (5)). As to gratuities, URL references play a larger role: One additional reference is associated with a $0.015 increase in gratuity (Table 3, column (11)).

The first-order effect of time between asking a question and receiving an answer is negative: an asker who waited longer for an answer is less likely to assign a top rating. Table 4 columns (4) and (5) indicate that this effect occurs primarily through the withholding of 5's for answers with longer delays. See also discussion in Section VI, relating answerer earnings to effort in minutes.

Regressions use ordinary least squares (OLS), except where otherwise indicated and except that regressions of rating use ordered probit. Throughout, one asterisk denotes significance at the 5% level, and two asterisks denote significance at 1%. All probit regressions report marginal effects.
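
For the ordered probit specifications, statsmodels' OrderedModel offers one possible implementation. This is again a sketch continuing the objects above, not the paper's actual estimation code.

    from statsmodels.miscmodels.ordinal_model import OrderedModel

    ordered_fit = OrderedModel(
        rated["rating"],                          # ordinal outcome: 1 to 5, half points
        rated[["answer_chars", "answer_urls"]],   # illustrative regressors
        distr="probit",
    ).fit(method="bfgs", disp=False)
    print(ordered_fit.summary())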

[FIGURE 6 OMITTED]

VI. EXPERIENCE AND LEARNING ON THE JOB

The labor market literature suggests that on-the-job learning plays a significant role in developing worker skills and facilitating worker productivity (Jovanovic and Nyarko 1996). Significant on-the-job learning is also present at Google Answers.

Answerer experience is easily measured: I define an answerer's "contemporary answerer experience" (or simply experience) as the number of questions that answerer has previously answered. (In contrast, I use the term "ultimate experience" to refer to the number of questions an answerer had answered by the end of the data set.) Google Answers does not directly report an answerer's prior experience; the site provides no mechanism to search an answerer's prior answers. Instead, I construct contemporary experience by indexing all answers extracted from Google Answers and tabulating, for each question at issue, how many questions its answerer had already answered.
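
In practice this construction amounts to a per-answerer cumulative count, as in the following sketch (continuing the data frame above; column names are assumptions):

    # Answered questions, in chronological order of answer
    answered = df.dropna(subset=["answered_at"]).sort_values("answered_at").copy()

    # Questions already answered by this answerer at the time of each answer
    answered["contemporary_experience"] = answered.groupby("answerer").cumcount()

    # Total answers the answerer ultimately provided within the data set
    answered["ultimate_experience"] = (
        answered.groupby("answerer")["answerer"].transform("size"))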

Direct subjective measures of answerer quality--asker rating and asker gratuity--are increasing in contemporary answerer experience as measured by questions previously answered. This result holds with p < .001 in multiple specifications of the regression, with and without regressors for answer length and URL reference count. An answerer with ten more questions of contemporary experience is 0.28% more likely to be rated a 5 (Table 5, column (3)) and receives gratuities $0.025 larger per question (Table 5, column (4)). This is prima facie evidence of learning on the job.

In principle, the higher ratings of more experienced answerers could result from a selection effect wherein higher quality answerers both enjoy higher ratings and elect to participate more or longer. To test this theory, I limit analysis to each answerer's initial ten answers (or fewer, for answerers who dropped out before answering ten questions) (7); a selection effect would imply that answerers who ultimately participate more enjoy higher ratings at the outset. But I find no statistically significant coefficient on the indicator variable for ultimately answering more than ten questions--giving no evidence for a selection effect in answerer retention. See Table 6.

Answerers adjust their behavior to suit asker preferences for length and URL count. More experienced answerers tend to submit answers with the characteristics askers view more favorably: experience takes a positive coefficient when predicting answer length and when predicting URL count. This result holds across all answerers, among new answerers (regressions restricted to each answerer's first ten answers), and among drop-out answerers (who ultimately answer ten or fewer questions). See Table 7.

VII. HOURLY PAY AS A FUNCTION OF EXPERIENCE

In general, it is difficult to measure the amount of time an answerer invests in answering a question. Answerer work time is unobserved even to Google and to the asker--for the answerer merely posts an answer into the Google Answers system, without explicitly reporting time spent on the task. However, answerer effort can be inferred from time that elapses between when a question is asked and when it is answered. Certainly elapsed time is an upper bound on an answerer's time. But group norms and the limited "lock" function induce a race among answerers. (8) As a result, an answerer typically begins to work on a question soon after it is posted, and submits the answer as soon as the answer is complete.

Even a self-interested answerer does not merely minimize effort expenditure (minutes per question); a more sensible objective would be to maximize pay per minute. I therefore form a variable giving the ratio of answer price (in dollars) to minutes of work (measured as described above). I restrict analysis in this section to questions for which an answer was posted within the maximum lock period plus 60 minutes--intended to capture only those questions for which the race condition (described above) was binding and for which the delay between asking and answering a question gives a good measure of answerer effort.
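
A sketch of this construction follows, continuing the variables above; reading "the maximum lock period" as the eight-hour upper bound is my assumption.

    MAX_LOCK_MINUTES = 8 * 60     # longest lock window, per the rules described above
    SLACK_MINUTES = 60

    # Keep only quickly answered questions, for which elapsed time proxies effort
    quick = answered[
        answered["minutes_to_answer"] <= MAX_LOCK_MINUTES + SLACK_MINUTES
    ].copy()
    quick["pay_per_minute"] = quick["price"] / quick["minutes_to_answer"]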

Even with the restriction to quickly answered questions, elapsed time from asking to answering somewhat overstates answerer effort because an answerer may not notice a question immediately after submission, and because an answerer may pause for other projects or interruptions while preparing an answer. The result of this overstatement of effort is a corresponding understatement of levels of pay per minute. However, I have no reason to think the bias varies substantially across different kinds of questions or answerers, so this overstatement of effort does not suggest bias in my estimation of factors affecting pay per minute.

Measuring answerer time as detailed above, the base pay for answerers with no experience is on the order of $0.127 per minute, or about $7.61 per hour. See Table 8.

Regressing pay per minute on answerer experience, I find a statistically significant positive coefficient. (9) The magnitude of this coefficient indicates that, all else equal, another question of answerer experience causes an answerer to earn about $0.0004 more per minute, or about $0.02 more per hour.

Answer length and URL count are significantly positively associated with pay per minute. Answerers who provide longer answers earn higher pay per minute even after controlling for experience. If longer answers are presumed to require more minutes of effort, (10) then the positive association between answer length and pay per minute means that some answerers are exogenously so much more productive that they can provide higher quality (longer) answers while nonetheless earning higher pay per minute. Alternatively, following the suggestion above that longer answers could be less valuable to askers (who might value brevity), the higher pay per minute of long answers might reflect answerer rushing (e.g., foregoing editing could yield longer answers, faster answers, and higher pay per minute).

VIII. SPECIALIZATION

As answerers gain experience, they often specialize in particular kinds of questions. To measure specialization, I consider the number of distinct question categories in which an answerer has recently provided answers. I group categories into "one-digit codes" ("Arts and Entertainment," "Business," "Computers," and so forth) and "two-digit codes" (e.g., within Business: "Advertising," "Accounting," and "Consulting," among others). I measure the number of distinct one- and two-digit codes represented among an answerer's most recent ten answers, reckoned as of the time each answer is submitted. (11) For ease of interpretation, I form a specialization index in which a larger value reflects greater specialization: The specialization index is ten minus the number of distinct categories associated with the answerer's most recent ten answers. I use a specialization measure based on one-digit category codes except where otherwise indicated.
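
The index can be computed per answer as in the following sketch (continuing the data frame above; category_1digit is an assumed column holding each answer's one-digit category code):

    def specialization_index(categories, window=10):
        """Ten minus the number of distinct codes among the prior `window` answers;
        undefined (None) until the answerer has `window` prior answers."""
        result = []
        for i in range(len(categories)):
            prior = categories[max(0, i - window):i]
            result.append(window - len(set(prior)) if len(prior) == window else None)
        return result

    # answered is already sorted chronologically, so each group is in answer order
    answered["specialization"] = (
        answered.groupby("answerer")["category_1digit"]
        .transform(lambda s: pd.Series(specialization_index(list(s)), index=s.index)))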

I find a statistically significant positive coefficient on experience when predicting the specialization index, implying that, on the whole, more experienced answerers are more specialized. (12) See Table 9.

I find statistically significant positive coefficients on the specialization index when predicting ratings and when predicting gratuities. More specialized answerers earn higher ratings and larger gratuities, even when controlling for answerer experience. An answerer who is one unit more specialized (whose prior ten answers stayed within one fewer one-digit category) has a 4.8% greater probability of obtaining a rating of 5 and receives a gratuity $0.19 larger, on average. See Table 10, columns (3) and (4).

I find statistically significant negative coefficients on specialization when predicting pay per hour, implying that more specialized answerers earn less per hour. See Table 11, columns (1) and (2). When an answerer insists on staying within a particular substantive field, it seems the answerer foregoes opportunities in other fields, however lucrative those opportunities might be. This theory is borne out by the third column of Table 11, showing a negative relationship between specialization and average price of answered questions.

Thus, it seems answerer specialization has mixed effects. For question askers, specialization is associated with favorable ratings, making specialization a positive attribute. (Intuitively: "my question was answered by an expert in this field.") But from answerers' perspective, specialization could be recast as lack of versatility--an inability or disinclination to answer whatever questions arise, and therefore a drag on earnings.

With this understanding of answer quality vis-a-vis answerer specialization, Google could improve answer quality by requiring answerers to stay within their one-digit or two-digit category or categories of expertise. Such a rule would prohibit answerers from straying to give answers that are profitable to answerers, but that on average are less well-received by askers.

IX. COMPENSATING DIFFERENTIALS: DAY OF WEEK, HOUR OF DAY

From the perspective of answerers, Google Answers at any instant provides a menu of opportunities--questions that could be answered to earn the payments offered by askers. Availability depends both on what questions have been submitted recently and on what questions have already been answered. Because questions tend to be submitted at certain times of day and on certain days of the week, and because answerers are not always on hand to answer new questions immediately, Google Answers opportunities vary somewhat over the course of each week. Compensating differentials arise from systematic imbalances between the dates and times at which questions tend to be asked versus when they tend to be answered.

Summary statistics indicate several notable day-of-week effects. Sundays have the lowest pay per minute among answered questions and, after Saturday, the second-shortest average lag between asking and answering and the second-fewest questions asked--all suggesting a relative lack of Sunday work for answerers, relative to the number of answerers available. Mondays have the highest pay per minute and the second-longest delay until answer, along with one of the largest numbers of questions asked--suggesting a relative lack of Monday answerers compared with the number of questions asked. These results are consistent with question askers who tend to follow the business week, and with answerers who tend to participate on weekends. See Table 12.

Regressions of pay per minute on dummy variables for Sunday and Monday bear out the day-of-week effects described above: The Sunday variable takes a statistically significant negative coefficient when predicting pay per minute; Monday, positive. See Table 13.

Summary statistics indicate that questions and answers also differ dramatically according to the time of day when posted. There are numerous notable and statistically significant effects, most of them intuitive: For example, questions posted at 8, 9, and 10 p.m. have the fastest answers, while questions posted between 2 a.m. and 7 a.m. have the slowest answers. (13) Although asking and answering questions are both less frequent during the night, disproportionately fewer answers are provided at night relative to the number of questions asked during this period.

Answerers earn a compensating differential for answering questions during the business day. I define the business day as Monday through Friday between 7 a.m. and 3 p.m. Pacific time. (14) A significant positive coefficient results from regressing pay per minute on an indicator reporting whether a question was answered during the business day. That coefficient remains positive even after controlling for answerer experience. However, the coefficient on the interaction of business day and experience is insignificant, suggesting that the compensating differential for answering questions during the business day is no larger for experienced answerers. See Table 14.
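
A sketch of the business-day indicator and a regression in the spirit of Table 14, column (2), continuing the variables above and assuming timestamps are already expressed in Pacific time:

    import statsmodels.api as sm

    answer_time = quick["answered_at"]
    quick["business_day"] = (
        (answer_time.dt.dayofweek < 5)            # Monday (0) through Friday (4)
        & answer_time.dt.hour.between(7, 14)      # 7:00 a.m. up to (but not including) 3:00 p.m.
    ).astype(int)

    X = sm.add_constant(quick[["business_day", "contemporary_experience"]])
    print(sm.OLS(quick["pay_per_minute"], X).fit().summary())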

These results indicate that answerers receive a compensating differential--higher pay per minute--in exchange for answering questions during the business day. Such compensation makes sense in equilibrium because many answerers have more favorable outside employment options during the business day. The net effect is likely larger than Table 14 indicates because business day answers are also more than twice as likely to receive a gratuity (15% rather than 7%) and therefore receive larger gratuities in expectation ($1.34 vs. $0.61). Because gratuities are publicly posted, experienced answerers can discern that business day answers are more likely to receive gratuities.

These differential values of answerer pay per minute seem to embody compensating differentials, not arbitrage opportunities or deviations from equilibrium. To obtain the higher pay per minute, answerers must modify their behavior by answering questions during the business day, a costly change for answerers who have other obligations during the business day. Indeed, more experienced answerers do not tend to take advantage of the compensating differentials. Table 15 indicates that more experienced answerers are significantly less likely to answer questions during the business day, whereas more experienced answerers are not significantly more likely than other answerers to answer questions on Monday and are not significantly less likely to answer on Sundays. These findings match widespread sentiment that the "graveyard shift" is undesirable in traditional industries, despite the additional pay it may offer.

X. DISCUSSION

The Internet lets askers and answerers find each other easily and at modest cost--providing a service of value to both groups. Within the Google Answers data analyzed here, less than half a million dollars bought answers to more than 24,000 questions. Askers' gratuities, comments, and repeated visits indicate their substantial satisfaction. Answerers also appear to be pleased: After Google shut down the Answers service in 2006, some answerers built a new site, Uclue, which continues the Google Answers approach with only slight adjustments to system rules.

Experience at Google Answers also informs the design of a variety of other sites. Numerous "user-generated content" sites now seek to assemble materials from a large number of independent contributors, with or without monetary compensation. Experience at most such sites is mixed: Occasionally a stunning performance attracts millions of YouTube views, but most contributions are of less striking quality. To these sites, Google Answers offers a remarkable success: low fees suffice to inspire answerers to prepare custom offerings, tailored to users' specific requests, on tight timetables and with high quality.

Google Answers also provides a useful data set revealing the effects of experience, specialization, and desirable/undesirable work hours. Although these factors have been widely studied in traditional labor markets, they take on new importance as the Internet makes it increasingly feasible for certain kinds of work to occur primarily or even solely online.

ABBREVIATION

OLS: Ordinary Least Squares


doi:10.1111/j.1465-7295.2011.00414.x

REFERENCES

Adamic, L. A., J. Zhang, E. Bakshy, and M. S. Ackerman. "Knowledge Sharing and Yahoo Answers: Everyone Knows Something." WWW2008: Proceedings of the 17th International Conference on World Wide Web. New York: ACM, 2008, 665-74.

Chen, Y., T.-H. Ho, and Y.-M. Kim. "Knowledge Market Design: A Field Experiment on Google Answers." Journal of Public Economic Theory, 12(4), 2010, 641-64.

Ellison, G., and S. Fisher Ellison. "Search, Obfuscation, and Price Elasticities on the Internet." Econometrica, 77(2), 2009, 427-52.

Goolsbee, A., and J. Chevalier. "Price Competition Online: Amazon Versus Barnes and Noble." Quantitative Marketing and Economics, 1(2), 2003, 203-22.

Jovanovic, B., and Y. Nyarko. "Stepping Stone Mobility." NBER Working Paper 5651, July 1996.

Kuhn, P. "Internet Job Search and Unemployment Durations." American Economic Review, 94(1), 2004, 218-32.

Rafaeli, S., D. Raban, and G. Ravid. "Social and Economic Incentives in Google Answers," in Google's Growth, A Success Story, edited by K. Sangeetha and P. Sivarajadhanavel. Hyderabad, India: ICFAI University Press, 2007, 150-61.

Regner, T. "Why Voluntary Contributions? Google Answers!" Technical Report Working Paper No. 05/115, Centre for Market and Public Organisation, University of Bristol. January 2005.

(1.) I do not observe questions Google removed, for example due to profane language.

(2.) Google Answers remained operational through November 30, 2006, at which point Google "retired" the service and ceased accepting new questions. As of that date, Google Answers hosted 53,087 questions. It therefore appears that my data truncation omits approximately 19% of questions ultimately submitted. Rafaeli, Raban, and Ravid (2005) examine Google Answers data partially overlapping with my sample but extending somewhat later, and they find little difference in prices or ratings.

(3.) Google Answers lock terms have changed somewhat over time. I lack precise information about the prior rules and the dates of transition between rules. However, my sense is that the changes are small relative to the other effects discussed.

(4.) The reason why askers provide such gratuities is itself something of a puzzle. Gratuities might have reputational benefits to askers, including increasing the expected total revenue to answerers who answer the asker's future questions. But Google Answers' search function does not facilitate searching by asker, that is, to determine whether a given asker is one who paid tips in the past and might therefore be likely to tip in the future. Nonetheless, gratuities are not mere follies of novice askers; tip amount is positively associated with asker experience (p < .001). If askers are spending others' money, agency problems might explain gratuities. But gratuities are only weakly positively associated with submitting a question during the business day, one possible method of distinguishing business askers from personal askers. Regner (2005) and Adamic et al. (2008) further explore incentives to tip.

(5.) Of course, answer length and URL count need not always be positively associated with answer quality: Sometimes, a more concise answer may be preferable.

(6.) The result holds in ordered probit regressions, in OLS regressions of ordinal rating (1 to 5), in regressions which transform ordinal ratings via the inverse logit function, in probit regressions for which rating is expressed as a Boolean value of 5 versus otherwise, and in probit regressions for which rating is a Boolean of 4 or higher versus otherwise.

(7.) Throughout, regressions with other thresholds yielded qualitatively similar results.

(8.) Google Answers provides a "lock" function that lets one answerer obtain the exclusive right to answer a question within a limited time period. However, an answerer may only lock two questions at a time. A busy answerer therefore seeks to answer questions promptly, to free lock capacity and to remain available to accept additional questions as they appear. Furthermore, locking two questions at once is unusual and disfavored by group norms. See Google Answers: Researcher Guidelines, "Can I lock more than one question at a time?" http://answers.google.com/answers/researcherguidelines.html#locktwo.

(9.) This coefficient, like others predicting pay per minute, remains significant when regressions are run in logs of pay per minute rather than in levels.

(10.) The data show a clear positive association between answer length and minutes worked: The OLS regression of answer length on minutes worked yields a positive coefficient with p < .001. This effect remains even when controlling for price and rating.

(11.) This result also holds when distinct categories are counted among an answerer's most recent 5 or most recent 20 answers. To avoid bias from each answerer's initial answers (for which the count of distinct prior categories would necessarily be pushed downward by the small number of prior answers), the analysis considers only answers beyond an answerer's first 10 answers (or, correspondingly, first 5 or first 20).

(12.) For purposes of this paragraph, I limit analysis to each answerer's first 100 answers. The few answerers who have answered more than 100 questions defy the relationship described here: to answer so many questions, an answerer must accept questions from a broader swath of categories.

(13.) All times are U.S. Pacific time.

(14.) I lack information about answerers' home time zones. This interval reflects my attempt to produce a single representative business day, based on my understanding that most askers are based in North America and therefore tend to follow its time zones and business day.

Edelman: Harvard Business School, Baker Library 445, 25 Harvard Way, Soldiers Field, Boston, MA 02163. Phone 617-496-2055. E-mail bedelman@hbs.edu

 TABLE 1
 Summary Statistics

Google Answers began                                        April 2002
Data ends                                                   November 2003
Number of questions asked                                   43,262
Number of questions answered                                24,290
Number of distinct question askers                          24,724
Number of distinct question answerers                       534
Average dollar value of answered questions                  $18.91
Maximum dollar value of answered questions                  $200.00
Minimum dollar value of answered questions                  $2.00
Total revenues to answerers from all answered questions     $344,495.46
Total revenues to Google from all questions                 $136,012.82
Max questions answered by a single answerer                 960
Max dollar value of answers by a single answerer            $17,495.60
Proportion of answered questions receiving gratuities       15.6%
Average gratuity amount (among answers with gratuities)     $8.77

 TABLE 2
Answer Ratings (Among Rated, Answered
 Questions)

Rating Count Frequency

5 343 0.014
4.5 16183 0.666
4 7036 0.290
3.5 483 0.020
3 138 0.006
2.5 17 0.001
2 12 0.000
1.5 4 0.000
1 9 0.000

 TABLE 3
 What Askers Value: Length, URL References

 (1) (2) (3)
 Ordered Ordered Ordered
 Probit: Rating Probit: Rating Probit: Rating

Answer length 5.03E-06 9.13E-06
 (1.84e-06) ** (2.51e-06) **

Number of URLs 4.78E-05 3.08E-03
 (9.36e-04) (1.23E-03) **

 (4) (5) (6)
 Probit: Rating Probit: Rating Probit: Rating
 [greater than or [greater than or [greater than
 equal to] 4 equal to] 4 or equal to] 4

Answer length 8.25E-07 8.06E-07
 (2.82e-07) ** (3.69e-07) *

Number of URLs 2.87E-04 1.45E-05
 (1.47e-04) ** (1.87E-04)

 (7)
 Probit: Rating (8)
 [greater than or Probit: Rating = (9)
 equal to] 4.5 5 Gratuity

Answer length 2.51E-07 1.01E-06 1.20E-04
 (3.85e-07) (4.46e-07) * (5.83e-06) **

Number of URLs

Constant 0.831
 (0.048) **

 (10) (11)
 Gratuity Gratuity

Answer length 1.05E-04
 (6.93e-06) **

Number of URLs 4.70E-02 1.53E-02
 (3.26e-03) ** (3.86e-03) **

Constant 1.011 0.782
 (0.048) ** (0.050) **

* Significant at 5%; ** significant at 1%.

 TABLE 4
 What Askers Value: Time

 (1) (2) (3)
 Ordered Ordered Ordered
 Probit: Rating Probit: Rating Probit: Rating

Answer time lapse -1.08E-07 -1.69E-07 -1.65E-07
 (in minutes) (9.51e-07) ** (1.26e-06) ** (1.26e-06) **

Answer time 4.00E-11 3.89E-11
 lapse (2) (5.69e-12) ** (5.70e-12) **

Answer length 7.57E-07
 (1.22e-06)

Number of URLs -3.56E-03
 (6.85E-04) **

 (4)
 Probit: Rating (5)
 [greater than Probit:
 or equal to] Rating = 5

Answer time lapse -1.27E-07 -7.17E-06
 (in minutes) (5.98E-07) (5.91E-07) **

Answer time 7.60E-12 1.70E-11
 lapse (2) (1.68E-11) (2.51E-12) **

Answer length 7.98E-07 2.41E-06
 (3.69e-07) * (5.52e-07) **

Number of URLs 1.48E-05 -1.02E-03
 (1.80E-04) (3.12E-04) **

* Significant at 5%; ** significant at 1%.

 TABLE 5
 Change in Ratings with Experience

 (2)
 (1) Probit Rating:
 Ordered [greater than
 Probit: Rating or equal to] 4

Contemporary experience 1.06E-03 1.05E-04
 (6.78e-05) ** (1.06e-05) **

Constant

 (3)
 Probit: (4)
 Rating = 5 Gratuity

Contemporary experience 2.76E-04 2.46E-03
 (1.94e-05) ** (2.52e-04) **

Constant 1.041 (5.28e-2) **

 (5) (6)
 Ordered Probit: Probit: Rating
 Rating [greater than
 or equal to] 4

Contemporary experience 1.06E-03 1.03E-04
 (6.80e-05) ** (1.06e-05) **

Answer length 8.61E-06 7.26E-07
 (2.51e-06) ** (3.57e-07) *

Number of URLs -3.78E-03 -5.18E-05
 (1.22e-04) ** (1.69E-04)

Constant

 (7)
 Probit: (8)
 Rating = 5 Gratuity

Contemporary experience 2.79E-04 2.23E-03
 (1.95e-05) ** (2.50e-04) **

Answer length 2.11E-06 1.05E-04
 (5.52e-07) ** (6.92E-06) **

Number of URLs -1.37E-03 1.38E-02
 (3.24e-04) ** (3.86E-03) **

Constant 0.502 (5.89E-02) **

* Significant at 5%; ** significant at 1%.

 TABLE 6
 Change in Ratings with Experience: Testing for
 Selection Effects (Among Each Answerer's
 First Ten Answers)

 (2)
 (1) Probit: Rating
 Ordered [greater than (3)
 Probit: Rating or equal to] 4 Gratuity

Contemporary 3.13E-02 2.34E-03 5.97E-02
 experience (8.44e-03) ** (1.54e-03) (2.12e-02) **

Future experience -4.08E-02 1.13E-02 -0.139
 [greater than or (6.43e-02) (1.19e-02) (0.162)
 equal to] 10

Constant 0.356
 (0.139) **

* Significant at 5%; ** significant at 1%.

 TABLE 7
 Change in Answer Characteristics with Experience

 (1) (2) (3) (4)
 Number of Number of Number of Answer
 URLs URLs URLs Length

Experience 0.004 0.102 0.039 1.589
 (0.000) ** (0.039) ** (0.111) (0.275) **

Constant 6.939 5.436 4.968 4,241.972
 (0.104) ** (0.211) ** (0.350) ** (57.751) **

Observations 24290 3970 978 24290

Restriction See notes See notes

 (5) (6)
 Answer Answer
 Length Length

Experience 92.960 130.033
 (38.481) * (95.878)

Constant 3,117.439 2,865.02
 (206.199) ** (301.982) **

Observations 3970 978

Restriction See notes See notes

Notes: Columns (1) and (4) consider all answered questions.
Columns (2) and (5) consider all answered questions for which
contemporary answerer experience was [less than or equal to] 10.
Columns (3) and (6) consider all answered questions for which
ultimate answerer experience remained [less than or equal to] 10.

* Significant at 5%; ** significant at 1%.

 TABLE 8
 Hourly Pay and Experience

 (1) (2) (3)
 Pay Per Pay Per Pay Per
 Minute Minute Minute

Experience 4.394e-05 3.868e-05
 (1.451e-05) ** (1.452e-05) **

Answer Length 1.649e-06 1.641e-06
 (4.007e-07) ** (4.006e-07) **

Number of URLs 6.173e-04 5.910e-04
 (2.231e-04) ** (2.233e-04) **

Constant 0.143 0.137 0.132
 (0.003) ** (0.003) ** (0.003) **

Observations 24098 24098 24098

Restriction

 (4) (5) (6)
 Pay Per Pay Per Pay Per
 Minute Minute Minute

Experience 4.683e-04 4.287e-04
 (1.186e-04) ** (1.190e-04) **

Answer Length 2.371e-06 2.283e-06
 (6.415e-07) ** (6.417e-07) **

Number of URLs 2.147e-04 1.570e-04
 (3.404e-04) (3.406e-04)

Constant 0.127 0.132 0.118
 (0.005) ** (0.004) ** (0.006) **

Observations 14483 14483 14483

Restriction Answerer contemporary experience [less than
 or equal to] 100

Notes: Columns (1) through (3) consider all answered questions,
whereas (4) through (6) consider only those answered questions
for which the answerer, at the time of answering the question,
had experience [less than or equal to] 100.

* Significant at 5%; ** significant at 1%.

 TABLE 9
 Change in Specialization with Experience

 (1) (2)
 Specialization: Specialization:
 One-Digit Two-Digit

Experience 3.64E-03 2.55E-03
 (.711e-04) ** (5.476e-04) **

Constant 2.037 4.83
 (0.030) ** (0.028) **

Notes: Results consider only those answered questions
for which the answerer, at the time of answering the
question, had experience [less than or equal to] 100.

* Significant at 5%; ** significant at 1%.

 TABLE 10
 Change in Ratings with Specialization

 (2)
 (1) Probit: Rating
 Ordered [greater than or
 Probit: Rating equal to] 4

Specialization 6.21E-02 6.29E-02
 (1.01e-02) ** (1.35e-(13) **

Experience 1.55E-03 1.34E-03
 (5.91e-04) ** (7.56E-04)

Constant

 (3)
 Probit: (4)
 Rating = 5 Gratuity

Specialization 4.83E-02 0.193
 (8.09e-03) ** (0.038) **

Experience 8.22E-04 1.27E-02
 (4.78e-04) (2.27e-03) *

Constant 0.193 (0.141)

Notes: Results consider only those answered questions for which
the answerer, at the time of answering the question, had
experience [less than or equal to] 100.

* Significant at 5%; ** significant at 1%.

 TABLE 11
Specialization and Pay Per Minute, Average Question Price

 (1) (2)
 Pay Per Minute Pay Per Minute

Specialization -7.514e-03 -7.194e-03
 (1.032e-03) ** (1.072e-03) **

Experience 1.668e-05
 (1.505e-05)

Constant 0.170 0.167
 (0.004) ** (0.005) **

 (3) (4)
 Question Price Question Price

Specialization -4.044e-01 -4.435e-01
 (8.497e-02) ** (8.824e-02) **

Experience -2.038e-03
 (1.239e-03)

Constant 20.014 20.392
 (0.307) ** (0.383) **

* Significant at 5%; ** significant at 1%.

 TABLE 12
 Summary Statistics by Day of Week

 Avg Wage/ Avg Time Num Questions
Day Minute Diff Asked

Sunday 0.1351 2031.62 5,259
Monday 0.1632 2361.65 6,806
Tuesday 0.1536 2072.81 7,030
Wednesday 0.1526 2237.70 6,963
Thursday 0.1480 2198.25 6,696
Friday 0.1443 2533.73 5,808
Saturday 0.1442 2026.29 4,699

 TABLE 13
 Compensating Differentials by Day of Week

 (1) (2)
 Pay Per Minute Pay Per Minute

Is Sunday -1.63E-02 -1.64E-02
 (7.516e-03) * (1.93e-02)

Interact Experience > 10 and Sunday -4.83E-04
 (2.10E-02)

Is Monday

Interact experience > 10 and Monday

Experience >10 3.44E-02
 (6.786e-03) **

Constant 0.151 0.122
 (0.003) ** (0.006) **

 (3) (4)
 Pay Per Minute Pay Per Minute

Is Sunday

Interact Experience > 10 and Sunday

Is Monday 1.61E-02 -2.14E-02
 (6.598e-03) * (1.72E-02)

Interact experience > 10 and Monday 4.31E-02
 (1.861e-02) *

Experience >10 2.79E-02
 (6.916e-03) **

Constant 0.147 0.124
 (0.003) ** (0.006) **

* Significant at 5%; ** significant at 1%.

 TABLE 14
 Compensating Differentials during the Business Day

 (1) (2) (3)
 Pay Per Pay Per Pay Per
 Minute Minute Minute

Business day 1.40E-02 1.43E-02 1.38E-02
 (4.866e-03) ** (4.866e-03) ** (6.234e-03) *

Experience 4.48E-05 4.34E-05
 (1.451e-05) ** (1.833e-05) *

Interact business day 3.83E-06
 and experience (3.00E-05)

Constant 0.144 0.138 0.138
 (0.003) ** (0.004) ** (0.004) **

* Significant at 5%; ** significant at 1%.

 TABLE 15
 Business Day Answers and Experience

Probit: Answer Posted During Business Day

Experience -1.59E-04
 (5.01e-05) **

** Significant at 1%.