Pure Derivation of the Exact Fine-Structure Constant as a Ratio of Two Inexact Metric Constants
Theorists at the Strings Conference in July of 2000 were asked what mysteries remain to be revealed in the 21st century. Participants were invited to help formulate the ten most important unsolved problems in fundamental physics, which were finally selected and ranked by a distinguished panel of David Gross, Edward Witten and Michael Duff. No questions were more worthy than the first two problems respectively posed by Gross and Witten:
#1: Are all the (measurable) dimensionless parameters that characterize the physical universe calculable in principle or are some merely determined by historical or quantum mechanical accident and incalculable?
#2: How can quantum gravity help explain the origin of the universe?
A newspaper article about these millennial mysteries expressed some interesting comments about the #1 question. Perhaps Einstein indeed "put it more crisply: Did God have a choice in creating the universe?" - which summarizes quandary #2 as well. While certainly the Eternal One "may" have had a "choice" in Creation, the following arguments will conclude that the reply to Einstein's question is an emphatic "No." For even more certainly a full spectrum of unprecedented, precise fundamental physical parameters are demonstrably calculable within a single dimensionless Universal system that naturally comprises a literal "Monolith."
Likewise the article went on to ask if the speed of light, Planck's constant and electric charge are indiscriminately determined - "or do the values have to be what they are because of some deep, hidden logic. These kinds of questions come to a point with a conundrum involving a mysterious number called alpha. If you square the charge of the electron and then divide it by the speed of light times Planck's ('reduced') constant (multiplied by 4π times the vacuum permittivity), all the (metric) dimensions (of mass, time and distance) cancel out, yielding a so-called 'pure number' - alpha, which is just over 1/137. But why is it not precisely 1/137 or some other value entirely? Physicists and even mystics have tried in vain to explain why."
Which is to say that while constants such as a fundamental particle mass can be expressed as a dimensionless relationship relative to the Planck scale, or as a ratio to a somewhat more precisely known or available unit of mass, the inverse of the electromagnetic coupling constant alpha is uniquely purely dimensionless as the "fine-structure number" a ~ 137.036. On the other hand, assuming a unique, invariantly discrete or exact fine-structure numeric exists as a "literal constant," the value must still be empirically confirmable as a ratio of two inexactly determinable "metric constants," h-bar and electric charge e (light speed c being exactly defined in the 1983 adoption of the SI convention as an integer number of meters per second).
So though this conundrum has been deeply puzzling almost from its inception, my impression upon reading this article in a morning paper was utter amazement that a numerological issue of invariance merited such distinction by eminent modern authorities. For I had been obliquely obsessed with the fs-number in the context of my colleague A. J. Meyer's model for a number of years, but had come to accept its experimental determination in practice, pondering the dimensionless issue periodically to no avail. Gross's question thus served as a catalyst from my complacency, my recognizing a unique position as the only fellow who could provide a categorically complete and consistent answer in the context of Meyer's main fundamental parameter. Still, my pretentious instincts led to two months of inane intellectual posturing until I sanely repeated a simple procedure explored a few years earlier. I merely looked at the result using the 98-00 CODATA value of a, and the following solution immediately struck with full heuristic force.
For the fine-structure ratio effectively quantizes (via h-bar) the electromagnetic coupling between a (squared) discrete unit of electric charge (e) and a photon of light, in the same sense that an integer is discrete or "quantized" compared to the "fractional continuum" between it and 240 or 242. One can easily see what this means by considering another integer, 203, from which we subtract the base-2 logarithm of the square of 2π. Now add the inverse of 241 to the resultant number, multiplying the sum by the natural log of 2 - that is, ln(2) × (203 - log₂((2π)²) + 1/241). It follows that this pure calculation of the fine-structure number exactly equals 137.0359996502301... - which here is given to 16 digits, but is calculable to any number of decimal places.
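The recipe above - subtract the base-2 logarithm of (2π)² from 203, add 1/241, multiply by ln(2) - can be verified in a few lines of Python. (A sketch; reading "the 2-based exponential" as the base-2 logarithm is my interpretation of the prose, it being the only reading that reproduces the quoted digits.)

```python
import math

# Pure-number recipe for the fine-structure number a:
# start from 203, subtract the base-2 logarithm of (2*pi)^2,
# add 1/241, then multiply the whole sum by ln(2).
a = math.log(2) * (203 - math.log2((2 * math.pi) ** 2) + 1 / 241)

print(f"{a:.11f}")  # 137.03599965023
```

Double precision reproduces the quoted 137.0359996502301 to all but perhaps the last digit or two.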
By comparison, given the experimental uncertainty in h-bar and e, the NIST evaluation varies up or down around the middle "6" of the "965" in the invariant sequence defined above. The following table accordingly gives the values of h-bar and e, their calculated ratio as a, and the actual NIST choice for a in each year of their archives, as well as the 1973 CODATA, where the standard two-digit experimental uncertainty in the final digits is given in parentheses.
year:   h-bar = Nh × 10^-34 J·s   e = Ne × 10^-19 C    a calculated from h-bar, e   a (NIST value)
2006:   1.054571628(53)           1.602176487(40)      137.035999661                137.035999679(94)
2002:   1.05457168(18)            1.60217653(14)       137.035999063                137.03599911(46)
1998:   1.054571596(82)           1.602176462(63)      137.035999779                137.03599976(50)
1986:   1.05457266(63)            1.60217733(49)       137.035989558                137.0359895(61)
1973:   1.0545887(57)             1.6021892(46)        137.036043335                137.03604(11)
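The calculated column can be recomputed from the first two, since a = 4πε₀ħc/e², where c is exact by the 1983 definition and, in the pre-2019 SI, 4πε₀ = 10^7/c² is exact as well (μ₀ being defined as 4π × 10⁻⁷). A short Python check using the table's values:

```python
C = 299792458.0            # speed of light in m/s, exact since 1983
FOUR_PI_EPS0 = 1e7 / C**2  # exact in the pre-2019 SI (mu_0 = 4*pi*1e-7)

def fine_structure_number(hbar, e):
    """Inverse coupling a = 4*pi*eps0*hbar*c / e^2 (dimensionless)."""
    return FOUR_PI_EPS0 * hbar * C / e**2

# (year, hbar / 1e-34 J*s, e / 1e-19 C, tabulated a)
rows = [
    (2006, 1.054571628, 1.602176487, 137.035999661),
    (2002, 1.05457168,  1.60217653,  137.035999063),
    (1998, 1.054571596, 1.602176462, 137.035999779),
    (1986, 1.05457266,  1.60217733,  137.035989558),
    (1973, 1.0545887,   1.6021892,   137.036043335),
]
for year, nh, ne, a_tab in rows:
    a_calc = fine_structure_number(nh * 1e-34, ne * 1e-19)
    print(year, f"{a_calc:.9f}")  # compare with the tabulated column
```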
So it seems the NIST choice is roughly determined by the measured values for h-bar and e alone. However (as explained at http://physics.nist.gov/cuu/Constants/alpha.html), by the '80s interest shifted to a new approach that provides a direct determination of a by exploiting the quantum Hall effect, independently corroborated by both the theory and experiment of the electron magnetic-moment anomaly, thus reducing its already finer-tuned uncertainty. Yet it took 20 years before an improved measurement of the magnetic-moment g/2 factor was published in mid 2006, where this group's estimate for a was (A:) 137.035999710(96) - explaining the much-reduced uncertainty in the new NIST list, as compared to that in h-bar and e. However, recently a numeric error (http://hussle.harvard.edu/~gabrielse/gabrielse/papers/2006/NewFineStructureConstant.pdf) in the initial QED calculation (A:) was discovered, which shifted that value of a to (B:) 137.035999070(98).
Though it reflects a nearly identical small uncertainty, this assessment lies clearly outside the NIST value concordant with the estimates for h-bar and elementary charge, which are independently determined by various experiments. The NIST has three years to sort this out, but meanwhile faces an embarrassing irony in that at least the 06 choices for h and e seem slightly skewed toward the expected fit for a! For example, adjusting the last three digits of the 06 data for h and e to accord with our pure a-number yields an imperceptible adjustment to e alone, giving the ratio h628/e487.065. Had the QED error been corrected prior to the actual NIST publication in 2007, it rather easily could have been evenly adjusted to h626/e489, though that would call into question its coherency in the last three digits of a with respect to the comparative 02 and 98 data. In any case, far vaster improvements in multiple experimental designs will be required for a comparable reduction in the error for h and e in order to settle this issue for good.
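The "h628/e487.065" adjustment can be checked directly: hold h-bar at its 2006 value, set a to the pure number, and solve a = 4πε₀ħc/e² for e. A sketch in Python (the constants come from the table above; the solving step is mine):

```python
import math

C = 299792458.0            # m/s, exact by the 1983 SI definition
FOUR_PI_EPS0 = 1e7 / C**2  # exact in the pre-2019 SI

HBAR_2006 = 1.054571628e-34  # J*s, CODATA 2006 (the "h628" above)
A_PURE = 137.0359996502301   # the pure fine-structure number

# Invert a = 4*pi*eps0*hbar*c / e^2 to find the charge that fits exactly:
e_fit = math.sqrt(FOUR_PI_EPS0 * HBAR_2006 * C / A_PURE)

print(f"e = {e_fit:.15e} C")
# The result differs from CODATA's 1.602176487(40)e-19 C only beyond
# the ninth decimal of the mantissa - far inside the quoted uncertainty.
```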
But again, even then, no matter how "precisely" metric measure is maintained, it is still infinitely short of "literal exactitude," while our pure fs-number fits the present values of h628/e487 quite precisely. In the former regard, I recently discovered that a mathematician named James Gilson (http://www.maths.qmul.ac.uk/~jgg/page5.html) had also devised a pure numeric = 137.0359997867..., nearer the revised 98-01 standard. Gilson contends he has also calculated numerous parameters of the standard model, such as the dimensionless ratio between the masses of the W and Z weak gauge bosons. I know he could never construct a single "Proof" employing equivalencies capable of deriving both Z and/or W masses per se from the thus-proven precise masses of heavy quarks, Higgs fields or hadrons (http://ezinearticles.com/?The-Z-Boson-Mass-And-Its-Formula-As-Multiple-Proofs-In-One-Yummy-Bowl-Of-Pudding&id=757900), which themselves result from a single over-riding dimensionless tautology.
For the numeric discreteness of the fraction 1/241 allows one to construct physically meaningful dimensionless equations. If one instead took Gilson's numerology, or the refined empirical value of Gabrielse et al., for the fs-number, either would destroy this discreteness, precise self-consistency and the ability even to write a meaningful numeric equation! By contrast, perhaps it is then not too surprising that after I literally looked for and/or found the integer 241, and then derived the exact fine-structure numerical constant from the resultant Monolith Number, it took only about two weeks to calculate all six quark masses utilizing real dimensionless analysis and various fine-structured relations.
But as we now aren't really talking about the fine-structure number per se any more than the integer 137, the result definitively answers Gross's question. For those "dimensionless parameters that characterize the physical universe" (including alpha) are ratios between selected metric parameters that lack a single unified dimensionless system of mapping from which all metric parameters, like particle masses, are derivable from set equations. The standard model gives one a single system of parameters, but no means to calculate or predict any one and/or all within a single system - thus the experimental parameters are put in by hand, arbitrarily. Final irony: I'm doomed to be demeaned as a "numerologist" by the "experimentalists" who can't recognize a hard empirical proof for quark, Higgs, or hadron masses that are used to exactly calculate the present standard for the most precisely known and heaviest mass in high energy physics. So, contraire, foolish ghouls: empiric confirmation is just the final cherry the chef puts on top before he presents a Pudding Proof no sane man can, or should, resist just because he could never assemble it himself, so instead makes a mimicked mess the real deal doesn't resemble - for the base of this pudding is made from melons I call Mumbers, which are really just numbers, pure and simple!
Sean Sheeter is an independent theorist, geometer and author of 241-Mumbers: The Definitive Data for Fundamental Physics and Cosmology. Interested parties are encouraged to visit http://www.241mumbers.com and also explore our Sample Data & Proofs page that includes the body of the above reference.