
T. Lasserre, The Reactor Antineutrino Anomaly

March 17, 2011

The speaker is part of a group which recently reassessed the flux of reactor neutrinos, a fundamental factor in the interpretation of short-baseline reactor neutrino experiments. Their calculation, published in January this year, showed that the flux had previously been underestimated by over 3%, with important consequences. I (T.D.) mentioned that result in my blog a few months ago. Here I will give a transcript of the talk given yesterday by Thierry Lasserre, apologizing for any cryptic sentences I typed down in haste.

Lasserre started by clarifying that for neutrinos emitted by reactors the knowledge of their flux is the dominant source of systematics for single-detector reactor neutrino experiments. The flux depends on the thermal power, the energy released per fission, and the isotopes that are emitting neutrinos: uranium 235 and 238, plutonium 239, and plutonium 241.

The neutrino spectrum of each fission is a factor entering the calculation, and it is the focus of the presentation. It is labeled “S_k(E)”, where k labels the different isotopes. One also has to know the fraction of fissions from each isotope; these fractions are largely anticorrelated.
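
As a minimal sketch of how these pieces combine (with purely illustrative fission fractions, not the values used in the analysis), the total reactor spectrum is just the fission-fraction-weighted sum of the per-isotope spectra:

    # Illustrative mid-cycle fission fractions (hypothetical round numbers):
    # they sum to one, and the uranium and plutonium shares are anticorrelated
    # because plutonium builds up as the uranium burns.
    alpha = {"U235": 0.55, "U238": 0.07, "Pu239": 0.30, "Pu241": 0.08}

    def total_spectrum(E, S_k):
        """Total antineutrino spectrum per fission: sum_k alpha_k * S_k(E).

        S_k maps each isotope label to its per-fission spectrum,
        a callable of the neutrino energy E.
        """
        return sum(alpha[k] * S_k[k](E) for k in alpha)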

In S_k(E) enters the sum of fission product activities. These in turn are expressed as a sum over all beta branches of each fission product. To know the latter, one must use the theory of beta decay: there you have a normalization factor, a Fermi function that accounts for the Coulomb field of the nucleus, a phase space factor, and further correction factors for shape and weak magnetism. This calculation has to be done for all the branches in order to derive S_k(E).
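
A toy version of the per-branch shape is sketched below, keeping only the phase-space factor and a crude non-relativistic Fermi function; the shape and weak-magnetism corrections mentioned above are omitted, so this is an illustration and not the group's actual prescription (Python):

    import numpy as np

    M_E = 0.511          # electron mass [MeV]
    ALPHA = 1 / 137.036  # fine-structure constant

    def fermi_function(Z, T):
        """Crude non-relativistic Fermi function F(Z, E) for daughter charge Z."""
        E = T + M_E                                        # total electron energy
        p = np.sqrt(np.clip(E**2 - M_E**2, 1e-12, None))   # electron momentum
        eta = ALPHA * Z * E / p
        return 2 * np.pi * eta / (1 - np.exp(-2 * np.pi * eta))

    def branch_spectrum(T, T0, Z):
        """Allowed beta spectrum dN/dT ~ p E (T0 - T)^2 F(Z, T), endpoint T0 [MeV]."""
        E = T + M_E
        p = np.sqrt(np.clip(E**2 - M_E**2, 0.0, None))
        spec = p * E * (T0 - T)**2 * fermi_function(Z, T)
        return np.where((T > 0) & (T < T0), spec, 0.0)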

Lasserre explained that there are two complementary approaches to compute the neutrino flux: he called them an “ab initio” calculation and an “integral measurement”. With the ab initio calculation you run into missing information in the databases of fission product inventories, and you cannot do better than about a 10% uncertainty. Alternatively, you can use an integral measurement: you take measurements of the electron spectra emitted by the fissile isotopes irradiated in a reactor core, and convert them into neutrino spectra. This effective approach can be mixed with the other one, giving rise to a hybrid method, which is the technique that Lasserre’s group adopted.

Accurate electron spectrum measurements were made at ILL (a very short baseline reactor experiment) in the 1980s. They had a high-resolution magnetic spectrometer, and made extensive use of internal conversion electron lines to obtain a 1.8% measurement of the normalization. The ILL data are thus a unique reference that any other measurement or calculation must confront. One has to be careful, though: a problem with the electron kinetic energy at high values propagates everywhere else in the calculation.

Once you have the electron spectrum you have to convert it to get the neutrino spectrum. The spectrum contains about 10,000 beta branches, most of which are not known individually. You may thus choose to fit some 30 “effective” branches, from the high-energy end down to the lowest energies, apply the corrections and a few tricks, and associate an effective nuclear charge Z to each branch.
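
The fit itself is not reproduced here, but the kinematic core of the conversion is simple: for each (effective) branch the electron and the antineutrino share the endpoint energy, so the antineutrino spectrum at energy E_nu mirrors the electron spectrum at T = T0 - E_nu, neglecting nuclear recoil. A sketch, reusing the toy branch_spectrum above with a hypothetical list of fitted branches:

    def neutrino_spectrum(E_nu, branches):
        """Antineutrino spectrum summed over fitted effective beta branches.

        branches: list of (amplitude, endpoint T0 [MeV], effective charge Z),
        e.g. the ~30 effective branches fitted to the ILL electron data.
        Recoil is neglected, so E_nu + T_electron = T0 for each branch.
        """
        return sum(A * branch_spectrum(T0 - E_nu, T0, Z) for A, T0, Z in branches)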

The last important point is the off-equilibrium effects. The electron spectra at ILL were obtained after at most 1.8 days of irradiation time, while reactor neutrino experiments have irradiation times of many months. About 10% of the fission products have beta decay lifetimes long enough to keep accumulating after several days, so a correction from simulation is needed.

Once everything is accounted for, this leads to the reactor antineutrino anomaly. The original paper was published by the group in January, and revised in February (1101.2755).

Up to now they see a 3% increase in flux, but neutrinos are not what is measured directly: what reactor neutrino experiments measure is inverse beta decay. This process is very well known, but different theoretical predictions exist. The cross section can be written in terms of the emitted positron energy and momentum multiplied by a few correction factors; the expression is multiplied by a “pre-factor” k.
The pre-factor k is very important. It can be obtained from two approaches. Its value has risen over the years, from 0.914E-42 cm^2 in 1981 to 0.956E-42 cm^2 in 2010 (using the neutron lifetime in the 2010 PDG). Lasserre pointed out that they anticipate a further increase to 0.961 in 2011, following a 0.5% revision of the neutron lifetime.
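
At leading order the inverse-beta-decay cross section is simply sigma(E_nu) ≈ k · E_e · p_e, with E_e ≈ E_nu - (m_n - m_p) the positron energy; the quoted evolution of k reflects, among other things, the neutron lifetime and the (1 + 3 g_a^2) factor. A minimal sketch of that leading-order expression (the correction factors, and any particular value of k, are left out):

    import numpy as np

    DELTA = 1.293  # m_n - m_p [MeV]
    M_E = 0.511    # positron mass [MeV]

    def sigma_ibd(E_nu, k):
        """Leading-order inverse beta decay cross section, corrections omitted.

        k is the pre-factor discussed above (e.g. the 2010 value quoted in the
        talk); the result carries the units of k times MeV^2.
        """
        E_e = E_nu - DELTA                                   # positron total energy
        p_e = np.sqrt(np.clip(E_e**2 - M_E**2, 0.0, None))   # zero below threshold
        return k * E_e * p_e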

The new cross section increases because of the increase of the flux, the off-equilibrium correction, etcetera. They reanalyzed 19 results from single-detector reactor neutrino experiments with baselines smaller than 100 meters. Including the revised neutron lifetime, the newly computed ratios of observed to predicted rates go down.

For the errors and correlations, the guiding principles they followed were to be conservative and to remain numerically stable. There is a 2% systematic uncertainty correlated across all measurements. A fit produces mu=0.943, with a chi-square of 19.6 for 19 degrees of freedom. The deviation from unity would be at 99.3% CL if the ratio were Gaussian-distributed, but since it is a ratio of Gaussians they use toy Monte Carlo experiments, which reduce the significance to 98.6%.
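
A rough illustration of why the toy Monte Carlo matters: a ratio of two Gaussian-distributed quantities has slightly heavier tails than a Gaussian, so the p-value of a given deficit shifts. The numbers below (a 0.943 ratio with an assumed ~2% spread on each ingredient) are purely illustrative and do not reproduce the 19-experiment fit:

    import numpy as np

    rng = np.random.default_rng(0)
    n_toys = 1_000_000

    # Toy model: observed and predicted rates each fluctuate as Gaussians
    # around 1 with an assumed ~2% spread (not the paper's full error model
    # with its correlated systematics).
    observed = rng.normal(1.0, 0.02, n_toys)
    predicted = rng.normal(1.0, 0.02, n_toys)
    ratio = observed / predicted

    # One-sided p-value: how often a "no deficit" toy fluctuates down to 0.943 or below.
    p_value = np.mean(ratio <= 0.943)
    print(f"p-value ~ {p_value:.4f}  (~{(1 - p_value) * 100:.1f}% CL for a deficit)")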

There are thus three alternatives:

  1. their conversion calculations are wrong; the anchoring to the ILL electron data is, however, unchanged with respect to the old prediction
  2. there is a bias in all short baseline experiments: this is unlikely…
  3. the deficit is due to oscillations into steriles or other modes of oscillation.

The speaker mentioned the implications for the determination of the theta_13 angle: if you change the normalization in the flux, you may have an effect at distances of 1-2 km, which is not due to a non-zero value of the angle. This is shown by the figure below.

After the talk, there were a number of questions:

  • Art McDonald expressed his concern that if changing the method used to compute the flux produces a difference larger than 3%, a corresponding systematic uncertainty should be assigned to it. The speaker answered that the only change in the reanalysis is that they fit all the beta branches; the error envelope is not changed by the procedure.
  • another attendee asked what value of g_a is used, given that this parameter is crucial in the calculation of the cross sections. The speaker explained that they used the PDG value. It was remarked that g_a strongly influences the result, since the “pre-factor” k depends on (1+3 (|g_a/g_v|)^2).
  • Another attendee made a few additional comments concerning the CHOOZ measurement, which used BUGEY4 data to effectively measure a ratio. He also argued that, in the calculation of the spectra, if the energy scale is correct then the normalization cannot be correct.