
A Vision for Neutrino and Particle Physics at the South Pole

March 17, 2017

Athayde Marcondes de André (right) presented PINGU and the South Pole detectors for neutrino and particle physics. He started by discussing the measurement of neutrino oscillations with atmospheric neutrinos. If one does not assume unitarity of the PMNS matrix, there are nine parameters to measure. For the third row of the matrix, one must look for tau-neutrino appearance. OPERA and Super-Kamiokande measured it, and in both cases they saw too many tau-neutrino events. Although not statistically significant, this excess warrants precision measurements.

In IceCube, one gigaton of ice is instrumented, optimized for TeV-PeV neutrinos. At its center sits DeepCore, 10 megatons with denser instrumentation. This makes it possible to study neutrinos at a lower energy threshold, and hence oscillations; the surrounding detector provides good shielding and an effective veto.

The first DeepCore result was shown at Lake Louise this year. Based on a three-year data sample, they obtain a very competitive result for delta_m^2_23 and theta_23. One can improve the detector with more tightly packed strings and study tau-neutrino appearance; a proposal to the NSF has been put forth.

The signal for tau-neutrino appearance in IceCube-Gen2 Phase 1 is a dip in the L/E distribution at 2 to 4 in log10(L/E [km/GeV]), taken as a ratio to the standard oscillation signal. The speaker showed a breakdown of the systematic uncertainties involved. The tau-appearance measurement would constrain the flux to better than 10%.
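The location of that dip follows from the standard two-flavor oscillation formula; a minimal sketch (using assumed typical oscillation parameters, not numbers from the talk) places the first oscillation maximum inside the quoted window:

```python
import numpy as np

# Two-flavor atmospheric oscillation: P(nu_mu -> nu_tau) = sin^2(2*theta_23) * sin^2(1.27 * dm2 * L/E)
# Assumed standard parameter values (not from the talk): dm2 in eV^2, L/E in km/GeV.
dm2 = 2.5e-3          # eV^2, |Delta m^2_32|
sin2_2theta23 = 1.0   # maximal 2-3 mixing

loe = np.logspace(1, 5, 10000)                       # L/E in km/GeV
p_tau = sin2_2theta23 * np.sin(1.27 * dm2 * loe)**2  # tau appearance probability

# First oscillation maximum: 1.27 * dm2 * (L/E) = pi/2
first_max = np.pi / 2 / (1.27 * dm2)
print(np.log10(first_max))   # ~2.7, inside the quoted 2-4 log10(km/GeV) window
```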

Going beyond Phase 1, there is PINGU. By adding more strings one could see low-energy events much more clearly. There are a number of analysis goals. For the mass hierarchy, the signal is a resonance due to matter effects. The speaker showed maps in cosine of the zenith angle versus energy. The detector cannot differentiate neutrinos from antineutrinos, and must rely on the flux. The expected precision is a three-sigma measurement, but it can be better for some values of theta_23.



March 17, 2017

KM3NeT is a multi-site, deep-sea infrastructure with two components: one is ORCA, the other is called ARCA. Juergen Brunner (left) gave an overview of the project. A letter of intent was produced in 2016, covering schedules as well as technical and physics issues. The experiment has been selected for the ESFRI roadmap for research infrastructures.

ORCA is a volume of 5.7 megatons instrumented with 115 strings of PMT detectors, with 18 optical modules per string and 31 PMTs instrumenting each module. With this they aim for directional information, uniform coverage, and good background rejection.
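A quick tally of the instrumentation from the figures above:

```python
# ORCA channel count, from the numbers quoted in the talk
strings = 115
modules_per_string = 18
pmts_per_module = 31

modules = strings * modules_per_string   # optical modules in total
pmts = modules * pmts_per_module         # 3-inch PMTs in total
print(modules, pmts)                     # 2070 modules, 64170 PMTs
```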

At energies of 3-4 GeV there is a matter resonance for neutrinos crossing the Earth, which can be studied by ORCA. The speaker showed the topologies expected for electron and muon neutrinos in the sensitive volume. After triggering and atmospheric-muon rejection, they expect 17300 electron neutrinos and 24800 muon neutrinos, plus 3100 tau neutrinos and in addition 5300 neutral-current events.
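For orientation, the MSW resonance condition can be evaluated numerically. The sketch below uses assumed standard oscillation parameters and representative Earth densities (none of these numbers are from the talk); the few-GeV scale quoted above corresponds to the densest, core-crossing trajectories:

```python
# MSW resonance: Delta m^2_31 * cos(2*theta_13) = 2*sqrt(2)*G_F*N_e*E
# => E_res = Delta m^2_31 * cos(2*theta_13) / (2*sqrt(2)*G_F*N_e)
# Using sqrt(2)*G_F*N_e ~ 7.63e-14 eV per unit of (Y_e * rho[g/cm^3]).
dm2_31 = 2.5e-3        # eV^2, assumed standard value
cos_2theta13 = 0.956   # from sin^2(theta_13) ~ 0.022
YE = 0.5               # electrons per nucleon

def e_res_gev(rho):    # rho in g/cm^3
    v = 7.63e-14 * YE * rho              # eV, matter potential sqrt(2)*G_F*N_e
    return dm2_31 * cos_2theta13 / (2 * v) * 1e-9

print(e_res_gev(11.0))   # Earth-core density: resonance near 3 GeV
print(e_res_gev(4.5))    # mantle density: resonance near 7 GeV
```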

The angular resolution for shower and track events is similar. The energy resolution is better than 30% in the relevant energy range of several GeV, and is shown to be almost Gaussian. Measuring energy versus zenith angle makes it possible to see the different patterns expected for the two hierarchies. Brunner showed how this is blurred by the resolutions but remains quite observable for both electron and muon neutrinos.

The speaker also showed the discrimination of track-like and shower-like events using a random forest classifier. With that in hand, one can run pseudo-experiments in which fits to the oscillation parameters are made, to determine the experimental sensitivity. After three years, a three-sigma sensitivity to the mass hierarchy is achievable. At the same time, one can measure the two atmospheric parameters of 2-3 mixing; this compares favourably to the NOvA and T2K sensitivities.
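A toy version of such a track/shower separation, with entirely invented features and simulated events, just to illustrate the random-forest approach:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 5000

# Hypothetical discriminating features: track-like (nu_mu CC) events are more
# elongated and have longer reconstructed lengths than shower-like ones.
def simulate(elongation_mu, length_mu):
    return np.column_stack([
        rng.normal(elongation_mu, 1.0, n),   # shape-elongation proxy
        rng.normal(length_mu, 5.0, n),       # reconstructed track length [m]
    ])

x = np.vstack([simulate(2.0, 10.0), simulate(4.0, 25.0)])
y = np.repeat([0, 1], n)                     # 0 = shower-like, 1 = track-like

x_tr, x_te, y_tr, y_te = train_test_split(x, y, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(x_tr, y_tr)
print(clf.score(x_te, y_te))                 # accuracy on held-out events
```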

There is an additional physics program, which encompasses tests of the unitarity of the PMNS matrix, exotic-physics searches such as sterile neutrinos and non-standard interactions, Earth tomography studies, observation of transient cosmic phenomena, supernova monitoring, and Earth and sea science.

For tau-neutrino appearance, which tests PMNS unitarity and BSM theories, one can collect about 3000 charged-current events per year with the full ORCA, constraining the rate to within 10% in one year. This could be an early physics measurement of the detector.

The sensitivity to galactic supernovae is very good: 80% of them could be detected by ORCA. The detector could also be used for an independent search for dark matter, looking at tau-tau events at low WIMP masses.

In the far future they are also considering a neutrino beam from Protvino, with a 2600 km baseline to the ORCA site.

Status and Potential of JUNO

March 17, 2017

Vito Antonelli (left) discussed the JUNO experiment, focusing on a restricted set of topics.

Since the value of theta_13 is relatively large, the corrections it produces are sensitive to the "sign" of the mass hierarchy. To see this, one must look at the survival probability of electron antineutrinos, which contains a contribution that is mass-hierarchy dependent. One may make this explicit by rewriting the decrease in survival probability in terms of a phase-dependent term whose sign is opposite for the two hierarchies.
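In the standard three-flavor vacuum formula, the hierarchy enters only through the sign of Delta m^2_31; a minimal sketch (with assumed typical oscillation parameters, not the talk's numbers) shows that flipping that sign changes the survival probability by a small but nonzero amount:

```python
import numpy as np

# Electron-antineutrino survival probability in vacuum (standard three-flavor
# expression). The only difference between the two hierarchies is the sign
# convention of Delta m^2_31. Parameter values are assumed, not from the talk.
s12sq, s13sq = 0.307, 0.022
dm2_21 = 7.5e-5                       # eV^2

def p_ee(E_MeV, L_km=53.0, normal=True):
    dm2_31 = 2.5e-3 if normal else -(2.5e-3 - dm2_21)
    dm2_32 = dm2_31 - dm2_21
    d = lambda m2: 1.267 * m2 * L_km / (E_MeV * 1e-3)   # phase, E converted to GeV
    c13sq = 1 - s13sq
    return (1
            - c13sq**2 * 4 * s12sq * (1 - s12sq) * np.sin(d(dm2_21))**2
            - 4 * s13sq * c13sq * ((1 - s12sq) * np.sin(d(dm2_31))**2
                                   + s12sq * np.sin(d(dm2_32))**2))

E = np.linspace(2.0, 8.0, 1000)       # MeV, reactor-antineutrino range
diff = p_ee(E, normal=True) - p_ee(E, normal=False)
print(np.max(np.abs(diff)))           # small but nonzero: the hierarchy signature
```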


A slide from the talk, showing the different electron-antineutrino spectra obtained for the two hierarchies, in red and blue.

The 53 km baseline of JUNO was chosen to maximize its sensitivity to those effects. JUNO is a huge liquid-scintillator tank, with 20 kilotons of material and a 700 m rock overburden. JUNO will study the inverse beta decay of antineutrinos on protons, with the prompt positron signal accompanied by a delayed neutron-capture signal.

The main requirement for JUNO is high statistics: not being very close to the source, one needs a lot of mass. The key ingredient for sensitivity to the wiggles in the rate is a very good energy resolution. A muon veto system is also necessary to remove cosmogenic backgrounds.
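Why the resolution is so critical can be seen with a toy smearing exercise. The sketch below is purely illustrative (an idealized sin^2 wiggle term and Gaussian smearing, with two example resolution figures that are my assumptions): doubling the resolution parameter nearly washes out the fast hierarchy-carrying wiggle.

```python
import numpy as np

# Toy: damping of the fast Delta m^2_31 wiggle at L = 53 km by energy resolution.
dm2_31, L = 2.5e-3, 53.0                       # eV^2, km
E = np.linspace(2.0, 8.0, 4000)                # MeV
wiggle = np.sin(1.267 * dm2_31 * L / (E * 1e-3))**2

def smear(y, frac):
    """Gaussian smearing with sigma = frac * sqrt(E/MeV) MeV."""
    out = np.empty_like(y)
    for i, e in enumerate(E):
        w = np.exp(-0.5 * ((E - e) / (frac * np.sqrt(e)))**2)
        out[i] = np.sum(w * y) / np.sum(w)
    return out

win = (E > 3) & (E < 4)
for frac in (0.03, 0.06):                      # 3%/sqrt(E) vs 6%/sqrt(E)
    print(frac, np.ptp(smear(wiggle, frac)[win]))   # residual wiggle amplitude
```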

As JUNO looks at vacuum oscillations, it does not suffer from the uncertainty in the Earth density profile or from the ambiguity in the CP-violating phase. It also does not depend on the value of theta_13.

JUNO is a multi-purpose experiment: it will determine three mixing parameters with much improved accuracy with respect to currently known values, and it will study supernova bursts and diffuse supernova neutrinos, as well as geoneutrinos and solar neutrinos. For the mixing parameters, JUNO has the potential to reach sub-percent accuracy. For supernova neutrinos, at a distance of 10 kiloparsecs one expects thousands of events. For geoneutrinos, the main issue is to understand the radiogenic contributions of uranium and thorium to the total heat output of the Earth, which makes it possible to distinguish different geological models; here JUNO's big advantages will be its size and radiopurity. The signal produces events between 1 and 2.5 MeV, and they expect hundreds of such events, well separated from backgrounds.
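The supernova yield simply scales as the inverse square of the distance; a sketch with a hypothetical normalization (the "thousands of events at 10 kiloparsecs" quoted above, pinned here to an illustrative 5000):

```python
# Inverse-square scaling of supernova-neutrino yields with distance.
# N0 is a hypothetical normalization, consistent with "thousands at 10 kpc".
N0, D0 = 5000.0, 10.0        # events at reference distance D0 [kpc]

def expected_events(d_kpc):
    return N0 * (D0 / d_kpc)**2

for d in (1.0, 10.0, 50.0):  # nearby, Galactic-Center-ish, far edge of the Galaxy
    print(d, expected_events(d))
```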


For the solar models, JUNO will be able to look at the ratio of beryllium-7 and boron-8 fluxes, to study the metallicity problem. Antonelli showed spectra of the energy measurable by JUNO for ideal radiopurity, allowing the 7Be flux to be constrained. For the 8B flux, the main problem is to constrain the long-lived spallation radioisotopes.

The speaker then discussed the status of construction. The excavation work should be finished in a few months. A pilot plant has been installed to study the purification and the optical purity that can be achieved. Another issue concerns the photomultipliers, which were the subject of the last presentation on Thursday. In addition to the large PMTs, smaller ones will be interspersed, adding redundancy and improving the energy resolution.

Statistical and Systematical Issues for the Neutrino Mass Ordering

March 17, 2017

Luca Stanco (right) gave a critical look at the statistical analyses currently used to distinguish the normal and inverted mass hierarchies of neutrinos. In particular, he criticized the use of the delta-chisquared test statistic in the comparison of the two hypotheses, arguing that Wilks' theorem does not apply: the two hypotheses are not nested, and the observable is not distributed as a chisquare.

He also discussed the material contained in the poster by Matteo Tenti, covered in another article in this blog. This is summarized in arXiv:1606.09454v3.

A final point was made on whether the MH determination is a discovery or an exclusion business. For a discovery, the sensitivity is the estimate to quote, and one would then need a 5-sigma result. However, the process for the MH can also be seen as an exclusion, in which case a 95% CL statement could be sufficient. The estimator should in any case be properly computed with many toy Monte Carlo experiments, to properly evaluate its probability density function.
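Stanco's prescription, mapping out the test-statistic distribution with toys instead of assuming Wilks' theorem, can be sketched minimally (the four-bin spectra below are invented purely for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical binned expectations under the two mass orderings (illustrative only)
nh = np.array([100., 120., 90., 110.])
ih = np.array([105., 112., 96., 104.])

def delta_chi2(data):
    chi2 = lambda mu: np.sum((data - mu)**2 / mu)
    return chi2(ih) - chi2(nh)           # test statistic T = chi2(IH) - chi2(NH)

# Map out the distribution of T under the NH hypothesis with toy experiments,
# rather than assuming a chi-square shape (the hypotheses are not nested).
toys = np.array([delta_chi2(rng.poisson(nh)) for _ in range(20000)])
print(toys.mean(), toys.std())           # roughly Gaussian, not chi-square shaped
```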

In conclusion, Stanco gave an optimistic view: with the next generation of experiments the mass hierarchy will soon be known.

Friday Morning Program

March 17, 2017

Below is the program of this morning at Neutel 17.


Large-Area MCP-PMT and its Application at JUNO

March 16, 2017

Yifang Wang (right) gave a very different talk from the previous presentations: he discussed how the photomultiplier tubes for JUNO were chosen.

A few years ago they studied the possible experiments to constrain the mass hierarchy, and understood that they needed very good energy resolution in their detector. Compared with KamLAND, the largest liquid-scintillator detector available at the time, they were asking for a gain of a factor of five in photoelectrons per MeV. This boiled down to a required factor-of-two increase in the PMT collection and quantum efficiency alone.

There were a number of proposals for phototubes in 2009. A design with a reflective photocathode came from UC Davis; another idea was to replace the dynodes with a scintillator and an APD. Large-area picosecond photodetectors were also proposed. In 2008 a super-bialkali photocathode with high quantum efficiency had been proposed, and Hamamatsu had flyers claiming that their photocathodes could reach super-bialkali (SBA)-grade quantum efficiency. But there was no product to buy and test, the price was an unknown, and so was the quantum efficiency achievable for very large PMTs.

In the end they came up with a new proposal: instead of a phototube with only a transmissive photocathode, one puts a transmissive photocathode at the top and another, reflective photocathode layer at the bottom. This could fit the bill; however, nobody knew how to build tubes this way.


The design started with an 8-inch prototype, moving then to a 20-inch one. The phototube is coupled to a micro-channel plate (MCP), whose parameters need to be optimized for the assembly. The photocathode is also very complex, with thin layers of K-Cs-Sb. 20-inch phototubes can only be made by artisanal glass-blowers.

The production of prototypes ran into many issues: the gain of the MCPs is not constant, and the collection efficiency varies a lot with the incident angle. They had to come up with a new design, which uses only one set of MCP plates with a new surface treatment: electrons hitting the surface between the holes now have a chance of being scattered back into a hole, which greatly improves the collection efficiency (from 60% to 100%!). This comes at the price of worse timing.

The performance indicates a uniform 26% quantum efficiency, a small dark rate, and high detection efficiency. The downside is a 6 ns spread in the timing resolution. The purchase of phototubes for JUNO was decided based on these performances and risks: in the end they purchased 15k MCP-PMTs (75%) from NNVT and 5k dynode PMTs (25%) from Hamamatsu.

NNVT has started the production line, planning to produce 7k such phototubes per year, and the pilot production is good. In summary, driven by the JUNO requirements, a new type of PMT has been designed, whose performance is potentially better than that of dynode-based ones.

Cosmogenic Neutrinos at the Pierre Auger Observatory

March 16, 2017

Ines Valino (left) reported on searches for ultra-high-energy neutrinos from cosmic rays. Cosmogenic neutrinos come from the interaction of ultra-high-energy cosmic-ray (UHECR) protons above 50 EeV with CMB photons via the GZK mechanism. They travel in straight lines, undeflected by magnetic fields, so they point back directly to their originating source, and can thus reveal the sources of UHECRs at cosmological distances.

Two journal articles, PRD 91, 092008 (2015) and PRD 94, 122007 (2016), describe the latest results on the matter by the Pierre Auger Collaboration. The observatory is located in Malargue, Argentina. It is an array of 1600 water Cherenkov stations spanning a surface of 3000 km^2, overlooked by four fluorescence telescope sites. 10% of the cosmic rays are measured simultaneously by the two sets of devices.

Pierre Auger (see map, right) is not a dedicated neutrino observatory, as its main aim is to study UHECRs directly, but UHE neutrinos induce showers that can be distinguished from backgrounds. The identification concept is simple, exploiting the small cross section of neutrinos with matter. While protons, nuclei, and photons interact shortly after entering the atmosphere, neutrinos can interact at a much larger depth and initiate showers close to the ground, where the electromagnetic component is still large. Neutrino-induced events are thus characterized as inclined showers with a significant EM component, so-called "young showers".


Two main types of neutrino events are searched for in these showers: Earth-skimming neutrinos traveling slightly upward, which interact as they skim the Earth and produce a shower close to the detector; and downward-going neutrinos (see figure above). Additionally, they consider double-bang showers and down-going showers initiated by tau neutrinos.

The shower reconstruction is not optimized for horizontal showers, so these need a dedicated treatment. The observables are the elongated footprint; the apparent propagation velocity of the shower front at the ground, along the major shower axis; and the reconstructed incident angle.


They applied the identification criteria blindly to their data and found no neutrino candidate events in any of the analyses so far, so they can set a limit on the flux: at 90% CL they exclude a Poisson mean of 2.39 or more events in the analyzed data after selection cuts.
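For orientation, the textbook counting argument for zero observed events gives a number close to the quoted one (the small difference presumably reflects the collaboration's treatment of backgrounds and systematic uncertainties):

```python
import math

# 90% CL Poisson upper limit for zero observed events:
# require P(n = 0 | mu) = exp(-mu) = 0.10  =>  mu_up = ln(10)
mu_up = -math.log(0.10)
print(mu_up)   # ~2.30, to be compared with the 2.39 events quoted by Auger
```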

Finally, the speaker discussed the gravitational-wave signals observed by LIGO. The general consensus is that binary black-hole mergers do not produce an electromagnetic or neutrino counterpart; however, Fermi reported a signal coincident with the LIGO signal, so there was some interest in looking for neutrinos matching those events. Auger selected a 500-second window around each of the two events, plus a one-day window for an "afterglow" search.

As the sensitivity of the Auger observatory is limited to large zenith angles, the region of the sky to which it is sensitive varies in time. For the first merger event there was little overlap with the field of view; for the second there was good overlap. However, no events were seen in coincidence in either time window for the second GW signal. The constraints obtained on the flux are declination-dependent, and translate into a limit of less than 0.5 to 3.0 solar masses emitted as neutrinos, as a function of the source declination, for neutrino energies above 10^17 eV.