1. General notes about information in this file
NOTE: All background rate plots contained in this file in which only the non-tracking data was used used the wrong time to normalize by. Instead of using the active shutter open time of the non-tracking part, it used the active time of all the data. As such the background rates are about 1/20 too low (the active tracking time adds roughly 1/20 on top of the active non-tracking time, cf. the ratio in sec. 2.1). Those that are generated with all data (e.g. sec. 29.1.11.5 for the latest at the moment) use the correct numbers, as does any plot produced after the above date.
2. Reminder about data taking & detector properties
Recap of the InGrid data taking campaign.
Run-2: October 2017 - March 2018
Run-3: October 2018 - December 2018
 | Solar tracking [h] | Background [h] | Active tracking [h] | Active tracking (eventDuration) [h] | Active background [h] | Total time [h] | Active time [h] | Active [%] |
---|---|---|---|---|---|---|---|---|
Run-2 | 106.006 | 2401.43 | 93.3689 | 93.3689 | 2144.67 | 2507.43 | 2238.78 | 0.89285842 |
Run-3 | 74.2981 | 1124.93 | 67.0066 | 67.0066 | 1012.68 | 1199.23 | 1079.6 | 0.90024432 |
Total | 180.3041 | 3526.36 | 160.3755 | 160.3755 | 3157.35 | 3706.66 | 3318.38 | 0.89524801 |
Ratio of active background to active tracking time: 3156.8 / 159.8083 = 19.7536673627
Calibration data:
 | Calibration [h] | Active calibration [h] | Total time [h] | Active time [h] |
---|---|---|---|---|
Run-2 | 107.422 | 2.60139 | 107.422 | 2.60139 |
Run-3 | 87.0632 | 3.52556 | 87.0632 | 3.52556 |
 | solar tracking | background | calibration |
---|---|---|---|
Run-2 | 106 h | 2401 h | 107 h |
Run-3 | 74 h | 1125 h | 87 h |
Total | 180 h | 3526 h | 194 h |
These numbers can be obtained for example with ./../../CastData/ExternCode/TimepixAnalysis/Tools/writeRunList/writeRunList.nim by running it on Run-2 and Run-3 files. They correspond to the total time and not the active detector time!
The following detector features were used:
- \(\SI{300}{\nano\meter} \ce{SiN}\) entrance window available in Run-2 and Run-3
- central InGrid surrounded by 6 additional InGrids for background suppression of events
  - available in Run-2 and Run-3
- recording analog grid signals from central chip with an FADC for background suppression based on signal shapes and, more importantly, as trigger for events above \(\mathcal{O}(\SI{1.2}{\kilo\electronvolt})\) (include FADC spectrum somewhere?)
  - available in Run-2 and Run-3
- two veto scintillators:
- SCL (large "horizontal" scintillator pad) to veto events from cosmics or induced X-ray fluorescence photons (available in Run-3)
- SCS (small scintillator behind anode plane) to veto possible cosmics orthogonal to readout plane (available in Run-3)
As a table: Overview of working (\green{o}), mostly working (\orange{m}), not working (\red{x}) features
Feature | Run 2 | Run 3 |
---|---|---|
Septemboard | \green{o} | \green{o} |
FADC | \orange{m} | \green{o} |
Veto scinti | \red{x} | \green{o} |
SiPM | \red{x} | \green{o} |
2.1. Calculate total tracking and background times used above
UPDATE: The numbers in this section are also outdated by now. The most up to date ones are in ./../../phd/thesis.html. Those numbers appear in the table in the section above now!
The table above is generated by using the ./../../CastData/ExternCode/TimepixAnalysis/Tools/writeRunList/writeRunList.nim tool:
writeRunList -b ~/CastData/data/DataRuns2017_Reco.h5 -c ~/CastData/data/CalibrationRuns2017_Reco.h5
writeRunList -b ~/CastData/data/DataRuns2018_Reco.h5 -c ~/CastData/data/CalibrationRuns2018_Reco.h5
This produces the following table:
 | Solar tracking [h] | Background [h] | Active tracking [h] | Active background [h] | Total time [h] | Active time [h] | Active [%] |
---|---|---|---|---|---|---|---|
Run-2 | 106.006 | 2391.16 | 92.8017 | 2144.12 | 2497.16 | 2238.78 | 0.89653046 |
Run-3 | 74.2981 | 1124.93 | 67.0066 | 1012.68 | 1199.23 | 1079.6 | 0.90024432 |
Total | 180.3041 | 3516.09 | 159.8083 | 3156.8 | 3696.39 | 3318.38 | 0.89773536 |
(use org-table-sum C-c + on each column to compute the total).
2.1.1. Outdated numbers
The numbers below were the ones obtained from a faulty calculation. See ./../journal.org#sec:journal:2023_07_08:missing_time
These numbers yielded the following table:
 | Solar tracking [h] | Background [h] | Active tracking [h] | Active background [h] | Total time [h] | Active time [h] | Active [%] |
---|---|---|---|---|---|---|---|
Run-2 | 106.006 | 2401.43 | 94.1228 | 2144.67 | 2507.43 | 2238.78 | 0.89285842 |
Run-3 | 74.2981 | 1124.93 | 66.9231 | 1012.68 | 1199.23 | 1079.60 | 0.90024432 |
Total | 180.3041 | 3526.36 | 161.0460 | 3157.35 | 3706.66 | 3318.38 | 0.89524801 |
Run-2:
./writeRunList -b ~/CastData/data/DataRuns2017_Reco.h5 -c ~/CastData/data/CalibrationRuns2017_Reco.h5
Type: rtBackground
total duration: 14 weeks, 6 days, 11 hours, 25 minutes, 59 seconds, 97 milliseconds, 615 microseconds, and 921 nanoseconds
In hours: 2507.433082670833
active duration: 2238.783333333333
trackingDuration: 4 days, 10 hours, and 20 seconds
In hours: 106.0055555555556
active tracking duration: 94.12276972527778
nonTrackingDuration: 14 weeks, 2 days, 1 hour, 25 minutes, 39 seconds, 97 milliseconds, 615 microseconds, and 921 nanoseconds
In hours: 2401.427527115278
active background duration: 2144.666241943055
Solar tracking [h] | Background [h] | Active tracking [h] | Active background [h] | Total time [h] | Active time [h] |
---|---|---|---|---|---|
106.006 | 2401.43 | 94.1228 | 2144.67 | 2507.43 | 2238.78 |
Type: rtCalibration
total duration: 4 days, 11 hours, 25 minutes, 20 seconds, 453 milliseconds, 596 microseconds, and 104 nanoseconds
In hours: 107.4223482211111
active duration: 2.601388888888889
trackingDuration: 0 nanoseconds
In hours: 0.0
active tracking duration: 0.0
nonTrackingDuration: 4 days, 11 hours, 25 minutes, 20 seconds, 453 milliseconds, 596 microseconds, and 104 nanoseconds
In hours: 107.4223482211111
active background duration: 2.601391883888889
Solar tracking [h] | Background [h] | Active tracking [h] | Active background [h] | Total time [h] | Active time [h] |
---|---|---|---|---|---|
0 | 107.422 | 0 | 2.60139 | 107.422 | 2.60139 |
Run-3:
./writeRunList -b ~/CastData/data/DataRuns2018_Reco.h5 -c ~/CastData/data/CalibrationRuns2018_Reco.h5
Type: rtBackground
total duration: 7 weeks, 23 hours, 13 minutes, 35 seconds, 698 milliseconds, 399 microseconds, and 775 nanoseconds
In hours: 1199.226582888611
active duration: 1079.598333333333
trackingDuration: 3 days, 2 hours, 17 minutes, and 53 seconds
In hours: 74.29805555555555
active tracking duration: 66.92306679361111
nonTrackingDuration: 6 weeks, 4 days, 20 hours, 55 minutes, 42 seconds, 698 milliseconds, 399 microseconds, and 775 nanoseconds
In hours: 1124.928527333056
active background duration: 1012.677445774444
Solar tracking [h] | Background [h] | Active tracking [h] | Active background [h] | Total time [h] | Active time [h] |
---|---|---|---|---|---|
74.2981 | 1124.93 | 66.9231 | 1012.68 | 1199.23 | 1079.6 |
Type: rtCalibration
total duration: 3 days, 15 hours, 3 minutes, 47 seconds, 557 milliseconds, 131 microseconds, and 279 nanoseconds
In hours: 87.06321031416667
active duration: 3.525555555555556
trackingDuration: 0 nanoseconds
In hours: 0.0
active tracking duration: 0.0
nonTrackingDuration: 3 days, 15 hours, 3 minutes, 47 seconds, 557 milliseconds, 131 microseconds, and 279 nanoseconds
In hours: 87.06321031416667
active background duration: 3.525561761944445
Solar tracking [h] | Background [h] | Active tracking [h] | Active background [h] | Total time [h] | Active time [h] |
---|---|---|---|---|---|
0 | 87.0632 | 0 | 3.52556 | 87.0632 | 3.52556 |
2.2. Shutter settings
The data taken in 2017 uses Timepix shutter settings of 2 / 32 (very long / 32), which results in frames of length ~2.4 s.
From 2018 on this was reduced to 2 / 30 (very long / 30), which is closer to 2.2 s. In hindsight, the exact reason for the change is not clear to me.
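For reference, a minimal sketch of where these frame lengths come from, assuming the standard Timepix shutter-time formula \(t = 46 \cdot 256^n \cdot m / f_{\text{clk}}\) with the usual 40 MHz clock ("very long" corresponds to \(n = 2\)):

import math

proc shutterTime(n, m: int, fClk = 40e6): float =
  ## Timepix shutter open time in seconds (assumed formula, see above)
  46.0 * pow(256.0, n.float) * m.float / fClk

echo "2 / 32: ", shutterTime(2, 32), " s" # ≈ 2.41 s
echo "2 / 30: ", shutterTime(2, 30), " s" # ≈ 2.26 s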
- [ ] NOTE: Add event mean duration by run (e.g. from ./../../CastData/ExternCode/TimepixAnalysis/Tools/outerChipActivity/outerChipActivity.nim) here to showcase!
2.3. Data backups
Data is found in the following places:
- /data directory on tpc19
- on tpc00
- tpc06 is the lab computer that was used for testing etc. It contains data for the development, sparking etc. Under /data/tpc/data it contains a huge amount of backed up runs, including the whole sparking history etc. It's about 400 GB of data and should be fully backed up soon. Otherwise we might lose it forever.
- my laptop & desktop at home contain most data
2.4. Detector documentation
The relevant IMPACT form, which contains the detector documentation, is https://impact.cern.ch/impact/secure/?place=editActivity:101629. A PDF version of this document can be found at
The version uploaded indeed matches the latest status of the document in ./Detector/CastDetectorDocumentation.html, including the funny notes, comments and TODOs. :)
2.5. Timeline of CAST data taking
- [-] add dates of each calibration
- [X] add Geometer measurements here
- [X] add time of scintillator calibration
  - ref: https://espace.cern.ch/cast-share/elog/Lists/Posts/Post.aspx?ID=3420 and
- June/July detector brought to CERN
- before alignment of LLNL telescope by Jaime
- laser alignment (see )
- vacuum leak tests & installation of detector (see: )
- after installation of lead shielding
- Geometer measurement of InGrid alignment for X-ray finger run
- - : first X-ray finger run (not useful to determine position of detector, due to dismount after)
- after: dismounted to make space for KWISP
- Remount in September 2017 -
- installation from to
- Alignment with geometers for data taking, magnet warm and under vacuum.
- weekend: (ref: ./../Talks/CCM_2017_Sep/CCM_2017_Sep.html)
- calibration (but all wrong)
- water cooling stopped working
- next week: try fix water cooling
- quick couplings: rubber disintegrating causing cooling flow to go to zero
- attempt to clean via compressed air
- final cleaning : wrong tube, compressed detector…
- detector window exploded…
- show image of window and inside detector
- detector investigation in CAST CDL see images & timestamps of images
- study of contamination & end of Sep CCM
- detector back to Bonn, fixed
- weekend: (ref: ./../Talks/CCM_2017_Sep/CCM_2017_Sep.html)
- detector installation before first data taking
- reinstall in October for start of data taking on 30th Oct 2017
- remount start
- Alignment with Geometers (after removal & remounting due to window accident) for data taking. Magnet cold and under vacuum.
- calibration of scintillator veto paddle in RD51 lab
- remount installation finished incl. lead shielding (mail "InGrid status update" to Satan Forum on )
- <data taking period from to in 2017>
  - between runs 85 & 86: fix of src/waitconditions.cpp TOS bug, which caused scinti triggers to be written in all files up to the next FADC trigger
  - run 101 was the first with FADC noise significant enough to make me change settings:
    - Diff: 50 ns -> 20 ns (one to the left)
    - Coarse gain: 6x -> 10x (one to the right)
  - run 109: crazy amounts of noise on FADC
  - run 111: stopped early. Tried to debug the noise and blew a fuse in the gas interlock box by connecting the NIM crate to the wrong power cable
  - run 112: changed FADC settings again due to noise:
    - integration: 50 ns -> 100 ns. This was done at around
    - integration: 100 ns -> 50 ns again at around .
  - run 121: Jochen set the FADC main amplifier integration time from 50 -> 100 ns again, around
- <data taking period from to beginning 2018>
  - start of 2018 period: temperature sensor broken!
  - issue with power supply causing severe drop in gain / increase in THL (unclear, #hits in 55Fe dropped massively; background eventually only saw random active pixels). Fixed by replugging all power cables and improving the grounding situation. IIRC: this was later identified to be an issue with the grounding between the water cooling system and the detector. (ref: ./../Mails/cast_power_supply_problem_thlshift/power_supply_problem.html)
  - to : issues with moving THL values & weird detector behavior. Changed THL values temporarily as an attempted fix, but in the end it didn't help, the problem got worse. (ref: gmail "Update 17/02" and )
  - by everything was fixed and the detector was running correctly again. 2 runs: were missed because of this.
- removal of veto scintillator and lead shielding
- X-ray finger run 2 on . This run is actually useful to determine the position of the detector.
- Geometer measurement after warming up magnet and not under vacuum. Serves as reference for difference between vacuum & cold on !
- detector fully removed and taken back to Bonn
- installation started . Mounting was more complicated than intended due to the lead shielding support (see mails "ingrid installation" including Damien Bedat)
- shielding fixed by and detector installed the next couple of days
- Alignment with Geometers for data taking. Magnet warm and not under vacuum.
- data taking was supposed to start end of September, but delayed.
- detector had issue w/ power supply, finally fixed on . Issue was a bad soldering joint on the Phoenix connector on the intermediate board. Note: See chain of mails titled "Unser Detektor…" starting on for more information. Detector behavior was weird from beginning Oct. Weird behavior seen on the voltages of the detector. Initial worry: power supply dead or supercaps on it. Replaced power supply (Phips brought it a few days after), but no change.
- data taking starts
- run 297, 298 showed lots of noise again, disabled FADC on (went to CERN next day)
- data taking ends
- runs that were missed: ; the last one was not a full run.
- [ ] CHECK THE ELOG FOR WHAT THE LAST RUN WAS ABOUT
- detector mounted in CAST Detector Lab
- data taking from to .
- detector dismounted and taken back to Bonn
- ref: ./../outerRingNotes.html
- calibration measurements of outer chips with a 55Fe source using a custom anode & window
- between and calibrations of each outer chip using Run 2 and Run 3 detector calibrations
- start of a new detector calibration
- another set of measurements between to with a new set of calibrations
2.6. Detector alignment at CAST [/]
There were 3 different kinds of alignments:
- laser alignment. Done in July 2017 and 27/04/2018 (see mail of Theodoros for the latter, "alignment of LLNL telescope")
- images:
- the spot is the one on the vertical line from the center down! The others are just refractions. It was easier to see by eye.
The right one is the alignment as it was after data taking in Apr 2018. The left is after a slight realignment by loosening the screws and moving a bit. Theodoros explanation about it from the mail listed above:
Hello,
After some issues the geometres installed the aligned laser today. Originally Jaime and I saw the spot as seen at the right image. It was +1mm too high. We rechecked Sebastian’s images from the Xray fingers and confirmed that his data indicated a parallel movement of ~1.4 mm (detector towards airport). We then started wondering whether there are effects coming from the target itself or the tolerances in the holes of the screws. By unscrewing it a bit it was clear that one can easily reposition it with an uncertainty of almost +-1mm. For example in the left picture you can see the new position we put it in, in which the spot is almost perfectly aligned.
We believe that the source of these shifts is primarily the positioning of the detector/target on the plexiglass drum. As everything else seems to be aligned, we do not need to realign. On Monday we will lock the manipulator arms and recheck the spot. Jaime will change his tickets to leave earlier.
Thursday-Friday we can dismount the shielding support to send it for machining and the detector can go to Bonn.
With this +-1mm play in the screw holes in mind (and the possible delays from the cavities) we should seriously consider doing an X-ray finger run right after the installation of InGRID which may need to be shifted accordingly. I will try to adjust the schedule next week.
Please let me know if you have any further comments.
Cheers,
Theodoros
- images:
- geometer measurements. 4 measurements performed, with EDMS links (the links are fully public!):
- 11.07.2017 https://edms.cern.ch/document/1827959/1
- 14.09.2017 https://edms.cern.ch/document/2005606/1
- 26.10.2017 https://edms.cern.ch/document/2005690/1
- 23.07.2018 https://edms.cern.ch/document/2005895/1
For geometer measurements in particular search gmail archive for Antje Behrens (Antje.Behrens@cern.ch) or "InGrid alignment" The reports can also be found here: ./CAST_Alignment/
- X-ray finger measurements, 2 runs:
  - [ ] 13.07.2017, run number 21 LINK DATA
  - [ ] 20.04.2018, run number 189, after first part of data taking in 2018. LINK DATA
2.7. X-ray finger
The X-ray finger used at CAST is an Amptek COOL-X:
https://www.amptek.com/internal-products/obsolete-products/cool-x-pyroelectric-x-ray-generator
The relevant plots for our purposes are shown in:
In addition the simple Monte Carlo simulation of the expected signal (written in Clojure) is found in: ./../Code/CAST/XrayFinderCalc/
2 X-ray finger runs:
- [ ] 13.07.2017, run number 21 LINK DATA
- [ ] 20.04.2018, run number 189, after first part of data taking in 2018. LINK DATA
Important note: The detector was removed directly after the first of these X-ray measurements! As such, the measurement has no bearing on the real position the detector was in during the first data taking campaign.
The X-ray finger run is used both to determine a center position of the detector, as well as determine the rotation of the graphite spacer of the LLNL telescope, i.e. the rotation of the telescope.
- [X] Determine the rotation angle of the graphite spacer from the X-ray finger data -> do now. X-ray finger run: -> It comes out to 14.17°! But for run 21 (between which the detector was dismounted of course): -> only 11.36°! That's a huge uncertainty of 3°, given the detector was only dismounted!
NOTE: For more information including simulations, for now see here: ./../journal.html from the day of , sec. [BROKEN LINK: sec:journal:2023_09_05_xray_finger].
2.7.1. Run 189
The below is copied from thesis.org.
I copied the X-ray finger runs from tpc19 over to ./../../CastData/data/XrayFingerRuns/. The run of interest is mainly the run 189, as it's the run done with the detector installed as in 2017/18 data taking.
cd /dev/shm # store here for fast access & temporary
cp ~/CastData/data/XrayFingerRuns/XrayFingerRun2018.tar.gz .
tar xzf XrayFingerRun2018.tar.gz
raw_data_manipulation -p Run_189_180420-09-53 --runType xray --out xray_raw_run189.h5
reconstruction -i xray_raw_run189.h5 --out xray_reco_run189.h5
# make sure `config.toml` for reconstruction uses `default` clustering!
reconstruction -i xray_reco_run189.h5 --only_charge
reconstruction -i xray_reco_run189.h5 --only_gas_gain
reconstruction -i xray_reco_run189.h5 --only_energy_from_e
plotData --h5file xray_reco_run189.h5 --runType=rtCalibration -b bGgPlot --ingrid --occupancy --config plotData.toml
which gives us the following plot:
With many more plots here: ./../Figs/statusAndProgress/xrayFingerRun/run189/
One very important plot: -> So the peak is at around 3 keV instead of about 8 keV, which the plot from Amptek in the section above claims.
- [ ] Maybe at CAST they changed the target?
2.8. Detector window
The window layout is shown in fig. 2.
The sizes are thus:
- Diameter: \(\SI{14}{\mm}\)
- 4 strongbacks of:
- width: \(\SI{0.5}{\mm}\)
- thickness: \(\SI{200}{\micro\meter}\)
- \(\SI{20}{\nm}\) Al coating
- they get wider towards the very outside
Let's compute the amount of occlusion by the strongbacks. Using code based on Johanna's raytracer:
## Super dumb MC sampling over the entrance window using Johanna's code from `raytracer2018.nim`
## to check the coverage of the strongback of the 2018 window
import ggplotnim, random, chroma

proc colorMe(y: float): bool =
  const
    stripDistWindow = 2.3  # mm
    stripWidthWindow = 0.5 # mm
  if abs(y) > stripDistWindow / 2.0 and
     abs(y) < stripDistWindow / 2.0 + stripWidthWindow or
     abs(y) > 1.5 * stripDistWindow + stripWidthWindow and
     abs(y) < 1.5 * stripDistWindow + 2.0 * stripWidthWindow:
    result = true
  else:
    result = false

proc sample() =
  randomize(423)
  const nmc = 100_000
  let black = color(0.0, 0.0, 0.0)
  var dataX = newSeqOfCap[float](nmc)
  var dataY = newSeqOfCap[float](nmc)
  var inside = newSeqOfCap[bool](nmc)
  for idx in 0 ..< nmc:
    let x = rand(-7.0 .. 7.0)
    let y = rand(-7.0 .. 7.0)
    if x*x + y*y < 7.0 * 7.0:
      dataX.add x
      dataY.add y
      inside.add colorMe(y)
  let df = toDf(dataX, dataY, inside)
  echo "A fraction of ", df.filter(f{`inside` == true}).len / df.len, " is occluded by the strongback"
  let dfGold = df.filter(f{abs(idx(`dataX`, float)) <= 2.25 and abs(idx(`dataY`, float)) <= 2.25})
  echo "Gold region: A fraction of ", dfGold.filter(f{`inside` == true}).len / dfGold.len, " is occluded by the strongback"
  ggplot(df, aes("dataX", "dataY", fill = "inside")) +
    geom_point() +
    # draw the gold region as a black rectangle
    geom_linerange(aes = aes(y = 0, x = 2.25, yMin = -2.25, yMax = 2.25), color = some(black)) +
    geom_linerange(aes = aes(y = 0, x = -2.25, yMin = -2.25, yMax = 2.25), color = some(black)) +
    geom_linerange(aes = aes(x = 0, y = 2.25, xMin = -2.25, xMax = 2.25), color = some(black)) +
    geom_linerange(aes = aes(x = 0, y = -2.25, xMin = -2.25, xMax = 2.25), color = some(black)) +
    xlab("x [mm]") + ylab("y [mm]") +
    ggsave("/home/basti/org/Figs/statusAndProgress/detector/SiN_window_occlusion.png", width = 1150, height = 1000)
sample()
A fraction of 0.16170429252782195 is occluded by the strongback
Gold region: A fraction of 0.2215316951907448 is occluded by the strongback

(The exact gold-region value should be 22.2 %, based on two \SI{0.5}{\mm} strongbacks within a square of \SI{4.5}{\mm} long sides: 2 · 0.5 / 4.5 = 22.2 %.)
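These MC numbers can also be cross-checked analytically (a sketch, not part of the original computation): the area of a horizontal band of a circle of radius \(R\) between \(y = a\) and \(y = b\) is \(F(b) - F(a)\) with \(F(y) = y \sqrt{R^2 - y^2} + R^2 \arcsin(y/R)\).

import math

const R = 7.0 # mm, window radius

proc F(y: float): float =
  ## antiderivative of the chord length 2·sqrt(R² - y²)
  y * sqrt(R*R - y*y) + R*R * arcsin(y / R)

proc bandArea(a, b: float): float = F(b) - F(a)

# strongback strips at |y| ∈ (1.15, 1.65) and (3.95, 4.45) mm,
# hence the factor 2 for the two signs of y
let occluded = 2.0 * (bandArea(1.15, 1.65) + bandArea(3.95, 4.45))
echo "Full window: ", occluded / (PI * R * R) # ≈ 0.162, matches the MC above
echo "Gold region: ", 2.0 * 0.5 / 4.5         # ≈ 0.222 exactly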
So to summarize it in a table, tab 1 and as a figure in fig. 3.
Region | Occlusion / % |
---|---|
Full | 16.2 |
Gold | 22.2 |
The X-ray absorption properties were obtained using the online calculator from here: https://henke.lbl.gov/optical_constants/
The relevant resource files are found in:
- 200μm Si strongback: ./../resources/Si_density_2.33_thickness_200microns.txt
- 300nm SiN: ./../resources/Si3N4_density_3.44_thickness_0.3microns.txt
- 20nm Al: ./../resources/Al_20nm_transmission_10keV.txt
- 3cm Ar: ./../resources/transmission-argon-30mm-1050mbar-295K.dat
Let's create a plot of:
- window transmission
- gas absorption
- convolution of both
import ggplotnim
let al = readCsv("/home/basti/org/resources/Al_20nm_transmission_10keV.txt", sep = ' ', header = "#")
let siN = readCsv("/home/basti/org/resources/Si3N4_density_3.44_thickness_0.3microns.txt", sep = ' ')
let si = readCsv("/home/basti/org/resources/Si_density_2.33_thickness_200microns.txt", sep = ' ')
let argon = readCsv("/home/basti/org/resources/transmission-argon-30mm-1050mbar-295K.dat", sep = ' ')
var df = newDataFrame()
df["300nm SiN"] = siN["Transmission", float]
df["200μm Si"] = si["Transmission", float]
df["30mm Ar"] = argon["Transmission", float][0 .. argon.high - 1]
df["20nm Al"] = al["Transmission", float]
df["Energy [eV]"] = siN["PhotonEnergy(eV)", float]
df = df.mutate(f{"Energy [keV]" ~ idx("Energy [eV]") / 1000.0},
               f{"30mm Ar Abs." ~ 1.0 - idx("30mm Ar")},
               f{"Efficiency" ~ idx("30mm Ar Abs.") * idx("300nm SiN") * idx("20nm Al")},
               f{"Eff • SB • ε" ~ `Efficiency` * 0.78 * 0.8}) # strongback occlusion of 22% and ε = 80%
  .drop(["Energy [eV]", "Ar"])
  .gather(["300nm SiN", "Efficiency", "Eff • SB • ε", "30mm Ar Abs.", "200μm Si", "20nm Al"],
          key = "Type", value = "Efficiency")
echo df
ggplot(df, aes("Energy [keV]", "Efficiency", color = "Type")) +
  geom_line() +
  ggtitle("Detector efficiency of combination of 300nm SiN window and 30mm of Argon absorption, including ε = 80% and strongback occlusion of 22%") +
  margin(top = 1.5) +
  ggsave("/home/basti/org/Figs/statusAndProgress/detector/window_plus_argon_efficiency.pdf", width = 800, height = 600)
Fig. 4 shows the combined efficiency of the SiN window, the \SI{20}{\nm} of Al coating and the gas \SI{30}{\mm} of Argon absorption and in addition the software efficiency (at ε = 80%) and strongback occlusion (22% in gold region).
The following code exists to plot the window transmissions for the window material in combination with the axion flux in:
It produces the combined plot as shown in fig. 5.
2.8.1. Window layout with correct window rotation
## Super dumb MC sampling over the entrance window using Johanna's code from `raytracer2018.nim`
## to check the coverage of the strongback of the 2018 window
import ggplotnim, chroma, unchained

proc hitsStrongback(y: float): bool =
  const
    stripDistWindow = 2.3  # mm
    stripWidthWindow = 0.5 # mm
  if abs(y) > stripDistWindow / 2.0 and
     abs(y) < stripDistWindow / 2.0 + stripWidthWindow or
     abs(y) > 1.5 * stripDistWindow + stripWidthWindow and
     abs(y) < 1.5 * stripDistWindow + 2.0 * stripWidthWindow:
    result = true
  else:
    result = false

proc sample() =
  let black = color(0.0, 0.0, 0.0)
  let nPoints = 256
  var xs = linspace(-7.0, 7.0, nPoints)
  var dataX = newSeqOfCap[float](nPoints^2)
  var dataY = newSeqOfCap[float](nPoints^2)
  var inside = newSeqOfCap[bool](nPoints^2)
  for x in xs:
    for y in xs:
      if x*x + y*y < 7.0 * 7.0:
        when false:
          dataX.add x * cos(30.°.to(Radian)) + y * sin(30.°.to(Radian))
          dataY.add y * cos(30.°.to(Radian)) - x * sin(30.°.to(Radian))
          inside.add hitsStrongback(y)
        else:
          dataX.add x
          dataY.add y
          # rotate current y back, such that we can analyze in a "non rotated" coord. syst
          let yRot = y * cos(-30.°.to(Radian)) - x * sin(-30.°.to(Radian))
          inside.add hitsStrongback(yRot)
  let df = toDf(dataX, dataY, inside)
  ggplot(df, aes("dataX", "dataY", fill = "inside")) +
    geom_point() +
    # draw the gold region as a black rectangle
    geom_linerange(aes = aes(y = 0, x = 2.25, yMin = -2.25, yMax = 2.25), color = some(black)) +
    geom_linerange(aes = aes(y = 0, x = -2.25, yMin = -2.25, yMax = 2.25), color = some(black)) +
    geom_linerange(aes = aes(x = 0, y = 2.25, xMin = -2.25, xMax = 2.25), color = some(black)) +
    geom_linerange(aes = aes(x = 0, y = -2.25, xMin = -2.25, xMax = 2.25), color = some(black)) +
    xlab("x [mm]") + ylab("y [mm]") +
    xlim(-7, 7) + ylim(-7, 7) +
    ggsave("/home/basti/org/Figs/statusAndProgress/detector/SiN_window_occlusion_rotated.png", width = 1150, height = 1000)
sample()
Which gives us:
2.9. General event & outer chip information
Running ./../../CastData/ExternCode/TimepixAnalysis/Tools/outerChipActivity/outerChipActivity.nim we can extract information about the total number of events and the activity on the center chip vs. the outer chips.
For both the 2017/18 data (run 2) and the end of 2018 data (run 3) we will now look at the following (a sketch of the classification logic follows the list):
- number of total events
- number of events with any activity (> 3 hits)
- number of events with activity only on center chip
- number of events with activity on center and outer chips (but not only center)
- number of events with activity only on outer chips
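A minimal sketch of this classification (hypothetical code, not the actual outerChipActivity implementation; the center chip index is an assumption):

proc classify(hits: array[7, int]): string =
  ## classify a septemboard event by which chips saw activity (> 3 hits)
  const center = 3 # assumption: index of the central chip
  let centerActive = hits[center] > 3
  var outerActive = false
  for i in 0 ..< hits.len:
    if i != center and hits[i] > 3:
      outerActive = true
  if centerActive and outerActive: result = "center + outer"
  elif centerActive: result = "only center"
  elif outerActive: result = "only outer"
  else: result = "no activity"

echo classify([0, 0, 0, 120, 0, 0, 0]) # -> "only center"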
UPDATE: The reason for the two peaks in the Run 2 data of the event duration histogram is that we accidentally used run settings 2/32 in 2017 and 2/30 in 2018! (This does not explain the 0 time events of course.)

2.9.1. 2017/18 (Run 2)
Number of total events: 3758960
Number of events without center: 1557934 | 41.44587864728542%
Number of events only center: 23820 | 0.633685913124907%
Number of events with center activity and outer: 984319 | 26.185939728009878%
Number of events any hit events: 2542253 | 67.6318183752953%
Mean of event durations: 2.144074329358038
Interestingly, the histogram of event durations looks as follows, fig. 6.
We can cut to the range between 0 and 2.2 s, fig. 7.
The peak at 0 is simply a peak at exact 0 values (the previous figure only removed exact 0 values).
What does the energy distribution look like for these events? Fig. 8.
And the same split up per run (to make sure it's not one bad run), fig. 9.
Hmm. I suppose it's a bug in the firmware that the event duration is not correctly returned? Could happen if FADC triggers and for some reason 0 clock cycles are returned. This could be connected to the weird "hiccups" the readout sometimes does (when the FADC doesn't actually trigger for a full event). Maybe these are the events right after?
- Noisy pixels
In this run there are a few noisy pixels that need to be removed before background rates are calculated. These are listed in tab. 2 (a small masking sketch follows the table).
Table 2: Number of counts noisy pixels in the 2017/18 dataset contribute to the number of background clusters remaining. The total number of noise clusters amounts to 1265 in this case (it potentially depends on the clustering algorithm). These must be removed for a sane background level (and the area must be removed from the size of the active area in this dataset). NOTE: When using these numbers, make sure the x and y coordinates are not accidentally inverted.

x | y | Count after logL |
---|---|---|
64 | 109 | 7 |
64 | 110 | 9 |
65 | 108 | 30 |
66 | 108 | 50 |
67 | 108 | 33 |
65 | 109 | 74 |
66 | 109 | 262 |
67 | 109 | 136 |
68 | 109 | 29 |
65 | 110 | 90 |
66 | 110 | 280 |
67 | 110 | 139 |
65 | 111 | 24 |
66 | 111 | 60 |
67 | 111 | 34 |
67 | 112 | 8 |

\clearpage
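The masking sketch mentioned above (hypothetical helper, not the actual analysis code), collecting the pixels of tab. 2 so they can be dropped when counting background clusters:

import sets

let noisyPixels = toHashSet([
  (64, 109), (64, 110), (65, 108), (66, 108), (67, 108), (65, 109),
  (66, 109), (67, 109), (68, 109), (65, 110), (66, 110), (67, 110),
  (65, 111), (66, 111), (67, 111), (67, 112)])

proc isNoisy(x, y: int): bool = (x, y) in noisyPixels

echo isNoisy(66, 109) # true -> drop this cluster center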
2.9.2. End of 2018 (Run 3)
NOTE: In Run 3 we only used 2/30 as run settings! Hence a single peak in event duration.

And the same plots and numbers for 2018:
Number of total events: 1837330
Number of events without center: 741199 | 40.34109278137297%
Number of events only center: 9462 | 0.514986420512374%
Number of events with center activity and outer: 470188 | 25.590830172043127%
Number of events any hit events: 1211387 | 65.9319229534161%
Mean of event durations: 2.1157526632342307
2.10. CAST maximum angle from the sun
A question that came up today. What is the maximum difference in grazing angle that we could see on the LLNL telescope behind CAST for an axion coming from the Sun?
The Sun has an apparent size of ~32 arcminutes https://en.wikipedia.org/wiki/Sun.
If the dominant axion emission comes from the inner 10% of the radius, that's still 3 arcminutes, which is \(\SI{0.05}{°}\).
The first question is whether the magnet bore appears larger or smaller than this size from one end to the other:
import unchained, math
const L = 9.26.m # Magnet length
const d = 4.3.cm # Magnet bore
echo "Maximum angle visible through bore = ", arctan(d / L).Radian.to(°)
so \SI{0.266}{°}, which is larger than the apparent size of the solar core.
That means the maximum angle we can see at a specific point on the telescope is up to the apparent size of the core, namely \(\SI{0.05}{°}\).
2.11. LLNL telescope
IMPORTANT: The multilayer coatings of the LLNL telescope are carbon at the top and platinum at the bottom, despite "Pt/C" being used to refer to them. See fig. 4.11 in the PhD thesis .
UPDATE: I randomly stumbled on a PhD thesis about the NuSTAR telescope! It validates some things I have been wondering about. See sec. 2.11.2.

UPDATE: Jaime sent me two text files today:
- ./../resources/LLNL_telescope/cast20l4_f1500mm_asDesigned.txt
- ./../resources/LLNL_telescope/cast20l4_f1500mm_asBuilt.txt
both of which are quite different from the numbers in Anders Jakobsen's thesis! These do reproduce a focal length of \(\SI{1500}{mm}\) instead of \(\SI{1530}{mm}\) when calculating it using the Wolter equation (when not using \(R_3\), but rather the virtual reflection point!).
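A sketch of that calculation (my own cross-check, assuming the usual Wolter I relation \(f = \rho / \tan(4\alpha)\) with \(\rho\) the radius at the intersection plane; layer 1 numbers taken from the tables below):

import math

let alpha = 0.579.degToRad # layer 1 graze angle [°] from the DTU thesis table below
let rhoMid = 62.07         # mm, layer 1 mid-point radius from the JCAP table
echo "f = ", rhoMid / tan(4 * alpha), " mm" # ≈ 1535 mm, compatible with the ~1530 mm the thesis numbers yield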
This section covers details about the telescope design, i.e. the mirror angles, radii and all that stuff as well as information about it from external sources (e.g. the raytracing results from LLNL about it). For more information about our raytracing results, see sec. 11.
Further, for more information about the telescope see ./LLNL_def_REST_format/llnl_def_rest_format.html.
Some of the most important information is repeated here.
The information for the LLNL telescope can best be found in the PhD thesis of Anders Clemen Jakobsen from DTU in Denmark: https://backend.orbit.dtu.dk/ws/portalfiles/portal/122353510/phdthesis_for_DTU_orbit.pdf
in particular page 58 (59 in the PDF) for the following table.

UPDATE: The numbers in this table are wrong. See update at the top of this section.

Layer | Area [mm²] | Relative area [%] | Cumulative area [mm²] | α [°] | α [mrad] | R1 [mm] | R5 [mm] |
---|---|---|---|---|---|---|---|
1 | 13.863 | 0.9546 | 13.863 | 0.579 | 10.113 | 63.006 | 53.821 |
2 | 48.175 | 3.3173 | 62.038 | 0.603 | 10.530 | 65.606 | 56.043 |
3 | 69.270 | 4.7700 | 131.308 | 0.628 | 10.962 | 68.305 | 58.348 |
4 | 86.760 | 5.9743 | 218.068 | 0.654 | 11.411 | 71.105 | 60.741 |
5 | 102.266 | 7.0421 | 320.334 | 0.680 | 11.877 | 74.011 | 63.223 |
6 | 116.172 | 7.9997 | 436.506 | 0.708 | 12.360 | 77.027 | 65.800 |
7 | 128.419 | 8.8430 | 564.925 | 0.737 | 12.861 | 80.157 | 68.474 |
8 | 138.664 | 9.5485 | 703.589 | 0.767 | 13.382 | 83.405 | 71.249 |
9 | 146.281 | 10.073 | 849.87 | 0.798 | 13.921 | 86.775 | 74.129 |
10 | 150.267 | 10.347 | 1000.137 | 0.830 | 14.481 | 90.272 | 77.117 |
11 | 149.002 | 10.260 | 1149.139 | 0.863 | 15.062 | 93.902 | 80.218 |
12 | 139.621 | 9.6144 | 1288.76 | 0.898 | 15.665 | 97.668 | 83.436 |
13 | 115.793 | 7.973 | 1404.553 | 0.933 | 16.290 | 101.576 | 86.776 |
14 | 47.648 | 3.2810 | 1452.201 | 0.970 | 16.938 | 105.632 | 90.241 |
Further information can be found in the JCAP paper about the LLNL telescope for CAST: https://iopscience.iop.org/article/10.1088/1475-7516/2015/12/008/meta
in particular table 1 (extracted with caption):
Property | Value |
---|---|
Mirror substrates | glass, Schott D263 |
Substrate thickness | 0.21 mm |
L, length of upper and lower mirrors | 225 mm |
Overall telescope length | 454 mm |
f , focal length | 1500 mm |
Layers | 13 |
Total number of individual mirrors in optic | 26 |
ρmax , range of maximum radii | 63.24–102.4 mm |
ρmid , range of mid-point radii | 62.07–100.5 mm |
ρmin , range of minimum radii | 53.85–87.18 mm |
α, range of graze angles | 0.592–0.968 degrees |
Azimuthal extent | Approximately 30 degrees |
2.11.1. Information (raytracing, effective area etc) from CAST Nature paper
Jaime finally sent the information about the raytracing results from the LLNL telescope to Cristina: https://unizares-my.sharepoint.com/personal/cmargalejo_unizar_es/_layouts/15/onedrive.aspx?ga=1&id=%2Fpersonal%2Fcmargalejo%5Funizar%5Fes%2FDocuments%2FDoctorado%20UNIZAR%2FCAST%20official%2FLimit%20calculation%2FJaime%27s%20data. She shared it with me. I downloaded and extracted the files to here: ./../resources/llnl_cast_nature_jaime_data/
Things to note:
- the CAST2016Dec* directories contain .fits files for the axion image for different energies
- the same directories also contain text files for the effective area!
- the ./../resources/llnl_cast_nature_jaime_data/2016_DEC_Final_CAST_XRT/ directory contains the axion images actually used for the limit - I presume - in form of .txt files
- that directory also contains a "final"(?) effective area file!

UPDATE: In the meeting with Jaime and Julia on , Jaime mentioned this is the final effective area that they calculated and we should use this!

Excerpt from that file:

E(keV) Area(cm^2) Area_lower_limit(cm^2) Area_higher_limit(cm^2)
0.000000 9.40788 8.93055 9.87147
0.100000 2.51070 1.76999 3.56970
0.200000 5.96852 5.06843 6.93198
0.300000 4.05163 3.55871 4.60069
0.400000 5.28723 4.70362 5.92018
0.500000 6.05037 5.50801 6.63493
0.600000 5.98980 5.44433 6.56380
0.700000 6.33760 5.81250 6.86565
0.800000 6.45533 5.97988 6.94818
0.900000 6.68399 6.22210 7.15994
1.00000 6.87400 6.42313 7.32568
1.10000 7.01362 6.57078 7.44991
1.20000 7.11297 6.68403 7.53477
1.30000 7.18784 6.76026 7.60188
1.40000 7.23464 6.82698 7.65152
1.50000 7.26598 6.85565 7.66851
1.60000 7.28027 6.86977 7.67453
1.70000 7.26311 6.86645 7.66171
1.80000 7.22509 6.83192 7.61740
1.90000 7.14513 6.76611 7.52503
2.00000 6.96418 6.58820 7.32984
2.10000 5.28441 5.00942 5.55890
2.20000 3.64293 3.45370 3.82893
2.30000 5.17823 4.90664 5.44582
2.40000 5.29972 5.02560 5.57611
2.50000 5.29166 5.02555 5.57095
2.60000 5.17942 4.91425 5.43329
2.70000 4.92675 4.67978 5.18098
2.80000 4.92422 4.66858 5.17432
2.90000 4.83265 4.58795 5.08459
3.00000 4.64834 4.41387 4.89098
i.e. it peaks at ~7.3 cm².
Plot the "final" effective area against the extracted data from the JCAP paper:
Note that we do not know with certainty that this is indeed the effective area used for the CAST Nature limit. That's just my assumption!
import ggplotnim
const path = "/home/basti/org/resources/llnl_cast_nature_jaime_data/2016_DEC_Final_CAST_XRT/EffectiveArea.txt"
const pathJCAP = "/home/basti/org/resources/llnl_xray_telescope_cast_effective_area.csv"
let dfJcap = readCsv(pathJCAP)
let df = readCsv(path, sep = ' ')
  .rename(f{"Energy[keV]" <- "E(keV)"}, f{"EffectiveArea[cm²]" <- "Area(cm^2)"})
  .select("Energy[keV]", "EffectiveArea[cm²]")
let dfC = bind_rows([("JCAP", dfJcap), ("Nature", df)], "Type")
ggplot(dfC, aes("Energy[keV]", "EffectiveArea[cm²]", color = "Type")) +
  geom_line() +
  ggsave("/tmp/effective_area_jcap_vs_nature_llnl.pdf")
So it seems like the effective area here is even lower than the effective area in the JCAP LLNL paper! That's ridiculous. HOWEVER, the shape seems to match much better with the shape we get from computing the effective area ourselves!

-> UPDATE: No, not really. I ran the code in journal.org with makePlot and makeRescaledPlot using dfJaimeNature as a rescaling reference using the 3 arcmin code. So the shape is very different after all.
- [ ] Is there a chance the difference is due to xrayAttenuation? Note the weird energy-dependent linear offset comparing xrayAttenuation reflectivity to the DarpanX numbers! Could that shift be the reason?
- LLNL raytracing for axion image and CoolX X-ray finger
The DTU thesis contains raytracing images (from page 78) for the X-ray finger run and for the axion image.
- X-ray finger
The image (as a screenshot) from the X-ray finger:
where we can see a few things:
- the caption mentions the source was 14.2 m away from the optic. This is nonsensical. The magnet is 9.26m long and even with the cryo housing etc. we won't get to much more than 10 m from the telescope. The X-ray finger was installed in the bore of the magnet!
- it mentions the source being 6 mm diameter (text mentions diameter explicitly). All we know about it is from the manufacturer that the size is given as 15 mm. But there is nothing about the actual size of the emission surface.
- the resulting raytraced image has a size of only slightly less than 3 mm in the short axis and maybe about 3 mm in the long axis.
About point 3: our own X-ray finger image is the following: file:///home/basti/phd/Figs/CAST_Alignment/xray_finger_centers_run_189.pdf (note: it needs to be rotated of course). We can see that our real image is much larger! Along "x" it goes from about 5.5 to 10 mm or so! Quite a bit larger. And along y from less than 4 to maybe 10!
Given that we have the raytracing data from Jaime, let's plot their data to see if it actually looks like that:
import ggplotnim, sequtils, seqmath
let df = readCsv("/home/basti/org/resources/llnl_cast_nature_jaime_data/2016_DEC_Final_CAST_XRT/3.00keV_2Dmap_CoolX.txt",
                 sep = ' ', skipLines = 2, colNames = @["x", "y", "z"])
  .mutate(f{"x" ~ `x` - mean(`x`)}, f{"y" ~ `y` - mean(`y`)})
var customInferno = inferno()
customInferno.colors[0] = 0 # transparent
ggplot(df, aes("x", "y", fill = "z")) +
  geom_raster() +
  scale_fill_gradient(customInferno) +
  xlab("x [mm]") + ylab("y [mm]") +
  ggtitle("LLNL raytracing of X-ray finger (Jaime)") +
  ggsave("~/org/Figs/statusAndProgress/rayTracing/raytracing_xray_finger_llnl_jaime.pdf")
ggplot(df.filter(f{`x` >= -7.0 and `x` <= 7.0 and `y` >= -7.0 and `y` <= 7.0}), aes("x", "y", fill = "z")) +
  geom_raster() +
  scale_fill_gradient(customInferno) +
  xlab("x [mm]") + ylab("y [mm]") +
  xlim(-7.0, 7.0) + ylim(-7.0, 7.0) +
  ggtitle("LLNL raytracing of X-ray finger zoomed (Jaime)") +
  ggsave("~/org/Figs/statusAndProgress/rayTracing/raytracing_xray_finger_llnl_jaime_gridpix_size.pdf")
This yields the following figure: and cropped to the range of the GridPix:
This is MUCH bigger than the plot from the paper indicates. And the shape is also much more elongated! More in line with what we really see.
Let's use our raytracer to produce the X-ray finger according to the specification of 14.2 m first and then a more reasonable estimate.
Make sure to put the following into the config.toml file:

[TestXraySource]
useConfig = true # sets whether to read these values here. Can be overridden here or using flag `--testXray`
active = true # whether the source is active (i.e. Sun or source?)
sourceKind = "classical" # whether a "classical" source or the "sun" (Sun only for position *not* for energy)
parallel = false
energy = 3.0 # keV The energy of the X-ray source
distance = 14200 # 9260.0 #106820.0 #926000 #9260.0 #2000.0 # mm Distance of the X-ray source from the readout
radius = 3.0 #21.5 #44.661 #8.29729 #46.609 #4.04043 #21.5 # mm Radius of the X-ray source
offAxisUp = 0.0 # mm
offAxisLeft = 0.0 # mm
activity = 0.125 # GBq The activity in `GBq` of the source
lengthCol = 0.0 #0.021 # mm Length of a collimator in front of the source
./raytracer --ignoreDetWindow --ignoreGasAbs --suffix "_xrayFinger_14.2m_3mm"
which more or less matches the size of our real data.
Now the same with a source that is 10 m away:
[TestXraySource]
useConfig = true # sets whether to read these values here. Can be overridden here or using flag `--testXray`
active = true # whether the source is active (i.e. Sun or source?)
sourceKind = "classical" # whether a "classical" source or the "sun" (Sun only for position *not* for energy)
parallel = false
energy = 3.0 # keV The energy of the X-ray source
distance = 10000 # 9260.0 #106820.0 #926000 #14200 #2000.0 # mm Distance of the X-ray source from the readout
radius = 3.0 #21.5 #44.661 #8.29729 #46.609 #4.04043 #21.5 # mm Radius of the X-ray source
offAxisUp = 0.0 # mm
offAxisLeft = 0.0 # mm
activity = 0.125 # GBq The activity in `GBq` of the source
lengthCol = 0.0 #0.021 # mm Length of a collimator in front of the source
./raytracer --ignoreDetWindow --ignoreGasAbs --suffix "_xrayFinger_10m_3mm"
which is quite a bit bigger than our real data. Maybe we allow some angles that we shouldn't, i.e. the X-ray finger has a collimator? Or our reflectivities are too good for too large angles?
Without good knowledge of the real size of the X-ray finger emission this is hard to get right.
- Axion image
The axion image as mentioned in the PhD thesis is the following:
First of all let's note that the caption talks about emission of a 3 arcminute source. Let's check the apparent size of the sun and the typical emission, which is from the inner 30%:
import unchained, math
let Rsun = 696_342.km # SOHO mission 2003 & 2006
# use the tangent to compute based on radius of sun:
# tan α = Rsun / 1.AU
echo "Apparent size of the sun = ", arctan(Rsun / 1.AU).Radian.to(ArcMinute)
echo "Typical emission sun from inner 30% = ", arctan(Rsun * 0.3 / 1.AU).Radian.to(ArcMinute)
let R3arc = (tan(3.ArcMinute.to(Radian)) * 1.AU).to(km)
echo "Used radius for 3' = ", R3arc
echo "As fraction of solar radius = ", R3arc / RSun
So 3' correspond to about 18.7% of the radius. All in all that seems reasonable at least.
Let's plot the axion image as we have it from Jaime's data:
import ggplotnim, seqmath
import std / [os, sequtils, strutils]

proc readRT(p: string): DataFrame =
  result = readCsv(p, sep = ' ', skipLines = 4, colNames = @["x", "y", "z"])
  result["File"] = p

proc meanData(df: DataFrame): DataFrame =
  result = df.mutate(f{"x" ~ `x` - mean(col("x"))},
                     f{"y" ~ `y` - mean(col("y"))})

proc plots(df: DataFrame, title, outfile: string) =
  var customInferno = inferno()
  customInferno.colors[0] = 0 # transparent
  ggplot(df, aes("x", "y", fill = "z")) +
    geom_raster() +
    scale_fill_gradient(customInferno) +
    xlab("x [mm]") + ylab("y [mm]") +
    ggtitle(title) +
    ggsave(outfile)
  ggplot(df.filter(f{`x` >= -7.0 and `x` <= 7.0 and `y` >= -7.0 and `y` <= 7.0}),
         aes("x", "y", fill = "z")) +
    geom_raster() +
    scale_fill_gradient(customInferno) +
    xlab("x [mm]") + ylab("y [mm]") +
    xlim(-7.0, 7.0) + ylim(-7.0, 7.0) +
    ggtitle(title & " (zoomed)") +
    ggsave(outfile.replace(".pdf", "_gridpix_size.pdf"))

block Single:
  let df = readRT("/home/basti/org/resources/llnl_cast_nature_jaime_data/2016_DEC_Final_CAST_XRT/3.00keV_2Dmap.txt")
    .meanData()
  df.plots("LLNL raytracing of axion image @ 3 keV (Jaime)",
           "~/org/Figs/statusAndProgress/rayTracing/raytracing_axion_image_llnl_jaime_3keV.pdf")

block All:
  var dfs = newSeq[DataFrame]()
  for f in walkFiles("/home/basti/org/resources/llnl_cast_nature_jaime_data/2016_DEC_Final_CAST_XRT/*2Dmap.txt"):
    echo "Reading: ", f
    dfs.add readRT(f)
  echo "Summarize"
  var df = dfs.assignStack()
  df = df.group_by(@["x", "y"])
    .summarize(f{float: "z" << sum(`z`)}, f{float: "zMean" << mean(`z`)})
  df.writeCsv("/tmp/llnl_raytracing_jaime_all_energies_raw_sum.csv")
  df = df.meanData()
  df.writeCsv("/tmp/llnl_raytracing_jaime_all_energies.csv")
  plots(df, "LLNL raytracing of axion image (sum all energies) (Jaime)",
        "~/org/Figs/statusAndProgress/rayTracing/raytracing_axion_image_llnl_jaime_all_energies.pdf")
The 3 keV data for the axion image: and cropped again:
And the sum of all energies: and cropped again:
Both clearly show the symmetric shape that is so weird but also - again - does NOT reproduce the raytracing seen in the screenshot above! That one clearly has a very stark tiny center with the majority of the flux, which is gone and replaced by a much wider region of significant flux!
Both are in strong contrast to our own axion image. Let's compute that using the Primakoff only (make sure to disable the X-ray test source in the config file!):
./raytracer --ignoreDetWindow --ignoreGasAbs --suffix "_axionImagePrimakoff_focal_point"
and for a more realistic image at the expected conversion point:
[DetectorInstallation]
useConfig = true # sets whether to read these values here. Can be overridden here or using flag `--detectorInstall`
# Note: 1500mm is LLNL focal length. That corresponds to center of the chamber!
distanceDetectorXRT = 1487.93 # mm
distanceWindowFocalPlane = 0.0 # mm
lateralShift = 0.0 # mm lateral offset of the detector with respect to the beamline
transversalShift = 0.0 # mm transversal offset of the detector with respect to the beamline
./raytracer --ignoreDetWindow --ignoreGasAbs --suffix "_axionImagePrimakoff_conversion_point"
which yields:
which is not that far off in size of the LLNL raytraced image. The shape is just quite different!
- Reply to Igor about LLNL telescope raytracing
Igor wrote me the following mail:
Hi Sebastian, Now that we are checking with Cristina the shape of the signal after the LLNL telescope for the SRMM analysis, I got two questions on your analysis:
- The signal spot shape that you present is different from the one we have for the Nature physics paper. Do you understand why? There was a change in the Ingrid setup wrt the SRMM setup that explains it, maybe?
- Do you have a spot calibration data that allows to crosscheck the position (and rotation) of the signal spot in the Ingrid chip coordinates?
Best, Igor
as a reply to my "Limit method for 7-GridPix @ CAST" mail on . I ended up writing a lengthy reply.
The reply is also found here: ./../Mails/igorReplyLLNL/igor_reply_llnl_axion_image.html
- My reply
Hey,
sorry for the late reply. I didn't want to reply with one sentence for each question. While looking into the questions in more details more things came up.
One thing - embarrassingly - is that I completely forgot to apply the rotation of my detector in the limit calculation (in our case the detector is rotated by 90° compared to the "data" x-y plane). Added to that is the slight rotation of the LLNL axis, which I also need to include (here I simply forgot that we never added it to the raytracer. Given that the spacer is not visible in the axion image, it didn't occur to me).
Let's start with your second question
Do you have a spot calibration data that allows to crosscheck the position (and rotation) of the signal spot in the Ingrid chip coordinates?
Yes, we have two X-ray finger runs. Unfortunately, one of them is not useful, as it was taken in July 2017 after our detector had to be removed again to make space for a short KWISP data taking. We have a second one from April 2018, which is partially useful. However, the detector was again dismounted between April and October 2018 and we don't have an X-ray finger run for the last data taking between Oct 2018 to Dec 2018.
Fig. 14 shows the latter X-ray finger run. The two parallel lines with few clusters are two of the window strongbacks. The other line is the graphite spacer of the telescope. The center positions of the clusters are at
- (x, y) = (7.43, 6.59)

(the chip center is at (7, 7)). This is what makes up the basis of our position systematic uncertainty of 5%. The 5% correspond to 0.05 · 7 mm = 0.35 mm.
I decided not to move the actual center of the solar axion image because the X-ray finger data is hard to interpret for three different reasons:
- The entire CAST setup is "modified" in between normal data takings and the installation of the X-ray finger. Who knows what effect warming up the magnet etc. has on the spot position?
- determining the actual center position of the axion spot based on the X-ray finger cluster centers is problematic due to the fact that the LLNL telescope is only a portion of a full telescope. The resulting shape of the X-ray finger signal, combined with the missing data due to the window strongback and graphite spacer, and the relatively low statistics in the first place, makes trusting the numbers problematic.
- as I said before, we don't even have an X-ray finger run for the last part of the data taking. While we have the geometer measurements from the targets, I don't have the patience to learn about the coordinate system they use and attempt to reconstruct the possible movement based on those measured coordinates.
Given that we take into account the possible movement in the systematics, I believe this is acceptable.
The signal spot shape that you present is different from the one we have for the Nature physics paper. Do you understand why? There was a change in the Ingrid setup wrt the SRMM setup that explains it, maybe?
Here we now come to the actual part that is frustrating for me, too. Unfortunately, due to the "black box" nature of the LLNL telescope, Johanna and me never fully understood this. We don't understand how the raytracing calculations done by Michael Pivovaroff can ever produce a symmetric image given that the LLNL telescope is a) not a perfect Wolter design, but has cone shaped mirrors, b) is only a small portion of a full telescope and c) the incoming X-rays are not perfectly parallel. Intuitively I don't expect to have a symmetric image there. And our raytracing result does not produce anything like that.
A couple of years ago Johanna tried to find out more information about the LLNL raytracing results, but back then when Julia and Jaime were still at LLNL, the answer was effectively a "it's a secret, we can't provide more information".
As such all I can do is try to reproduce the results as well as possible. If they don't agree all I can do is provide explanations about what we compute and give other people access to my data, code and results. Then at least we can all hopefully figure out if there's something wrong with our approach.
Fig. 15 is the raytracing result as it is presented on page 78 of the PhD thesis of A. Jakobsen. It mentions that the Sun is considered as a 3' source, implying the inner ~18% of the Sun are contributing to axion emission.
If I compute this with our own raytracer for the focal spot, I get the plot shown in fig. \ref{fig:axion_image_primakoff_focal_spot}. Fig. \ref{fig:axion_image_primakoff_median_conv} then corresponds to the point that sees the median of all conversions in the gas based on X-ray absorption in the gas. This is now for the case of a pure Primakoff emission and not for dominant axion-electron coupling, as I showed in my presentation (this changes the dominant contributions by radius slightly, see fig. \ref{fig:radial_production_primakoff} { Primakoff } and fig. \ref{fig:radial_production_axion_electron} { axion-electron }). They look very similar, but there are slight changes between the two axion images.
This is one of the big reasons I want to have my own raytracing simulation. Different emission models result in different axion images!
\begin{figure}[htbp]
\centering
\begin{subfigure}{0.5\linewidth}
\centering
\includegraphics[width=0.95\textwidth]{/home/basti/org/Figs/statusAndProgress/rayTracing/raytracing_axion_image_primakoff_focal_point.pdf}
\caption{Focal spot}
\label{fig:axion_image_primakoff_focal_spot}
\end{subfigure}%
\begin{subfigure}{0.5\linewidth}
\centering
\includegraphics[width=0.95\textwidth]{/home/basti/org/Figs/statusAndProgress/rayTracing/raytracing_axion_image_primakoff_conversion_point.pdf}
\caption{Median conversion point}
\label{fig:axion_image_primakoff_median_conv}
\end{subfigure}
\label{fig:axion_image}
\caption{\subref{fig:axion_image_primakoff_focal_spot} Axion image for Primakoff emission from the Sun, computed for the exact LLNL focal spot. (Ignore the title) \subref{fig:axion_image_primakoff_median_conv} Axion image for the median conversion point of the X-rays actually entering the detector.}
\end{figure}

\begin{figure}[htbp]
\centering
\begin{subfigure}{0.5\linewidth}
\centering
\includegraphics[width=0.95\textwidth]{~/org/Figs/statusAndProgress/axionProduction/sampled_radii_primakoff.pdf}
\caption{Primakoff radii}
\label{fig:radial_production_primakoff}
\end{subfigure}%
\begin{subfigure}{0.5\linewidth}
\centering
\includegraphics[width=0.95\textwidth]{~/org/Figs/statusAndProgress/axionProduction/sampled_radii_axion_electron.pdf}
\caption{Axion-electron radii}
\label{fig:radial_production_axion_electron}
\end{subfigure}
\label{fig:radial_production}
\caption{\subref{fig:radial_production_primakoff} Radial production in the Sun for Primakoff emission. \subref{fig:radial_production_axion_electron} Radial production for axion-electron emission.}
\end{figure}

Note that this currently does not yet take into account the slight rotation of the telescope. I first need to extract the rotation angle from the X-ray finger run.
Fig. 16 is the sum of all energies of the raytracing results that Jaime finally sent to Cristina a couple of weeks ago. In this case cropped to the size of our detector, placed at the center. These should be - as far as I understand - the ones that the contours used in the Nature paper are based on. However, these clearly do not match the results shown in the PhD thesis of Jakobsen. The extremely small focus area in black is gone and replaced by a much more diffuse area. But again, it is very symmetric, which I don't understand.
And while I was looking into this I also thought I should try to (attempt to) reproduce the X-ray finger raytracing result. Here came another confusion, because the raytracing results for that shown in the PhD thesis, fig. 17, mention that the X-ray finger was placed \SI{14.2}{m} away from the optic with a diameter of \SI{6}{mm}. That seems very wrong, given that the magnet bore is only \SI{9.26}{m} long. In total the entire magnet is - what - maybe \SI{10}{m}? At most it's maybe \SI{11}{m} to the telescope when the X-ray finger is installed in the bore? Unfortunately, the website about the X-ray finger from Amptek is not very helpful either:
https://www.amptek.com/internal-products/obsolete-products/cool-x-pyroelectric-x-ray-generator
as the only thing it says about the size is:
Miniature size: 0.6 in dia x 0.4 in (15 mm dia x 10 mm)
Nothing about the actual size of the area that emits X-rays. Neither do I know anything about a possible collimator used.
Furthermore, the spot size seen here is only about \(\sim 2.5·\SI{3}{mm²}\) or so. Comparing it to the spot size seen with our detector it's closer to \(\sim 5·\SI{5}{mm²}\) or even a bit larger!
So I decided to run a raytracing following these numbers, i.e. \(\SI{14.2}{m}\) and a \(\SI{3}{mm}\) radius disk shaped source without a collimator. That yields fig. 18. As we can see the size is more in line with our actually measured data.
Again, I looked at the raytracing results that Jaime sent to Cristina, which includes a file with suffix "CoolX". That plot is shown in fig. 19. As we can see, it is also much larger suddenly than shown in the PhD thesis (more than \(4 · \SI{4}{mm²}\)), slightly smaller than ours.
Note that the Nature paper mentions the source is about \(\SI{12}{m}\) away. I was never around when the X-ray finger was installed, nor do I have any good data about the real magnet size or lengths of the pipes between magnet and telescope.
So, uhh, yeah. This is all very confusing. No matter where one looks regarding this telescope, one is bound to find contradictions or just confusing statements… :)
2.11.2. Information from NuSTAR PhD thesis
I found the following PhD thesis: which is about the NuSTAR optic and also from DTU. It explains a lot of things:
- in the introductory part about multilayers it expands on why the low density material is at the top!
- Fig. 1.11 shows that indeed the spacers are 15° apart from one another.
- Fig. 1.11 mentions the graphite spacers are only 1.2 mm wide instead of 2 mm! But the DTU LLNL thesis explicitly mentions \(x_{gr} = \SI{2}{mm}\) on page 64.
- it has a plot of energy vs angle of the reflectivity similar to what we produce! It looks very similar.
- for the NuSTAR telescope they apparently have measurements of the surface roughness to μm levels, which are included in their simulations!
2.11.3. X-ray raytracers
Other X-ray raytracers:
- McXtrace from DTU and Synchrotron SOLEIL: https://www.mcxtrace.org/about/ https://github.com/McStasMcXtrace/McCode
- MTRAYOR (mentioned in DTU NuSTAR PhD thesis): written in Yorick https://github.com/LLNL/yorick https://en.wikipedia.org/wiki/Yorick_(programming_language) a language developed at LLNL! -> https://web.archive.org/web/20170102091157/http://www.jeh-tech.com/yorick.html for an 'introduction' https://ftp.spacecenter.dk/pub/njw/MT_RAYOR/mt_rayor_man4.pdf
We have the MTRAYOR code here: ./../../src/mt_rayor/. It needs Yorick, see the links above.
2.11.4. DTU FTP server [/]
The DTU has a publicly accessible FTP server with a lot of useful information. I found it by googling for MTRAYOR, because the manual is found there.
https://ftp.spacecenter.dk/pub/njw/
I have a mirror of the entire FTP here: ./../../Documents/ftpDTU/
- [ ] Remove all files larger than X MB if they appear uninteresting to us.
2.11.5. Michael Pivovaroff talk about Axions, CAST, IAXO
Michael Pivovaroff giving a talk about axions, CAST, IAXO at LLNL: https://youtu.be/H_spkvp8Qkk
First he mentions: https://youtu.be/H_spkvp8Qkk?t=2372 "Then we took the telescope to PANTER" -> implying that yes, the CAST optic really was at PANTER. Then he wrongly says there was a 55Fe source at the other end of the magnet, while showing the X-ray finger data + simulation below that title. And finally, in https://youtu.be/H_spkvp8Qkk?t=2468, he says ABOUT HIS OWN RAYTRACING SIMULATION that it was a simulation for a source at infinity…
https://youtu.be/H_spkvp8Qkk?t=3134 He mentions Jaime and Julia wanted to write a paper about using NuSTAR data to set an ALP limit for reconversion of axions etc in the solar corona by looking at the center…
3. Theory
3.1. Solar axion flux
From ./../Papers/first_cast_results_physrevlett.94.121301.pdf
There are different analytical expressions for the solar axion flux for Primakoff production. These stem from the fact that a solar model is used to model the internal density, temperature, etc. in the Sun to compute the photon distribution (essentially the blackbody radiation) near the core. From it (after converting via the Primakoff effect) we get the axion flux.
Different solar models result in different expressions for the flux. The first one uses an older model, while the latter ones use newer models.
Analytical flux from the first CAST result paper, with \(g_{10} = g_{aγ} \cdot \SI{1e10}{GeV}\):

\[ \frac{\mathrm{d}Φ_a}{\mathrm{d}E_a} = g_{10}^2 \cdot \SI{3.821e10}{cm^{-2}.s^{-1}.keV^{-1}} \cdot \frac{\left(E_a / \si{keV}\right)^3}{\exp\left(E_a / \SI{1.103}{keV}\right) - 1} \]

which results in an integrated flux:

\[ Φ_a = g_{10}^2 \cdot \SI{3.67e11}{cm^{-2}.s^{-1}} \]
In comparison I used in my master thesis:
def axion_flux_primakoff(w, g_ay):
    # axion flux produced by the Primakoff effect
    # in units of m^(-2) year^(-1) keV^(-1)
    val = 2.0 * 10**18 * (g_ay / 10**(-12))**2 * w**(2.450) * np.exp(-0.829 * w)
    return val
(./../../Documents/Masterarbeit/PyAxionFlux/PyAxionFlux.py / ./../Code/CAST/PyAxionFlux/PyAxionFlux.py) The version I use is from the CAST paper about the axion electron coupling: ./../Papers/cast_axion_electron_jcap_2013_pnCCD.pdf eq. 3.1 on page 7.
Another comparison from here:
- Weighing the solar axion
  Contains, among others, a plot and a (newer) description of the solar axion flux (useful as a comparison):
  \[ \frac{\mathrm{d}Φ_a}{\mathrm{d}E_a} = Φ^P_{10} \left(\frac{g_{aγ}}{\SI{1e-10}{GeV^{-1}}}\right)^2 \left(\frac{E_a}{\si{keV}}\right)^{2.481} \exp\left(-\frac{E_a}{\SI{1.205}{keV}}\right), \quad Φ^P_{10} = \SI{6.02e10}{cm^{-2}.s^{-1}.keV^{-1}} \]
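To see how close these parameterizations actually are, here is a minimal Nim sketch (procedure names and the year-to-second conversion are mine) evaluating all three at a few energies for \(g_{10} = 1\), i.e. \(g_{aγ} = \SI{1e-10}{GeV^{-1}}\), with the master thesis version converted from \(\si{m^{-2}.yr^{-1}.keV^{-1}}\) to \(\si{cm^{-2}.s^{-1}.keV^{-1}}\):

import math, strformat
proc fluxCast2005(E: float): float =
  ## first CAST result parameterization, cm⁻²·s⁻¹·keV⁻¹ at g₁₀ = 1
  3.821e10 * E^3 / (exp(E / 1.103) - 1.0)
proc fluxMScThesis(E: float): float =
  ## master thesis / pnCCD paper version, converted to cm⁻²·s⁻¹·keV⁻¹
  ## (1e-10 / 1e-12)² = 1e4; 1 m⁻²·yr⁻¹ = 1 / (1e4 cm² · 3.1536e7 s)
  2.0e18 * 1e4 * pow(E, 2.450) * exp(-0.829 * E) / (1e4 * 3.1536e7)
proc fluxWeighing(E: float): float =
  ## "Weighing the solar axion" parameterization, cm⁻²·s⁻¹·keV⁻¹
  6.02e10 * pow(E, 2.481) * exp(-E / 1.205)
for E in [1.0, 3.0, 5.0, 8.0]: # keV
  echo &"{E} keV: {fluxCast2005(E):.3e} | {fluxMScThesis(E):.3e} | {fluxWeighing(E):.3e}"

At \(\SI{3}{keV}\) all three agree to within roughly 10%, which is reassuring given that they are based on different solar models.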
3.1.1. Solar axion-electron flux
We compute the differential axion flux using ./../../CastData/ExternCode/AxionElectronLimit/src/readOpacityFile.nim
We have a version of the plot that is generated by it here:
but let's generate one from the setup we use as a "base" at CAST, namely the file: ./../resources/differential_flux_sun_earth_distance/solar_axion_flux_differential_g_ae_1e-13_g_ag_1e-12_g_aN_1e-15_0.989AU.csv which uses a distance Sun ⇔ Earth of 0.989 AU, corresponding to the mean of all solar trackings we took at CAST.
import ggplotnim
const path = "~/org/resources/differential_flux_sun_earth_distance/solar_axion_flux_differential_g_ae_1e-13_g_ag_1e-12_g_aN_1e-15_0.989AU.csv"
let df = readCsv(path)
  .filter(f{`type` notin ["LP Flux", "TP Flux", "57Fe Flux"]})
echo df
ggplot(df, aes("Energy", "diffFlux", color = "type")) +
  geom_line() +
  xlab(r"Energy [$\si{keV}$]", margin = 1.5) +
  ylab(r"Flux [$\si{keV^{-1}.cm^{-2}.s^{-1}}$]", margin = 2.75) +
  ggtitle(r"Differential solar axion flux for $g_{ae} = \num{1e-13}, g_{aγ} = \SI{1e-12}{GeV^{-1}}$") +
  xlim(0, 10) +
  margin(top = 1.5, left = 3.25) +
  theme_transparent() +
  ggsave("~/org/Figs/statusAndProgress/differential_flux_sun_earth_distance/differential_solar_axion_fluxg_ae_1e-13_g_ag_1e-12_g_aN_1e-15_0.989AU.pdf",
         width = 800, height = 480, useTeX = true, standalone = true)
3.1.2. Radial production
The raytracer now also produces plots of the radial emission for the production. With our default file (axion-electron)
solarModelFile = "solar_model_dataframe.csv"
running via:
./raytracer --ignoreDetWindow --ignoreGasAbs --suffix "_axion_electron" --sanity
yields
And for the Primakoff flux, using the new file:
solarModelFile = "solar_model_dataframe_fluxKind_fkAxionPhoton_0.989AU.csv" #solar_model_dataframe.csv"
running:
./raytracer --ignoreDetWindow --ignoreGasAbs --suffix "_primakoff" --sanity
we get
3.2. Axion conversion probability
Ref: Biljana's and Kreso's notes on the axion-photon interaction. Further, see the notes on the IAXO gas phase, which contain the explicit form of \(P\) in the next equation! I think it should be straightforward to derive it from what's given in the former PDF in eq. (3.41) (or its derivation).
- [ ] Investigate this. There is a chance it is non-trivial due to Γ. The first PDF includes \(m_γ\), but does not mention gas in any way. So I'm not sure how one ends up at the latter. Potentially by 'folding' in the losses after the conversion?
The axion-photon conversion probability \(P_{a\rightarrow\gamma}\) in general is given by:
\begin{equation} \label{eq_conversion_prob} P_{a\rightarrow\gamma} = \left(\frac{g_{a\gamma} B}{2}\right)^2 \frac{1}{q^2 + \Gamma^2 / 4} \left[ 1 + e^{-\Gamma L} - 2e^{-\frac{\Gamma L}{2}} \cos(qL)\right], \end{equation}where \(\Gamma\) is the inverse absorption length for photons (or attenuation length).
The coherence condition for axions is

\[ qL < π, \qquad q = \frac{m_a^2}{2 E_a}, \]
with \(L\) the length of the magnetic field (20m for IAXO, 10m for BabyIAXO), \(m_a\) the axion mass and \(E_a\) the axion energy (taken from solar axion spectrum).
In the presence of a low pressure gas, the photon receives an effective mass \(m_{\gamma}\), resulting in a new \(q\):

\[ q = \frac{\left| m_{\gamma}^2 - m_a^2 \right|}{2 E_a}. \]
Thus, we first need some values for the effective photon mass in a low pressure gas, preferably helium.
From this we can see that coherence is restored in the gas if \(m_{\gamma} = m_a\), since \(q \rightarrow 0\) for \(m_a \rightarrow m_{\gamma}\). This means that in those cases the energy of the incoming axion is irrelevant for the sensitivity!
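As a quick numerical illustration of the vacuum coherence condition (a sketch of my own: \(E_a = \SI{4.2}{keV}\) as a typical solar axion energy, \(L = \SI{9.26}{m}\) in natural units as computed below):

import math
const L = 4.693e7 # eV⁻¹, 9.26 m in natural units (see below)
proc q(m_a, E_a: float): float = m_a^2 / (2.0 * E_a) # all in eV
for m_a in [0.001, 0.01, 0.02, 0.1]: # eV
  let qL = q(m_a, 4200.0) * L
  echo m_a, " eV: qL = ", qL, ", coherent: ", qL < PI

This reproduces the well known fact that the CAST vacuum phase stays coherent up to \(m_a \approx \SI{0.02}{eV}\).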
Analytically the vacuum conversion probability can be derived from the expression eq. \eqref{eq_conversion_prob} by simplifying \(q\) for \(m_{\gamma} \rightarrow 0\) and \(\Gamma = 0\):
\begin{align}
\label{eq_conversion_prob_vacuum}
P_{a\rightarrow\gamma, \text{vacuum}} &= \left(\frac{g_{a\gamma} B}{2}\right)^2 \frac{1}{q^2} \left[ 1 + 1 - 2 \cos(qL) \right] \\
&= \left(\frac{g_{a\gamma} B}{2}\right)^2 \frac{2}{q^2} \left[ 1 - \cos(qL) \right] \\
&= \left(\frac{g_{a\gamma} B}{2}\right)^2 \frac{2}{q^2} \left[ 2 \sin^2\left(\frac{qL}{2}\right) \right] \\
&= \left(g_{a\gamma} B\right)^2 \frac{1}{q^2} \sin^2\left(\frac{qL}{2}\right) \\
&= \left(\frac{g_{a\gamma} B L}{2} \right)^2 \left(\frac{\sin\left(\frac{qL}{2}\right)}{\frac{qL}{2}}\right)^2 \\
&= \left(\frac{g_{a\gamma} B L}{2} \right)^2 \left(\frac{\sin δ}{δ}\right)^2, \qquad δ ≡ \frac{qL}{2}
\end{align}

The conversion probability in the simplified case amounts to:
\[ P(g_{aγ}, B, L) = \left(\frac{g_{aγ} \cdot B \cdot L}{2}\right)^2 \] in natural units, where the relevant numbers for the CAST magnet are:
- \(B = \SI{8.8}{T}\)
- \(L = \SI{9.26}{m}\)
and in the basic axion-electron analysis a fixed axion-photon coupling of \(g_{aγ} = \SI{1e-12}{\per\giga\electronvolt}\).
This requires either converting the equation into SI units by adding the "missing" constants, or converting the SI units into natural units. As the result is a unitless number, the latter approach is simpler.
The conversion factors from Tesla and meter to natural units are as follows:
import unchained
echo "Conversion factor Tesla: ", 1.T.toNaturalUnit()
echo "Conversion factor Meter: ", 1.m.toNaturalUnit()
Conversion factor Tesla: 195.353 ElectronVolt²
Conversion factor Meter: 5.06773e+06 ElectronVolt⁻¹
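For reference, both factors can be reconstructed by hand. A sketch, using \(\hbar c = \SI{197.327}{MeV.fm} = \SI{1.9733e-7}{eV.m}\) and the elementary charge to convert joule to electronvolt:

\begin{align*}
\SI{1}{m} &→ \frac{\SI{1}{m}}{\hbar c} = \frac{\SI{1}{m}}{\SI{1.9733e-7}{eV.m}} = \SI{5.068e6}{eV^{-1}} \\
\SI{1}{T} &→ \SI{1}{T} \cdot \sqrt{ε_0 \hbar^3 c^5} = \SI{195.35}{eV^2},
\end{align*}

matching the output above. The powers of \(\hbar\) and \(c\) in the Tesla case follow from the same dimensional bookkeeping as used in the section on the missing constants below.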
As such, the resulting conversion probability ends up as:
import unchained, math echo "9 T = ", 9.T.toNaturalUnit() echo "9.26 m = ", 9.26.m.toNaturalUnit() echo "P = ", pow( 1e-12.GeV⁻¹ * 9.T.toNaturalUnit() * 9.26.m.toNaturalUnit() / 2.0, 2.0)
9 T = 1758.18 ElectronVolt²
9.26 m = 4.69272e+07 ElectronVolt⁻¹
P = 1.701818225891982e-21
\begin{align}
P(g_{aγ}, B, L) &= \left(\frac{g_{aγ} \cdot B \cdot L}{2}\right)^2 \\
&= \left(\frac{\SI{1e-12}{GeV^{-1}} \cdot \SI{1758.18}{eV^2} \cdot \SI{4.693e7}{eV^{-1}}}{2}\right)^2 \\
&= \num{1.702e-21}
\end{align}

Note that this is of the same (inverse) order of magnitude as the flux of solar axions (\(\sim 10^{21}\) in some sensible unit of time), meaning the experiment expects \(\mathcal{O}(1)\) counts, which is sensible.
import unchained, math echo "9 T = ", 9.T.toNaturalUnit() echo "9.26 m = ", 9.26.m.toNaturalUnit() echo "P(natural) = ", pow( 1e-12.GeV⁻¹ * 9.T.toNaturalUnit() * 9.26.m.toNaturalUnit() / 2.0, 2.0) echo "P(SI) = ", ε0 * (hp / (2*π)) * (c^3) * (1e-12.GeV⁻¹ * 9.T * 9.26.m / 2.0)^2
3.2.1. Deriving the missing constants in the conversion probability
The conversion probability is given in natural units. In order to plug in SI units directly without the need for a conversion to natural units for the magnetic field and length, we need to reconstruct the missing constants.
The relevant constants in natural units are:
\begin{align*} ε_0 &= \SI{8.8541878128e-12}{A.s.V^{-1}.m^{-1}} \\ c &= \SI{299792458}{m.s^{-1}} \\ \hbar &= \frac{\SI{6.62607015e-34}{J.s}}{2π} \end{align*}which are each set to 1.
If we plug in the definition of a volt we get for \(ε_0\) units of:
\[ \left[ ε_0 \right] = \frac{\si{A^2.s^4}}{\si{kg.m^3}} \]
The conversion probability naively in natural units has units of:
\[ \left[ P_{aγ, \text{natural}} \right] = \frac{\si{T^2.m^2}}{\si{J^2}} = \frac{1}{\si{A^2.m^2}} \]
where we use the fact that \(g_{aγ}\) has units of \(\si{GeV^{-1}}\) which is equivalent to units of \(\si{J^{-1}}\) (care has to be taken with the rest of the conversion factors of course!) and Tesla in SI units:
\[ \left[ B \right] = \si{T} = \frac{\si{kg}}{\si{s^2.A}} \]
From the appearance of \(\si{A^2}\) in the units of \(P_{aγ, \text{natural}}\) we know a factor of \(ε_0\) is missing. This leaves the question of the correct powers of \(\hbar\) and \(c\), which come out to:
\begin{align*}
\left[ ε_0 \hbar c^3 \right] &= \frac{\si{A^2.s^4}}{\si{kg.m^3}} \cdot \frac{\si{kg.m^2}}{\si{s}} \cdot \frac{\si{m^3}}{\si{s^3}} \\
&= \si{A^2.m^2}.
\end{align*}

So the correct expression in SI units is:
\[ P_{aγ} = ε_0 \hbar c^3 \left( \frac{g_{aγ} B L}{2} \right)^2 \]
where now only \(g_{aγ}\) needs to be expressed in units of \(\si{J^{-1}}\) for a correct result using tesla and meter.
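As a cross-check of this expression, here is a minimal sketch plugging in CODATA values (plain floats instead of unchained), which should reproduce the \(\num{1.702e-21}\) from above:

import math
const
  ε0   = 8.8541878128e-12 # A·s/(V·m)
  hbar = 1.054571817e-34  # J·s
  c    = 299792458.0      # m/s
  GeV  = 1.602176634e-10  # J
let g = 1e-12 / GeV       # g_aγ = 1e-12 GeV⁻¹ expressed in J⁻¹
echo ε0 * hbar * c^3 * (g * 9.0 * 9.26 / 2.0)^2 # B = 9 T, L = 9.26 m -> ~1.702e-21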
3.3. Gaseous detector physics
I have a big confusion.
In the Bethe equation there is the factor \(I\), the mean excitation energy. It is roughly \(I(Z) = 10\,Z\,\si{eV}\), where \(Z\) is the charge (atomic number) of the element.
To determine the number of primary electrons however we have the distinction between:
- the actual excitation energy of the element / the molecules, e.g. ~15 eV for Argon gas
- the "average ionization energy per ion" \(w\), which is the well known 26 eV for Argon gas
- where does the difference between \(I\) and \(w\) come from? What does one mean vs. the other? They are different by a factor of 10 after all!
- why the large distinction between excitation energy and average energy per ion? Is it only because of rotational / vibrational modes of the molecules?
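For intuition on \(w\) (a back-of-the-envelope example of mine, not from a specific reference): the mean number of primary electrons from a fully absorbed X-ray is \(n_e = E_γ / w\), e.g.

\[ n_e = \frac{\SI{5.9}{keV}}{\SI{26}{eV}} \approx 227 \]

for the ⁵⁵Fe photopeak in argon. That \(w\) is larger than the actual excitation energy reflects that part of the deposited energy goes into excitations that never ionize.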
Relevant references:
- PDG chapter 33 (Bethe, losses) and 34 (Gaseous detector)
- Mean excitation energies for the stopping power of atoms and molecules evaluated from oscillator-strength spectra https://aip.scitation.org/doi/10.1063/1.2345478 about ionization energy I
- A method to improve tracking and particle identification in TPCs and silicon detectors https://doi.org/10.1016/j.nima.2006.03.009 About more correct losses in gases
This is all very confusing.
3.3.1. Average distance X-rays travel in Argon at CAST conditions [/]
In order to be able to compute the correct distance to use in the raytracer for the position of the axion image, we need a good understanding of where the average X-ray will convert in the gas.
By combining the expected axion flux (or rather that folded with the telescope and window transmission to get the correct energy distribution) with the absorption length of X-rays at different energies we can compute a weighted mean of all X-rays and come up with a single number.
For that reason we wrote xrayAttenuation.
Let's give it a try.
- Analytical approach
import xrayAttenuation, ggplotnim, unchained
# 1. read the file containing efficiencies
var effDf = readCsv("/home/basti/org/resources/combined_detector_efficiencies.csv")
  .mutate(f{"NoGasEff" ~ idx("300nm SiN") * idx("20nm Al") * `LLNL`})
# 2. compute the absorption length for Argon
let ar = Argon.init()
let ρ_Ar = density(1050.mbar.to(Pascal), 293.K, ar.molarMass)
effDf = effDf
  .filter(f{idx("Energy [keV]") > 0.05})
  .mutate(f{float: "l_abs" ~ absorptionLength(ar, ρ_Ar, idx("Energy [keV]").keV).float})
# compute the weighted mean of the effective flux behind the window with the
# absorption length, i.e.
# `<x> = Σ_i (ω_i x_i) / Σ_i ω_i`
let weightedMean = (effDf["NoGasEff", float] *. effDf["l_abs", float]).sum() /
                   effDf["NoGasEff", float].sum()
echo "Weighted mean of distance: ", weightedMean.Meter.to(cm)
# for reference the effective flux:
ggplot(effDf, aes("Energy [keV]", "NoGasEff")) +
  geom_line() +
  ggsave("/tmp/combined_efficiency_no_gas.pdf")
ggplot(effDf, aes("Energy [keV]", "l_abs")) +
  geom_line() +
  ggsave("/tmp/absorption_length_argon_cast.pdf")
This means the "effective" position of the axion image should be 0.0122 m or 1.22 cm in the detector. This is (fortunately) relatively close to the 1.5 cm (center of the detector) that we used so far.
- [X] Is the above even correct? The absorption length describes the distance at which only \(1/e\) of the particles are left, i.e. at that distance \((1 - 1/e)\) have disappeared. To get a number, don't we need to do a Monte Carlo (or some kind of integral) for the average? -> Well, the mean of an exponential distribution is \(1/λ\) (if defined as \(\exp(-λx)\)!), so from that point of view the above is perfectly adequate. Note however that the median of the distribution is \(\frac{\ln 2}{λ}\)! When looking at the distribution of our transverse RMS values, for example, the peak corresponds to something closer to the median (but not exactly the median either; the peak is the 'mode' of the distribution). Arguably more interesting is the cutoff we see in the data, as that corresponds to the largest possible diffusion (but again that is folded with the statistics of getting a larger RMS! :/ )
UPDATE: See the section below for the numerical approach. As it turns out, the above unfortunately is not correct, for 3 important reasons (2 of which we were aware of):
1. It does not include the axion spectrum, which changes the location of the mean slightly.
2. It implicitly assumes all X-rays of all energies will be detected. This implies an infinitely long detector, not our detector limited by a length of 3 cm! This skews the actual mean to lower values, because the mean of those that are detected is at smaller values.
3. Point 2 implies not only that some X-rays won't be detected, but effectively it gives a higher weight to energies that are absorbed with certainty compared to those that sometimes are not absorbed! This further reduces the mean. It can be interpreted as reducing the input flux by the absorption probability for each energy. In this sense the above needs to be multiplied by the absorption probability to be more correct! Yet even that does not make it completely right, as it only assumes the fraction of photons of a given energy is reduced, not that all detected ones have conversion lengths consistent with a 3 cm long volume!
4. (minor) It does not include isobutane.
A (shortened and) improved version of the above (but still not quite correct!):
import xrayAttenuation, ggplotnim, unchained
# 1. read the file containing efficiencies
var effDf = readCsv("/home/basti/org/resources/combined_detector_efficiencies.csv")
  .mutate(f{"NoGasEff" ~ idx("300nm SiN") * idx("20nm Al") * `LLNL` * idx("30mm Ar Abs.")})
# 2. compute the absorption length for Argon
let ar = Argon.init()
let ρ_Ar = density(1050.mbar.to(Pascal), 293.K, ar.molarMass)
effDf = effDf.filter(f{idx("Energy [keV]") > 0.05})
  .mutate(f{float: "l_abs" ~ absorptionLength(ar, ρ_Ar, idx("Energy [keV]").keV).float})
let weightedMean = (effDf["NoGasEff", float] *. effDf["l_abs", float]).sum() /
                   effDf["NoGasEff", float].sum()
echo "Weighted mean of distance: ", weightedMean.Meter.to(cm)
We could further multiply in the axion flux of course, but as this cannot be fully correct in this way, we'll do it numerically. We would have to calculate the real mean of the exponential distribution for each energy based on the truncated exponential distribution. Effectively we have a truncated exponential between 0 and 3 cm, whose mean will of course differ from the parameter \(λ\).
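For reference, the mean of the exponential truncated to \([0, d]\) (here \(d = \SI{3}{cm}\)) has a closed form, which makes the direction of the bias explicit:

\[ ⟨x⟩_{[0,d]} = \frac{\int_0^d \frac{x}{λ} e^{-x/λ}\,\mathrm{d}x}{\int_0^d \frac{1}{λ} e^{-x/λ}\,\mathrm{d}x} = λ - \frac{d\, e^{-d/λ}}{1 - e^{-d/λ}}, \]

which tends to \(λ\) for \(d \gg λ\) and to \(d/2\) for \(d \ll λ\), i.e. it is always below the untruncated mean \(λ\).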
- Numerical approach
Let's write a version of the above code that computes the result by sampling from the exponential distribution for the conversion point.
What we need:
- our sampling logic
- sampling from exponential distribution depending on energy
- the axion flux
Let's start by importing the modules we need:
import helpers / sampling_helper # sampling distributions
import unchained                 # sane units
import ggplotnim                 # see something!
import xrayAttenuation           # window efficiencies
import math, sequtils
where sampling_helper is a small module to sample from a procedure or a sequence. In addition let's define some helpers:
from os import `/`
const ResourcePath = "/home/basti/org/resources"
const OutputPath = "/home/basti/org/Figs/statusAndProgress/axion_conversion_point_sampling/"
Now let's read the LLNL telescope efficiency as well as the axion flux model. Note that we may wish to calculate the absorption points not only for a specific axion flux model, but potentially any other kind of signal. We'll build in functionality to disable different contributions.
let dfAx = readCsv(ResourcePath / "solar_axion_flux_differential_g_ae_1e-13_g_ag_1e-12_g_aN_1e-15.csv")
  .filter(f{`type` == "Total flux"})
let dfLLNL = readCsv(ResourcePath / "llnl_xray_telescope_cast_effective_area_parallel_light_DTU_thesis.csv")
  .mutate(f{"Efficiency" ~ idx("EffectiveArea[cm²]") / (PI * 2.15 * 2.15)})
Note: to get the differential axion flux use readOpacityFile from https://github.com/jovoy/AxionElectronLimit. It generates the CSV file.

Next up we need to define the material properties of the detector window in order to compute its transmission.
let Si₃N₄ = compound((Si, 3), (N, 4)) # actual window
const ρSiN = 3.44.g•cm⁻³
const lSiN = 300.nm                   # window thickness
let Al = Aluminium.init()             # aluminium coating
const ρAl = 2.7.g•cm⁻³
const lAl = 20.nm                     # coating thickness
With these numbers we can compute the transmission at an arbitrary energy, and thus we now have everything to compute the correct inputs for the calculation. We wish to compute the following: the intensity \(I(E)\), i.e. the flux that enters the detector,
\[ I(E) = f(E) · ε_{\text{LLNL}} · ε_{\ce{Si3N4}} · ε_{\ce{Al}} \]
where \(f(E)\) is the solar axion flux and the \(ε_i\) are the efficiencies associated with the telescope and transmission of the window. The idea is to sample from this intensity distribution to get a realistic set of X-rays as they would be experienced in the experiment. One technical aspect still to be done is an interpolation of the axion flux and LLNL telescope efficiency to evaluate the data at an arbitrary energy as to define a function that yields \(I(E)\).
Important note: We fully neglect here the conversion probability and area of the magnet bore. These (as well as a potential time component) are purely constants and do not affect the shape of the distribution \(I(E)\). We want to sample from it to get the correct weighting of the different energies, but do not care about absolute numbers. So differential fluxes are fine.
The idea is to define the interpolators and then create a procedure that captures the previously defined properties and interpolators.
from numericalnim import newLinear1D, eval
let axInterp = newLinear1D(dfAx["Energy", float].toSeq1D, dfAx["diffFlux", float].toSeq1D)
let llnlInterp = newLinear1D(dfLLNL["Energy[keV]", float].toSeq1D, dfLLNL["Efficiency", float].toSeq1D)
With the interpolators defined let's write the implementation for \(I(E)\):
proc I(E: keV): float =
  ## Compute the intensity of the axion flux after telescope & window eff.
  ##
  ## Axion flux and LLNL efficiency can be disabled by compiling with
  ## `-d:noAxionFlux` and `-d:noLLNL`, respectively.
  result = transmission(Si₃N₄, ρSiN, lSiN, E) * transmission(Al, ρAl, lAl, E)
  when not defined(noAxionFlux):
    result *= axInterp.eval(E.float)
  when not defined(noLLNL):
    result *= llnlInterp.eval(E.float)
Let's test it and see what we get for e.g. \(\SI{1}{keV}\):
echo I(1.keV)
yields \(\num{1.249e20}\). Not the most insightful, but it seems to work. Let's plot it:
let energies = linspace(0.01, 10.0, 1000).mapIt(it.keV)
let Is = energies.mapIt(I(it))
block PlotI:
  let df = toDf({ "E [keV]" : energies.mapIt(it.float),
                  "I" : Is })
  ggplot(df, aes("E [keV]", "I")) +
    geom_line() +
    ggtitle("Intensity entering the detector gas") +
    ggsave(OutputPath / "intensity_axion_conversion_point_simulation.pdf")
shown in fig. 20. It looks exactly as we would expect.
Now we define the sampler for the intensity distribution \(I(E)\), which returns an energy weighted by \(I(E)\):
let Isampler = sampler(
  (proc(x: float): float = I(x.keV)), # wrap `I(E)` to take `float`
  0.01, 10.0, num = 1000 # use 1000 points for EDF & sample in 0.01 to 10 keV
)
and define a random number generator:
import random
var rnd = initRand(0x42)
First we will sample 100,000 energies from the distribution to see if we recover the intensity plot from before.
block ISampled:
  const nmc = 100_000
  let df = toDf( {"E [keV]" : toSeq(0 ..< nmc).mapIt(rnd.sample(Isampler)) })
  ggplot(df, aes("E [keV]")) +
    geom_histogram(bins = 200, hdKind = hdOutline) +
    ggtitle("Energies sampled from I(E)") +
    ggsave(OutputPath / "energies_intensity_sampled.pdf")
This yields fig. 21, which clearly shows the sampling works as intended.
The final piece now is to use the same sampling logic to generate energies according to \(I(E)\), which correspond to X-rays of said energy entering the detector. For each of these energies we then sample from the Beer-Lambert law
\[ I(z) = I_0 \exp\left[ - \frac{z}{l_{\text{abs}} } \right] \] where \(I_0\) is some initial intensity and \(l_\text{abs}\) the absorption length. The absorption length is computed from the gas mixture properties of the gas used at CAST, namely Argon/Isobutane 97.7/2.3 at \(\SI{1050}{mbar}\). It is the inverse of the attenuation coefficient \(μ_M\)
\[ l_{\text{abs}} = \frac{1}{μ_M} \]
where the attenuation coefficient is computed via
\[ μ_M = \frac{N_A}{M} \, σ_A \]
with \(N_A\) Avogadro's constant, \(M\) the molar mass of the compound and \(σ_A\) the atomic absorption cross section. The latter again is defined by
\[ σ_A = 2 r_e λ f₂ \]
with \(r_e\) the classical electron radius, \(λ\) the wavelength of the X-ray and \(f₂\) the second scattering factor. Scattering factors are tabulated for different elements, for example by NIST and Henke. For a further discussion of this see the README and implementation of xrayAttenuation.

We will now go ahead and define the CAST gas mixture:
proc initCASTGasMixture(): GasMixture =
  ## Returns the CAST gas mixture:
  ## - Argon / Isobutane 97.7 / 2.3 %
  ## - 20°C (the exact temperature barely matters here)
  let arC = compound((Ar, 1)) # need Argon gas as a Compound
  let isobutane = compound((C, 4), (H, 10))
  # define the gas mixture
  result = initGasMixture(293.K, 1050.mbar, [(arC, 0.977), (isobutane, 0.023)])
let gm = initCASTGasMixture()
To sample from the Beer-Lambert law with a given absorption length we also define a helper that returns a sampler for the target energy using the definition of a normalized exponential distribution
\[ f_e(x, λ) = \frac{1}{λ} \exp \left[ -\frac{x}{λ} \right] \]
The sampling of the conversion point is the crucial aspect of this. Naively we might want to sample between the detector volume from 0 to \(\SI{3}{cm}\). However, this skews our result. Our calculation depends on the energy distribution of the incoming X-rays. If the absorption length is long enough the probability of reaching the readout plane and thus not being detected is significant. Restricting the sampler to \(\SI{3}{cm}\) would pretend that independent of absorption length we would always convert within the volume, giving too large a weight to the energies that should sometimes not be detected!
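Put differently, sampling without an upper bound and dropping every X-ray that passes the readout plane automatically weights each energy by its detection probability

\[ P_{\text{det}}(E) = 1 - \exp\left(-\frac{\SI{3}{cm}}{l_{\text{abs}}(E)}\right), \]

which is exactly the correction that was missing in the analytical estimate above.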
Let's define the sampler now. It takes the gas mixture and the target energy. A constant SampleTo is defined to adjust, at compile time, the position up to which we sample (to play around with different numbers).

proc generateSampler(gm: GasMixture, targetEnergy: keV): Sampler =
  ## Generate the exponential distribution to sample from based on the
  ## given absorption length
  # `xrayAttenuation` `absorptionLength` returns number in meter!
  let λ = absorptionLength(gm, targetEnergy).to(cm)
  let fnSample = (proc(x: float): float =
    result = expFn(x, λ.float) # expFn = 1/λ · exp(-x/λ)
  )
  const SampleTo {.intdefine.} = 20 ## `SampleTo` can be set via `-d:SampleTo=<int>`
  let num = (SampleTo.float / 3.0 * 1000).round.int # number of points to sample at
  result = sampler(fnSample, 0.0, SampleTo, num = num)
Note that this is inefficient, because we generate a new sampler from which we only sample a single point, namely the conversion point of that X-ray. If one intended to perform a more complex calculation or wanted to sample orders of magnitude more X-rays, one should either restructure the code (i.e. sample from known energies and then reorder based on the weight defined by \(I(E)\)) or cache the samplers and pre-bin the energies.
For reference let's compute the absorption length as a function of energy for the CAST gas mixture:
block GasAbs:
  let df = toDf({ "E [keV]" : linspace(0.03, 10.0, 1000),
                  "l_abs [cm]" : linspace(0.03, 10.0, 1000).mapIt(absorptionLength(gm, it.keV).m.to(cm).float) })
  ggplot(df, aes("E [keV]", "l_abs [cm]")) +
    geom_line() +
    ggtitle("Absorption length of X-rays in CAST gas mixture: " & $gm) +
    margin(top = 1.5) +
    ggsave(OutputPath / "cast_gas_absorption_length.pdf")
which yields fig. 22.
So, finally: let's write the MC sampling!
const nmc = 500_000 # number of MC samples
var Es = newSeqOfCap[keV](nmc)
var zs = newSeqOfCap[cm](nmc)
while zs.len < nmc:
  # 1. sample an energy according to `I(E)`
  let E = rnd.sample(Isampler).keV
  # 2. get the sampler for this energy
  let distSampler = generateSampler(gm, E)
  # 3. sample from it
  var z = Inf.cm
  when defined(Equiv3cmSampling):
    ## To get the same result as directly sampling
    ## only up to 3 cm use the following code
    while z > 3.0.cm:
      z = rnd.sample(distSampler).cm
  elif defined(UnboundedVolume):
    ## This branch pretends the detection volume
    ## is unbounded if we sample within 20cm
    z = rnd.sample(distSampler).cm
  else:
    ## This branch is the physically correct one. If an X-ray reaches the
    ## readout plane it is _not_ recorded, but it was still part of the
    ## incoming flux!
    z = rnd.sample(distSampler).cm
    if z > 3.0.cm: continue # just drop this X-ray
  zs.add z
  Es.add E
Great, now we have sampled the conversion points according to the correct intensity. We can now ask for statistics or create different plots (e.g. conversion point by energies etc.).
import stats, seqmath # mean, variance and percentile
let zsF = zs.mapIt(it.float) # for math
echo "Mean conversion position = ", zsF.mean().cm
echo "Median conversion position = ", zsF.percentile(50).cm
echo "Variance of conversion position = ", zsF.variance().cm
This prints the following:
Mean conversion position = 0.556813 cm
Median conversion position = 0.292802 cm
Variance of conversion position = 0.424726 cm
As we can see, (unfortunately) our initial assumption of a mean distance of \(\SI{1.22}{cm}\) is quite off the mark. The more realistic number is only \(\SI{0.56}{cm}\), and if we were to use the median it's only \(\SI{0.29}{cm}\).
Let's plot the conversion points of all sampled (and recorded!) X-rays as well as what their distribution against energy looks like.
let dfZ = toDf({ "E [keV]" : Es.mapIt(it.float),
                 "z [cm]" : zs.mapIt(it.float) })
ggplot(dfZ, aes("z [cm]")) +
  geom_histogram(bins = 200, hdKind = hdOutline) +
  ggtitle("Conversion points of all sampled X-rays according to I(E)") +
  ggsave(OutputPath / "sampled_axion_conversion_points.pdf")
ggplot(dfZ, aes("E [keV]", "z [cm]")) +
  geom_point(size = 1.0, alpha = 0.2) +
  ggtitle("Conversion points of all sampled X-rays according to I(E) against their energy") +
  ggsave(OutputPath / "sampled_axion_conversion_points_vs_energy.png",
         width = 1200, height = 800)
The former is shown in fig. 23. The overlapping exponential distribution is obvious, as one would expect. The same data is shown in fig. 24, but in this case not as a histogram, but by their energy as a scatter plot. We can clearly see the impact of the absorption length on the conversion points for each energy!
- Compiling and running the code
The code above is written in literate programming style. To compile and run it we use ntangle to extract it from the Org file:

ntangle <file>
which generates ./../../../../tmp/sample_axion_xrays_conversion_points.nim.
Compiling and running it can be done via:
nim r -d:danger /tmp/sample_axion_xrays_conversion_points.nim
which compiles and runs it as an optimized build.
We have the following compilation flags to compute different cases:
- -d:noLLNL : do not include the LLNL efficiency in the input intensity
- -d:noAxionFlux : do not include the axion flux in the input intensity
- -d:SampleTo=<int> : change the position up to which we sample (e.g. only to 3 cm)
- -d:UnboundedVolume : used together with the default SampleTo (or any large value) this effectively computes the case of an unbounded detection volume (i.e. every X-ray is recorded with 100% certainty)
- -d:Equiv3cmSampling : running with the default SampleTo (or any large value) this effectively restricts the sampling to a maximum of \(\SI{3}{cm}\). This can be used as a good cross-check to verify the sampling behavior is independent of the sampling range.
Configurations of note:
nim r -d:danger -d:noAxionFlux /tmp/sample_axion_xrays_conversion_points.nim
\(⇒\) realistic case for a flat input spectrum. Yields:
Mean conversion position = 0.712102 cm
Median conversion position = 0.445233 cm
Variance of conversion position = 0.528094 cm
nim r -d:danger -d:noAxionFlux -d:UnboundedVolume /tmp/sample_axion_xrays_conversion_points.nim
\(⇒\) the closest analogue to the analytical calculation from section 3.3.1.1 (apart from the inclusion of isobutane here). Yields:
Mean conversion position = 1.25789 cm
Median conversion position = 0.560379 cm
Variance of conversion position = 3.63818 cm
nim r -d:danger /tmp/sample_axion_xrays_conversion_points.nim
\(⇒\) the case we most care about and whose numbers are quoted in the text above.
- Absorption edge in data
Question:
- [X] Can we see the absorption edge of Argon in our data? E.g. in the transverse RMS of the CDL data? In theory we should see a huge jump in the transverse extent (and cluster size) of the clusters above and below that point. MAYBE this could also relate to the strong cutoff we see in our background rate at \(\SI{3}{keV}\), due to the efficiency of our cuts changing significantly there?
If my "theory" is correct it would mean that the transverse RMS should be significantly different if I cut to the energy for e.g. the photo peak and escape peak?
Update: As explained in multiple places since the above TODOs were written: it's not as straightforward, because the exponential distribution still implies that a large fraction of events convert close to the cathode. The result is a smoothed-out distribution of the RMS data, making the difference between escape and photo peak, for example, not as extreme as one might imagine. See the simulations below and the related FADC rise time simulations for more insight.
3.3.2. Simulating longitudinal and transverse cluster sizes using MC
Sample from distribution:
import std / [random, sequtils, algorithm]
import seqmath, ggplotnim

template toEDF(data: seq[float], isCumSum = false): untyped =
  ## Computes the EDF of binned data
  var dataCdf = data
  if not isCumSum:
    seqmath.cumsum(dataCdf)
  let integral = dataCdf[^1]
  let baseline = min(data) # 0.0
  dataCdf.mapIt((it - baseline) / (integral - baseline))

proc sample(cdf: seq[float], ys: seq[float]): float =
  let point = rand(1.0)
  let idx = cdf.lowerBound(point)
  if idx < cdf.len:
    result = ys[idx]
  else:
    result = Inf

proc expFn(x: float, λ: float): float =
  result = 1.0 / λ * exp(- x / λ)

const Upper = 3.0
const λ = 2.0
let xs = linspace(0.0, Upper, 1000)
let ys = xs.mapIt(expFn(it, λ))
# now sample 1,000,000 points
let cdf = ys.toEdf()
let ySampled = toSeq(0 ..< 1_000_000).mapIt(sample(cdf, xs))
let dfS = toDf(ySampled)
ggplot(toDf(xs, cdf), aes("xs", "cdf")) +
  geom_line() +
  ggsave("/t/test_cdf.pdf")
echo dfS
# rescale according to normalization of the range we use
# normalize by y = y / (∫_Lower^Upper f(x) dx) =
# Lower = 0, Upper = 3.0 (`Upper`)
# y = y / (∫_0^Upper 1/λ exp(-x/λ) dx = y / [ ( -exp(-x/λ) )|^Upper_0 ]
# y = y / [ (-exp(-Upper/λ) - (-exp(-Lower/λ) ) ]
# y = y / [ (-exp(-3.0/λ)) + 1 ]
#                            ^--- 1 = exp(0)
let df = toDf(xs, ys)
  .mutate(f{"ys" ~ `ys` / (-exp(-Upper / λ) + 1.0)})
ggplot(df, aes("xs")) +
  geom_line(aes = aes(y = "ys")) +
  geom_histogram(data = dfS, aes = aes(x = "ySampled"), bins = 100,
                 density = true, alpha = 0.5, hdKind = hdOutline,
                 fillColor = "red") +
  ggsave("/t/test_sample.pdf")
The below is also in: ./../../CastData/ExternCode/TimepixAnalysis/NimUtil/helpers/sampling_helper.nim
import std / [random, sequtils, algorithm]
import seqmath, ggplotnim

template toEDF*(data: seq[float], isCumSum = false): untyped =
  ## Computes the EDF of binned data
  var dataCdf = data
  if not isCumSum:
    seqmath.cumsum(dataCdf)
  let integral = dataCdf[^1]
  ## XXX: why min?
  let baseline = min(data) # 0.0
  dataCdf.mapIt((it - baseline) / (integral - baseline))

proc sample*(cdf: seq[float], ys: seq[float]): float =
  let point = rand(1.0)
  let idx = cdf.lowerBound(point)
  if idx < cdf.len:
    result = ys[idx]
  else:
    result = Inf

proc expFn(x: float, λ: float): float =
  result = 1.0 / λ * exp(- x / λ)

proc sampleFrom*(fn: proc(x: float): float, low, high: float,
                 num = 1000, samples = 1_000_000): seq[float] =
  ## Note: it may be useful to hand a closure with wrapped arguments!
  let xs = linspace(low, high, num)
  let ys = xs.mapIt( fn(it) )
  # now sample `samples` points
  let cdf = ys.toEdf()
  result = toSeq(0 ..< samples).mapIt(sample(cdf, xs))

when isMainModule:
  ## Mini test: Compare with plot output from /tmp/test_sample.nim!
  let λ = 2.0
  let fnSample = (proc(x: float): float =
    result = expFn(x, λ)
  )
  let ySampled = sampleFrom(fnSample, 0.0, 3.0)
  let ySampled2 = sampleFrom(fnSample, 0.0, 10.0)
  proc toHisto(xs: seq[float]): DataFrame =
    const binSize = 0.1
    let binNum = ((xs.max - xs.min) / binSize).round.int
    let (hist, bins) = histogram(xs, binNum)
    let maxH = hist.max
    result = toDf({"x" : bins[0 ..< ^2], "y" : hist.mapIt(it / maxH)})
  let dfC = bind_rows([("1", ySampled.toHisto()), ("2", ySampled2.toHisto())], "val")
  ggplot(dfC, aes("x", "y", fill = "val")) +
    #geom_histogram(bins = 100, density = true, alpha = 0.5, hdKind = hdOutline, fillColor = "red") +
    geom_histogram(bins = 100, alpha = 0.5, hdKind = hdOutline,
                   stat = "identity", position = "identity") +
    ggsave("/t/test_sample_from.pdf")
Now we use that to sample from our exponential to determine typical conversion points of X-rays. The exponential decay according to the Beer-Lambert (attenuation) law plays the role of the decay likelihood.

Effectively it's the same as radioactive decay: at each distance step in the medium there is a Poisson process depending on the photons still present.

So the idea is to MC \(N\) photons that enter at the cathode. At each step \(Δx\) we sample the Poisson process to decide whether the photon is absorbed. If it survives, we continue. If not, its position is added to our decay (or in this case photoelectron origin) positions.

The result of that is of course precisely the exponential distribution again! This means we can use the exponential distribution directly as the starting point for our sampling of the diffusion for each event: we sample from the exponential to get a position where the particle converted, then based on that position we compute a target size by drawing from a normal distribution centered around the longitudinal / transverse diffusion coefficients, as these represent the 1σ sizes of the diffusion. In effect what we are computing is the exponential distribution of our data folded with a normal distribution. In theory we could just compute that directly.
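A minimal sketch of that step-wise picture (step size and \(λ\) chosen arbitrarily by me), confirming that it reproduces an exponential with mean \(λ\):

import std / [random, stats]
# absorb a photon in each step of Δx with probability Δx/λ; the resulting
# conversion depths follow an exponential distribution with mean λ
var rnd = initRand(42)
const λ = 2.0    # cm, absorption length
const Δx = 0.001 # cm, step size (Δx ≪ λ)
var depths = newSeq[float]()
for _ in 0 ..< 100_000:
  var x = 0.0
  while rnd.rand(1.0) > Δx / λ: # photon survives this step
    x += Δx
  depths.add x
echo "mean conversion depth = ", depths.mean(), " cm (expect ≈ ", λ, " cm)"

Incidentally, for a fixed linear relation between conversion depth and RMS the fold of an exponential with a normal distribution even has a closed form (the exponentially modified Gaussian); our mapping \(z ↦ σ_T \sqrt{3\,\si{cm} - z}\) is nonlinear though, so sampling is the simpler route.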
import std / [random, sequtils, algorithm, strformat]
import seqmath, ggplotnim, unchained
import /tmp/sampling_helper

proc expFn(x: float, λ: float): float =
  result = 1.0 / λ * exp(- x / λ)

proc main(λ: float) =
  let σT = 640.0 # μm·√cm
  let fnSample = (proc(x: float): float =
    result = expFn(x, λ)
  )
  proc rmsTrans(x: float): float =
    let toDrift = (3.0 - x)
    result = sqrt(toDrift) * σT
  # sample from our exponential distribution describing absorption
  let ySampled = sampleFrom(fnSample, 0.0, 3.0)
  # now compute the long and trans RMS for each
  let yRmsTrans = ySampled.mapIt(rmsTrans(it))
  ggplot(toDf(yRmsTrans), aes("yRmsTrans")) +
    geom_histogram(bins = 100, density = true, alpha = 0.5,
                   hdKind = hdOutline, fillColor = "red") +
    ggsave(&"/t/sample_transverse_rms_{λ}_cm_absorption_length.pdf")
  #let sampleTransFn = (proc(x: float): float =
  #  result = gaus(x = x, mean = σT,

when isMainModule:
  import cligen
  dispatch main
The above already produces quite decent results in terms of the transverse RMS for known absorption lengths!
basti at voidRipper in /t λ ./simulate_rms_transverse_simple --λ 3.0 # below Ar absorption edge
basti at voidRipper in /t λ ./simulate_rms_transverse_simple --λ 2.2 # 5.9 keV
basti at voidRipper in /t λ ./simulate_rms_transverse_simple --λ 0.5 # 3.x keV above Ar absorption edge
yields:
These need to be compared to equivalent plots from CAST / CDL data.
- CAST 5.9 keV (Photo):
- CAST 3.0 keV (Escape):
- CDL C-EPIC-0.6 (~250 eV, extremely low λ):
- CDL Ag-Ag-6kV (3 keV, λ > 3cm):
- CDL Ti-Ti-9kV (4.5 keV, λ ~ 1cm):
- CDL Mn-Cr-12kV (5.9 keV, λ ~ 2.2cm):
For all the plots:
cd /tmp/
mkdir RmsTransversePlots && cd RmsTransversePlots
For the CAST plots:
plotData --h5file ~/CastData/data/CalibrationRuns2017_Reco.h5 \ --runType rtCalibration \ --chips 3 \ --config ~/CastData/ExternCode/TimepixAnalysis/Plotting/karaPlot/config.toml \ --ingrid \ --cuts '("rmsTransverse", 0.1, 1.5)' \ --cuts '("energyFromCharge", 2.5, 3.2)' \ --applyAllCuts \ --region crSilver
plotData --h5file ~/CastData/data/CalibrationRuns2017_Reco.h5 \ --runType rtCalibration \ --chips 3 \ --config ~/CastData/ExternCode/TimepixAnalysis/Plotting/karaPlot/config.toml \ --ingrid \ --cuts '("rmsTransverse", 0.1, 1.5)' \ --cuts '("energyFromCharge", 5.5, 6.5)' \ --applyAllCuts \ --region crSilver
For the CDL plots:
cdl_spectrum_creation -i ~/CastData/data/CDL_2019/CDL_2019_Reco.h5 --dumpAccurate --hideNloptFit
yields the plots initially in /tmp/RmsTransversePlots/out/CDL_2019_Raw_<SomeDate>.
The main takeaway from these plots: especially for the cases with longer absorption lengths the shape actually matches quite nicely already! Of course the hard cutoff in the simulation is not present in the real data, which makes sense (we use the same transverse value dependent only on the height; max height = max value). However, for the C-EPIC & the 0.5 cm absorption length cases the differences are quite big, likely because the diffusion is not actually fixed, but itself follows some kind of normal distribution around the mean value. The latter at least is what we will implement now, using the width of the somewhat gaussian distribution of the C-EPIC 0.6kV data as a reference.
The next code snippet does exactly that, it adds sampling from a normal distribution with mean of the transverse diffusion and a width described roughly by the width from the C-EPIC 0.6kV data above so that each sample is spread somewhat.
import std / [random, sequtils, algorithm, strformat]
import seqmath, ggplotnim, unchained
import /tmp/sampling_helper

proc expFn(x: float, λ: float): float =
  result = 1.0 / λ * exp(- x / λ)

import random / mersenne
import alea / [core, rng, gauss]

proc main(E = 5.9, λ = 0.0) =
  ## Introduce sampling of a gaussian around σT with something like this
  ## which is ~150 = 1σ for a √3cm drift (seen in C-EPIC 0.6 kV CDL line
  ## rmsTransverse data)
  ## Note: another number we have for ΔσT is of course the simulation error
  ## on σT, but I suspect that's not a good idea (also it's large, but still
  ## much smaller than this).
  let ΔσT = 86.0 # / 2.0
  ## XXX: Implement calculation of absorption length from `xrayAttenuation`
  # let dfAbs =
  ## XXX: Implement extraction of diffusion values from data:
  let dfGas = readCsv("/home/basti/org/resources/ar_iso_97_7_2_3_septemboard_cast_different_temps.csv")
  let σT = 640.0 # μm/√cm
  let fnSample = (proc(x: float): float =
    result = expFn(x, λ)
  )
  var rnd = wrap(initMersenneTwister(1337))
  var gaus = gaussian(0.0, 1.0) # we will modify this gaussian for every draw!
  proc rmsTrans(x: float): float =
    let toDrift = (3.0 - x)
    # adjust the gaussian to Diffusion = σ_T · √(drift distance)
    # and width of Sigma = ΔσT · √(drift distance) (at 3 cm we want Δ of 150)
    gaus.mu = sqrt(toDrift) * σT
    gaus.sigma = ΔσT * sqrt(toDrift)
    #echo "DRAWING AROUND: ", gaus.mu, " WITH SIGMA: ", gaus.sigma
    result = rnd.sample(gaus)
  # sample from our exponential distribution describing absorption
  let ySampled = sampleFrom(fnSample, 0.0, 3.0)
  # now compute the long and trans RMS for each
  let yRmsTrans = ySampled.mapIt(rmsTrans(it))
  let
    GoldenMean = (sqrt(5.0) - 1.0) / 2.0 # Aesthetic ratio
    FigWidth = 1200.0                    # width in pixels
    FigHeight = FigWidth * GoldenMean    # height in pixels
  ggplot(toDf(yRmsTrans), aes("yRmsTrans")) +
    geom_histogram(bins = 100, density = true, alpha = 0.5,
                   hdKind = hdOutline, fillColor = "red") +
    ggsave(&"/t/sample_gauss_transverse_rms_{λ}_cm_absorption_length.pdf",
           width = FigWidth, height = FigHeight)

when isMainModule:
  import cligen
  dispatch main
Let's generate the same cases we already generated with the simple version before:
basti at voidRipper in /t λ ./simulate_rms_transverse_gauss --λ 3.0
basti at voidRipper in /t λ ./simulate_rms_transverse_gauss --λ 2.2
basti at voidRipper in /t λ ./simulate_rms_transverse_gauss --λ 2.0
basti at voidRipper in /t λ ./simulate_rms_transverse_gauss --λ 1.0
basti at voidRipper in /t λ ./simulate_rms_transverse_gauss --λ 0.5
basti at voidRipper in /t λ ./simulate_rms_transverse_gauss --λ 0.1
First of all we can see that the 0.1 and 0.5 cm absorption length case are almost fully gaussian. The other cases have the typical asymmetric shape we expect.
Let's generate raw CDL plots (from plotData with minimal cuts, especially no rmsTransverse cut):
For C-EPIC-0.6kV (~250 eV, extremely low λ)
plotData --h5file ~/CastData/data/CDL_2019/CDL_2019_Reco.h5 \ --runType rtCalibration \ --chips 3 \ --config ~/CastData/ExternCode/TimepixAnalysis/Plotting/karaPlot/config.toml \ --ingrid \ --cuts '("rmsTransverse", 0.1, 1.5)' \ --applyAllCuts \ --runs 342 --runs 343 \ --region crSilver
For Ag-Ag-6kV (3 keV, λ > 3cm):
plotData --h5file ~/CastData/data/CDL_2019/CDL_2019_Reco.h5 \ --runType rtCalibration \ --chips 3 \ --config ~/CastData/ExternCode/TimepixAnalysis/Plotting/karaPlot/config.toml \ --ingrid \ --cuts '("rmsTransverse", 0.1, 1.5)' \ --applyAllCuts \ --runs 328 --runs 329 --runs 351 \ --region crSilver
For Ti-Ti-9kV (4.5 keV, λ ~ 1cm):
plotData --h5file ~/CastData/data/CDL_2019/CDL_2019_Reco.h5 \ --runType rtCalibration \ --chips 3 \ --config ~/CastData/ExternCode/TimepixAnalysis/Plotting/karaPlot/config.toml \ --ingrid \ --cuts '("rmsTransverse", 0.1, 1.5)' \ --applyAllCuts \ --runs 328 --runs 329 --runs 351 \ --region crSilver
which are:
where we see that the very short absorption length case of C-EPIC 0.6kV is indeed also almost gaussian and even has a long tail to the right side. As a matter of fact it seems like it even has a skewness towards the right instead of the left. However, that is likely due to double hits etc. in the data, which we did not filter out in this version (compare with the cdl_spectrum_creation version above).
So to summarize, comparing these 'raw' plots against our simulation: especially the higher absorption length plots actually fit quite nicely, considering the simplicity of our simulation and the fact that the width of the gaussian we smear with is pretty much just guessed. The differences that remain are very likely due to all sorts of other effects that influence the size of the clusters and how our detector resolves them, beyond simply assuming the diffusion coefficient is different. This is more of an "effective theory" for the problem: it incorporates the real variances that occur at a fixed transverse diffusion by merging them into a variance on the diffusion itself, which is clearly lacking as a method.
Anything else to do here?
- [ ] could simulate the same for the longitudinal case
- [ ] could simulate expected rise times based on longitudinal data
3.4. Polarization of X-rays and relation to axions
See the discussion in ./../void_settings.html.
4. General information
4.1. X-ray fluorescence lines
X-Ray Data Booklet Table 1-2. Photon energies, in electron volts, of principal K-, L-, and M-shell emission lines. (from: https://xdb.lbl.gov/Section1/Table_1-2.pdf)
Z | Element | Kα1 | Kα2 | Kβ1 | Lα1 | Lα2 | Lβ1 | Lβ2 | Lγ1 | Mα1 |
---|---|---|---|---|---|---|---|---|---|---|
3 | Li | 54.3 | ||||||||
4 | Be | 108.5 | ||||||||
5 | B | 183.3 | ||||||||
6 | C | 277 | ||||||||
7 | N | 392.4 | ||||||||
8 | O | 524.9 | ||||||||
9 | F | 676.8 | ||||||||
10 | Ne | 848.6 | 848.6 | |||||||
11 | Na | 1,040.98 | 1,040.98 | 1,071.1 | ||||||
12 | Mg | 1,253.60 | 1,253.60 | 1,302.2 | ||||||
13 | Al | 1,486.70 | 1,486.27 | 1,557.45 | ||||||
14 | Si | 1,739.98 | 1,739.38 | 1,835.94 | ||||||
15 | P | 2,013.7 | 2,012.7 | 2,139.1 | ||||||
16 | S | 2,307.84 | 2,306.64 | 2,464.04 | ||||||
17 | Cl | 2,622.39 | 2,620.78 | 2,815.6 | ||||||
18 | Ar | 2,957.70 | 2,955.63 | 3,190.5 | ||||||
19 | K | 3,313.8 | 3,311.1 | 3,589.6 | ||||||
20 | Ca | 3,691.68 | 3,688.09 | 4,012.7 | 341.3 | 341.3 | 344.9 | |||
21 | Sc | 4,090.6 | 4,086.1 | 4,460.5 | 395.4 | 395.4 | 399.6 | |||
22 | Ti | 4,510.84 | 4,504.86 | 4,931.81 | 452.2 | 452.2 | 458.4 | |||
23 | V | 4,952.20 | 4,944.64 | 5,427.29 | 511.3 | 511.3 | 519.2 | |||
24 | Cr | 5,414.72 | 5,405.509 | 5,946.71 | 572.8 | 572.8 | 582.8 | |||
25 | Mn | 5,898.75 | 5,887.65 | 6,490.45 | 637.4 | 637.4 | 648.8 | |||
26 | Fe | 6,403.84 | 6,390.84 | 7,057.98 | 705.0 | 705.0 | 718.5 | |||
27 | Co | 6,930.32 | 6,915.30 | 7,649.43 | 776.2 | 776.2 | 791.4 | |||
28 | Ni | 7,478.15 | 7,460.89 | 8,264.66 | 851.5 | 851.5 | 868.8 | |||
29 | Cu | 8,047.78 | 8,027.83 | 8,905.29 | 929.7 | 929.7 | 949.8 | |||
30 | Zn | 8,638.86 | 8,615.78 | 9,572.0 | 1,011.7 | 1,011.7 | 1,034.7 | |||
31 | Ga | 9,251.74 | 9,224.82 | 10,264.2 | 1,097.92 | 1,097.92 | 1,124.8 | |||
32 | Ge | 9,886.42 | 9,855.32 | 10,982.1 | 1,188.00 | 1,188.00 | 1,218.5 | |||
33 | As | 10,543.72 | 10,507.99 | 11,726.2 | 1,282.0 | 1,282.0 | 1,317.0 | |||
34 | Se | 11,222.4 | 11,181.4 | 12,495.9 | 1,379.10 | 1,379.10 | 1,419.23 | |||
35 | Br | 11,924.2 | 11,877.6 | 13,291.4 | 1,480.43 | 1,480.43 | 1,525.90 | |||
36 | Kr | 12,649 | 12,598 | 14,112 | 1,586.0 | 1,586.0 | 1,636.6 | |||
37 | Rb | 13,395.3 | 13,335.8 | 14,961.3 | 1,694.13 | 1,692.56 | 1,752.17 | |||
38 | Sr | 14,165 | 14,097.9 | 15,835.7 | 1,806.56 | 1,804.74 | 1,871.72 | |||
39 | Y | 14,958.4 | 14,882.9 | 16,737.8 | 1,922.56 | 1,920.47 | 1,995.84 | |||
40 | Zr | 15,775.1 | 15,690.9 | 17,667.8 | 2,042.36 | 2,039.9 | 2,124.4 | 2,219.4 | 2,302.7 | |
41 | Nb | 16,615.1 | 16,521.0 | 18,622.5 | 2,165.89 | 2,163.0 | 2,257.4 | 2,367.0 | 2,461.8 | |
42 | Mo | 17,479.34 | 17,374.3 | 19,608.3 | 2,293.16 | 2,289.85 | 2,394.81 | 2,518.3 | 2,623.5 | |
43 | Tc | 18,367.1 | 18,250.8 | 20,619 | 2,424 | 2,420 | 2,538 | 2,674 | 2,792 | |
44 | Ru | 19,279.2 | 19,150.4 | 21,656.8 | 2,558.55 | 2,554.31 | 2,683.23 | 2,836.0 | 2,964.5 | |
45 | Rh | 20,216.1 | 20,073.7 | 22,723.6 | 2,696.74 | 2,692.05 | 2,834.41 | 3,001.3 | 3,143.8 | |
46 | Pd | 21,177.1 | 21,020.1 | 23,818.7 | 2,838.61 | 2,833.29 | 2,990.22 | 3,171.79 | 3,328.7 | |
47 | Ag | 22,162.92 | 21,990.3 | 24,942.4 | 2,984.31 | 2,978.21 | 3,150.94 | 3,347.81 | 3,519.59 | |
48 | Cd | 23,173.6 | 22,984.1 | 26,095.5 | 3,133.73 | 3,126.91 | 3,316.57 | 3,528.12 | 3,716.86 | |
49 | In | 24,209.7 | 24,002.0 | 27,275.9 | 3,286.94 | 3,279.29 | 3,487.21 | 3,713.81 | 3,920.81 | |
50 | Sn | 25,271.3 | 25,044.0 | 28,486.0 | 3,443.98 | 3,435.42 | 3,662.80 | 3,904.86 | 4,131.12 | |
51 | Sb | 26,359.1 | 26,110.8 | 29,725.6 | 3,604.72 | 3,595.32 | 3,843.57 | 4,100.78 | 4,347.79 | |
52 | Te | 27,472.3 | 27,201.7 | 30,995.7 | 3,769.33 | 3,758.8 | 4,029.58 | 4,301.7 | 4,570.9 | |
53 | I | 28,612.0 | 28,317.2 | 32,294.7 | 3,937.65 | 3,926.04 | 4,220.72 | 4,507.5 | 4,800.9 | |
54 | Xe | 29,779 | 29,458 | 33,624 | 4,109.9 | — | — | — | — | |
55 | Cs | 30,972.8 | 30,625.1 | 34,986.9 | 4,286.5 | 4,272.2 | 4,619.8 | 4,935.9 | 5,280.4 | |
56 | Ba | 32,193.6 | 31,817.1 | 36,378.2 | 4,466.26 | 4,450.90 | 4,827.53 | 5,156.5 | 5,531.1 | |
57 | La | 33,441.8 | 33,034.1 | 37,801.0 | 4,650.97 | 4,634.23 | 5,042.1 | 5,383.5 | 5,788.5 | 833 |
58 | Ce | 34,719.7 | 34,278.9 | 39,257.3 | 4,840.2 | 4,823.0 | 5,262.2 | 5,613.4 | 6,052 | 883 |
59 | Pr | 36,026.3 | 35,550.2 | 40,748.2 | 5,033.7 | 5,013.5 | 5,488.9 | 5,850 | 6,322.1 | 929 |
60 | Nd | 37,361.0 | 36,847.4 | 42,271.3 | 5,230.4 | 5,207.7 | 5,721.6 | 6,089.4 | 6,602.1 | 978 |
61 | Pm | 38,724.7 | 38,171.2 | 43,826 | 5,432.5 | 5,407.8 | 5,961 | 6,339 | 6,892 | — |
62 | Sm | 40,118.1 | 39,522.4 | 45,413 | 5,636.1 | 5,609.0 | 6,205.1 | 6,586 | 7,178 | 1,081 |
63 | Eu | 41,542.2 | 40,901.9 | 47,037.9 | 5,845.7 | 5,816.6 | 6,456.4 | 6,843.2 | 7,480.3 | 1,131 |
64 | Gd | 42,996.2 | 42,308.9 | 48,697 | 6,057.2 | 6,025.0 | 6,713.2 | 7,102.8 | 7,785.8 | 1,185 |
65 | Tb | 44,481.6 | 43,744.1 | 50,382 | 6,272.8 | 6,238.0 | 6,978 | 7,366.7 | 8,102 | 1,240 |
66 | Dy | 45,998.4 | 45,207.8 | 52,119 | 6,495.2 | 6,457.7 | 7,247.7 | 7,635.7 | 8,418.8 | 1,293 |
67 | Ho | 47,546.7 | 46,699.7 | 53,877 | 6,719.8 | 6,679.5 | 7,525.3 | 7,911 | 8,747 | 1,348 |
68 | Er | 49,127.7 | 48,221.1 | 55,681 | 6,948.7 | 6,905.0 | 7,810.9 | 8,189.0 | 9,089 | 1,406 |
69 | Tm | 50,741.6 | 49,772.6 | 57,517 | 7,179.9 | 7,133.1 | 8,101 | 8,468 | 9,426 | 1,462 |
70 | Yb | 52,388.9 | 51,354.0 | 59,370 | 7,415.6 | 7,367.3 | 8,401.8 | 8,758.8 | 9,780.1 | 1,521.4 |
71 | Lu | 54,069.8 | 52,965.0 | 61,283 | 7,655.5 | 7,604.9 | 8,709.0 | 9,048.9 | 10,143.4 | 1,581.3 |
72 | Hf | 55,790.2 | 54,611.4 | 63,234 | 7,899.0 | 7,844.6 | 9,022.7 | 9,347.3 | 10,515.8 | 1,644.6 |
73 | Ta | 57,532 | 56,277 | 65,223 | 8,146.1 | 8,087.9 | 9,343.1 | 9,651.8 | 10,895.2 | 1,710 |
74 | W | 59,318.24 | 57,981.7 | 67,244.3 | 8,397.6 | 8,335.2 | 9,672.35 | 9,961.5 | 11,285.9 | 1,775.4 |
75 | Re | 61,140.3 | 59,717.9 | 69,310 | 8,652.5 | 8,586.2 | 10,010.0 | 10,275.2 | 11,685.4 | 1,842.5 |
76 | Os | 63,000.5 | 61,486.7 | 71,413 | 8,911.7 | 8,841.0 | 10,355.3 | 10,598.5 | 12,095.3 | 1,910.2 |
77 | Ir | 64,895.6 | 63,286.7 | 73,560.8 | 9,175.1 | 9,099.5 | 10,708.3 | 10,920.3 | 12,512.6 | 1,979.9 |
78 | Pt | 66,832 | 65,112 | 75,748 | 9,442.3 | 9,361.8 | 11,070.7 | 11,250.5 | 12,942.0 | 2,050.5 |
79 | Au | 68,803.7 | 66,989.5 | 77,984 | 9,713.3 | 9,628.0 | 11,442.3 | 11,584.7 | 13,381.7 | 2,122.9 |
80 | Hg | 70,819 | 68,895 | 80,253 | 9,988.8 | 9,897.6 | 11,822.6 | 11,924.1 | 13,830.1 | 2,195.3 |
81 | Tl | 72,871.5 | 70,831.9 | 82,576 | 10,268.5 | 10,172.8 | 12,213.3 | 12,271.5 | 14,291.5 | 2,270.6 |
82 | Pb | 74,969.4 | 72,804.2 | 84,936 | 10,551.5 | 10,449.5 | 12,613.7 | 12,622.6 | 14,764.4 | 2,345.5 |
83 | Bi | 77,107.9 | 74,814.8 | 87,343 | 10,838.8 | 10,730.91 | 13,023.5 | 12,979.9 | 15,247.7 | 2,422.6 |
84 | Po | 79,290 | 76,862 | 89,800 | 11,130.8 | 11,015.8 | 13,447 | 13,340.4 | 15,744 | — |
85 | At | 81,520 | 78,950 | 92,300 | 11,426.8 | 11,304.8 | 13,876 | — | 16,251 | — |
86 | Rn | 83,780 | 81,070 | 94,870 | 11,727.0 | 11,597.9 | 14,316 | — | 16,770 | — |
87 | Fr | 86,100 | 83,230 | 97,470 | 12,031.3 | 11,895.0 | 14,770 | 14,450 | 17,303 | — |
88 | Ra | 88,470 | 85,430 | 100,130 | 12,339.7 | 12,196.2 | 15,235.8 | 14,841.4 | 17,849 | — |
89 | Ac | 90,884 | 87,670 | 102,850 | 12,652.0 | 12,500.8 | 15,713 | — | 18,408 | — |
90 | Th | 93,350 | 89,953 | 105,609 | 12,968.7 | 12,809.6 | 16,202.2 | 15,623.7 | 18,982.5 | 2,996.1 |
91 | Pa | 95,868 | 92,287 | 108,427 | 13,290.7 | 13,122.2 | 16,702 | 16,024 | 19,568 | 3,082.3 |
92 | U | 98,439 | 94,665 | 111,300 | 13,614.7 | 13,438.8 | 17,220.0 | 16,428.3 | 20,167.1 | 3,170.8 |
93 | Np | — | — | — | 13,944.1 | 13,759.7 | 17,750.2 | 16,840.0 | 20,784.8 | — |
94 | Pu | — | — | — | 14,278.6 | 14,084.2 | 18,293.7 | 17,255.3 | 21,417.3 | — |
95 | Am | — | — | — | 14,617.2 | 14,411.9 | 18,852.0 | 17,676.5 | 22,065.2 | — |
4.2. Atomic binding energies
X-Ray Data Booklet Table 1-1. Electron binding energies, in electron volts, for the elements in their natural forms. https://xdb.lbl.gov/Section1/Table_1-1.pdf
Z | Element | K 1s | L1 2s | L2 2p1/2 | L3 2p3/2 | M1 3s | M2 3p1/2 | M3 3p3/2 | M4 3d3/2 | M5 3d5/2 | N1 4s | N2 4p1/2 | N3 4p3/2 |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
1 | H | 13.6 | |||||||||||
2 | He | 24.6* | |||||||||||
3 | Li | 54.7* | |||||||||||
4 | Be | 111.5* | |||||||||||
5 | B | 188* | |||||||||||
6 | C | 284.2* | |||||||||||
7 | N | 409.9* | 37.3* | ||||||||||
8 | O | 543.1* | 41.6* | ||||||||||
9 | F | 696.7* | |||||||||||
10 | Ne | 870.2* | 48.5* | 21.7* | 21.6* | ||||||||
11 | Na | 1070.8† | 63.5† | 30.65 | 30.81 | ||||||||
12 | Mg | 1303.0† | 88.7 | 49.78 | 49.50 | ||||||||
13 | Al | 1559.6 | 117.8 | 72.95 | 72.55 | ||||||||
14 | Si | 1839 | 149.7*b | 99.82 | 99.42 | ||||||||
15 | P | 2145.5 | 189* | 136* | 135* | ||||||||
16 | S | 2472 | 230.9 | 163.6* | 162.5* | ||||||||
17 | Cl | 2822.4 | 270* | 202* | 200* | ||||||||
18 | Ar | 3205.9* | 326.3* | 250.6† | 248.4* | 29.3* | 15.9* | 15.7* | |||||
19 | K | 3608.4* | 378.6* | 297.3* | 294.6* | 34.8* | 18.3* | 18.3* | |||||
20 | Ca | 4038.5* | 438.4† | 349.7† | 346.2† | 44.3† | 25.4† | 25.4† |
21 | Sc | 4492 | 498.0* | 403.6* | 398.7* | 51.1* | 28.3* | 28.3* | |||||
22 | Ti | 4966 | 560.9† | 460.2† | 453.8† | 58.7† | 32.6† | 32.6† | |||||
23 | V | 5465 | 626.7† | 519.8† | 512.1† | 66.3† | 37.2† | 37.2† | |||||
24 | Cr | 5989 | 696.0† | 583.8† | 574.1† | 74.1† | 42.2† | 42.2† | |||||
25 | Mn | 6539 | 769.1† | 649.9† | 638.7† | 82.3† | 47.2† | 47.2† | |||||
26 | Fe | 7112 | 844.6† | 719.9† | 706.8† | 91.3† | 52.7† | 52.7† | |||||
27 | Co | 7709 | 925.1† | 793.2† | 778.1† | 101.0† | 58.9† | 59.9† | |||||
28 | Ni | 8333 | 1008.6† | 870.0† | 852.7† | 110.8† | 68.0† | 66.2† | |||||
29 | Cu | 8979 | 1096.7† | 952.3† | 932.7 | 122.5† | 77.3† | 75.1† | |||||
30 | Zn | 9659 | 1196.2* | 1044.9* | 1021.8* | 139.8* | 91.4* | 88.6* | 10.2* | 10.1* | |||
31 | Ga | 10367 | 1299.0*b | 1143.2† | 1116.4† | 159.5† | 103.5† | 100.0† | 18.7† | 18.7† | |||
32 | Ge | 11103 | 1414.6*b | 1248.1*b | 1217.0*b | 180.1* | 124.9* | 120.8* | 29.8 | 29.2 | |||
33 | As | 11867 | 1527.0*b | 1359.1*b | 1323.6*b | 204.7* | 146.2* | 141.2* | 41.7* | 41.7* | |||
34 | Se | 12658 | 1652.0*b | 1474.3*b | 1433.9*b | 229.6* | 166.5* | 160.7* | 55.5* | 54.6* | |||
35 | Br | 13474 | 1782* | 1596* | 1550* | 257* | 189* | 182* | 70* | 69* | |||
36 | Kr | 14326 | 1921 | 1730.9* | 1678.4* | 292.8* | 222.2* | 214.4 | 95.0* | 93.8* | 27.5* | 14.1* | 14.1* |
37 | Rb | 15200 | 2065 | 1864 | 1804 | 326.7* | 248.7* | 239.1* | 113.0* | 112* | 30.5* | 16.3* | 15.3* |
38 | Sr | 16105 | 2216 | 2007 | 1940 | 358.7† | 280.3† | 270.0† | 136.0† | 134.2† | 38.9† | 21.3 | 20.1† |
39 | Y | 17038 | 2373 | 2156 | 2080 | 392.0*b | 310.6* | 298.8* | 157.7† | 155.8† | 43.8* | 24.4* | 23.1* |
40 | Zr | 17998 | 2532 | 2307 | 2223 | 430.3† | 343.5† | 329.8† | 181.1† | 178.8† | 50.6† | 28.5† | 27.1† |
41 | Nb | 18986 | 2698 | 2465 | 2371 | 466.6† | 376.1† | 360.6† | 205.0† | 202.3† | 56.4† | 32.6† | 30.8† |
42 | Mo | 20000 | 2866 | 2625 | 2520 | 506.3† | 411.6† | 394.0† | 231.1† | 227.9† | 63.2† | 37.6† | 35.5† |
43 | Tc | 21044 | 3043 | 2793 | 2677 | 544* | 447.6 | 417.7 | 257.6 | 253.9* | 69.5* | 42.3* | 39.9* |
44 | Ru | 22117 | 3224 | 2967 | 2838 | 586.1* | 483.5† | 461.4† | 284.2† | 280.0† | 75.0† | 46.3† | 43.2† |
45 | Rh | 23220 | 3412 | 3146 | 3004 | 628.1† | 521.3† | 496.5† | 311.9† | 307.2† | 81.4*b | 50.5† | 47.3† |
46 | Pd | 24350 | 3604 | 3330 | 3173 | 671.6† | 559.9† | 532.3† | 340.5† | 335.2† | 87.1*b | 55.7†a | 50.9† |
47 | Ag | 25514 | 3806 | 3524 | 3351 | 719.0† | 603.8† | 573.0† | 374.0† | 368.3 | 97.0† | 63.7† | 58.3† |
Z | Element | K 1s | L1 2s | L2 2p1/2 | L3 2p3/2 | M1 3s | M2 3p1/2 | M3 3p3/2 | M4 3d3/2 | M5 3d5/2 | N1 4s | N2 4p1/2 | N3 4p3/2 |
48 | Cd | 26711 | 4018 | 3727 | 3538 | 772.0† | 652.6† | 618.4† | 411.9† | 405.2† | 109.8† | 63.9†a | 63.9†a |
49 | In | 27940 | 4238 | 3938 | 3730 | 827.2† | 703.2† | 665.3† | 451.4† | 443.9† | 122.9† | 73.5†a | 73.5†a |
50 | Sn | 29200 | 4465 | 4156 | 3929 | 884.7† | 756.5† | 714.6† | 493.2† | 484.9† | 137.1† | 83.6†a | 83.6†a |
51 | Sb | 30491 | 4698 | 4380 | 4132 | 946† | 812.7† | 766.4† | 537.5† | 528.2† | 153.2† | 95.6†a | 95.6†a |
52 | Te | 31814 | 4939 | 4612 | 4341 | 1006† | 870.8† | 820.0† | 583.4† | 573.0† | 169.4† | 103.3†a | 103.3†a |
53 | I | 33169 | 5188 | 4852 | 4557 | 1072* | 931* | 875* | 630.8 | 619.3 | 186* | 123* | 123* |
54 | Xe | 34561 | 5453 | 5107 | 4786 | 1148.7* | 1002.1* | 940.6* | 689.0* | 676.4* | 213.2* | 146.7 | 145.5* |
55 | Cs | 35985 | 5714 | 5359 | 5012 | 1211*b | 1071* | 1003* | 740.5* | 726.6* | 232.3* | 172.4* | 161.3* |
56 | Ba | 37441 | 5989 | 5624 | 5247 | 1293*b | 1137*b | 1063*b | 795.7† | 780.5* | 253.5† | 192 | 178.6† |
57 | La | 38925 | 6266 | 5891 | 5483 | 1362*b | 1209*b | 1128*b | 853* | 836* | 274.7* | 205.8 | 196.0* |
58 | Ce | 40443 | 6549 | 6164 | 5723 | 1436*b | 1274*b | 1187*b | 902.4* | 883.8* | 291.0* | 223.2 | 206.5* |
59 | Pr | 41991 | 6835 | 6440 | 5964 | 1511 | 1337 | 1242 | 948.3* | 928.8* | 304.5 | 236.3 | 217.6 |
60 | Nd | 43569 | 7126 | 6722 | 6208 | 1575 | 1403 | 1297 | 1003.3* | 980.4* | 319.2* | 243.3 | 224.6 |
61 | Pm | 45184 | 7428 | 7013 | 6459 | --- | 1471 | 1357 | 1052 | 1027 | --- | 242 | 242 |
62 | Sm | 46834 | 7737 | 7312 | 6716 | 1723 | 1541 | 1420 | 1110.9* | 1083.4* | 347.2* | 265.6 | 247.4 |
63 | Eu | 48519 | 8052 | 7617 | 6977 | 1800 | 1614 | 1481 | 1158.6* | 1127.5* | 360 | 284 | 257 |
64 | Gd | 50239 | 8376 | 7930 | 7243 | 1881 | 1688 | 1544 | 1221.9* | 1189.6* | 378.6* | 286 | 271 |
65 | Tb | 51996 | 8708 | 8252 | 7514 | 1968 | 1768 | 1611 | 1276.9* | 1241.1* | 396.0* | 322.4* | 284.1* |
66 | Dy | 53789 | 9046 | 8581 | 7790 | 2047 | 1842 | 1676 | 1333 | 1292.6* | 414.2* | 333.5* | 293.2* |
67 | Ho | 55618 | 9394 | 8918 | 8071 | 2128 | 1923 | 1741 | 1392 | 1351 | 432.4* | 343.5 | 308.2* |
68 | Er | 57486 | 9751 | 9264 | 8358 | 2207 | 2006 | 1812 | 1453 | 1409 | 449.8* | 366.2 | 320.2* |
69 | Tm | 59390 | 10116 | 9617 | 8648 | 2307 | 2090 | 1885 | 1515 | 1468 | 470.9* | 385.9* | 332.6* |
70 | Yb | 61332 | 10486 | 9978 | 8944 | 2398 | 2173 | 1950 | 1576 | 1528 | 480.5* | 388.7* | 339.7* |
Z | Element | N4 4d3/2 | N5 4d5/2 | N6 4f5/2 | N7 4f7/2 | O1 5s | O2 5p1/2 | O3 5p3/2 | O4 5d3/2 | O5 5d5/2 | P1 6s | P2 6p1/2 | P3 6p3/2 |
48 | Cd | 11.7† | 10.7† |
49 | In | 17.7† | 16.9† | ||||||||||
50 | Sn | 24.9† | 23.9† | ||||||||||
51 | Sb | 33.3† | 32.1† | ||||||||||
52 | Te | 41.9† | 40.4† | ||||||||||
53 | I | 50.6 | 48.9 | ||||||||||
54 | Xe | 69.5* | 67.5* | --- | --- | 23.3* | 13.4* | 12.1* | |||||
55 | Cs | 79.8* | 77.5* | --- | --- | 22.7 | 14.2* | 12.1* | |||||
56 | Ba | 92.6† | 89.9† | --- | --- | 30.3† | 17.0† | 14.8† | |||||
57 | La | 105.3* | 102.5* | --- | --- | 34.3* | 19.3* | 16.8* | |||||
58 | Ce | 109* | --- | 0.1 | 0.1 | 37.8 | 19.8* | 17.0* | |||||
59 | Pr | 115.1* | 115.1* | 2.0 | 2.0 | 37.4 | 22.3 | 22.3 | |||||
60 | Nd | 120.5* | 120.5* | 1.5 | 1.5 | 37.5 | 21.1 | 21.1 | |||||
61 | Pm | 120 | 120 | --- | --- | --- | --- | --- | |||||
62 | Sm | 129 | 129 | 5.2 | 5.2 | 37.4 | 21.3 | 21.3 | |||||
63 | Eu | 133 | 127.7* | 0 | 0 | 32 | 22 | 22 | |||||
64 | Gd | --- | 142.6* | 8.6* | 8.6* | 36 | 28 | 21 | |||||
65 | Tb | 150.5* | 150.5* | 7.7* | 2.4* | 45.6* | 28.7* | 22.6* | |||||
66 | Dy | 153.6* | 153.6* | 8.0* | 4.3* | 49.9* | 26.3 | 26.3 | |||||
67 | Ho | 160* | 160* | 8.6* | 5.2* | 49.3* | 30.8* | 24.1* | |||||
68 | Er | 167.6* | 167.6* | --- | 4.7* | 50.6* | 31.4* | 24.7* | |||||
69 | Tm | 175.5* | 175.5* | --- | 4.6 | 54.7* | 31.8* | 25.0* | |||||
70 | Yb | 191.2* | 182.4* | 2.5* | 1.3* | 52.0* | 30.3* | 24.1* | |||||
Z | Element | K 1s | L1 2s | L2 2p1/2 | L3 2p3/2 | M1 3s | M2 3p1/2 | M3 3p3/2 | M4 3d3/2 | M5 3d5/2 | N1 4s | N2 4p1/2 | N3 4p3/2 |
71 | Lu | 63314 | 10870 | 10349 | 9244 | 2491 | 2264 | 2024 | 1639 | 1589 | 506.8* | 412.4* | 359.2* |
72 | Hf | 65351 | 11271 | 10739 | 9561 | 2601 | 2365 | 2108 | 1716 | 1662 | 538* | 438.2† | 380.7† |
73 | Ta | 67416 | 11682 | 11136 | 9881 | 2708 | 2469 | 2194 | 1793 | 1735 | 563.4† | 463.4† | 400.9† |
74 | W | 69525 | 12100 | 11544 | 10207 | 2820 | 2575 | 2281 | 1872 | 1809 | 594.1† | 490.4† | 423.6† |
75 | Re | 71676 | 12527 | 11959 | 10535 | 2932 | 2682 | 2367 | 1949 | 1883 | 625.4† | 518.7† | 446.8† |
76 | Os | 73871 | 12968 | 12385 | 10871 | 3049 | 2792 | 2457 | 2031 | 1960 | 658.2† | 549.1† | 470.7† |
77 | Ir | 76111 | 13419 | 12824 | 11215 | 3174 | 2909 | 2551 | 2116 | 2040 | 691.1† | 577.8† | 495.8† |
78 | Pt | 78395 | 13880 | 13273 | 11564 | 3296 | 3027 | 2645 | 2202 | 2122 | 725.4† | 609.1† | 519.4† |
79 | Au | 80725 | 14353 | 13734 | 11919 | 3425 | 3148 | 2743 | 2291 | 2206 | 762.1† | 642.7† | 546.3† |
80 | Hg | 83102 | 14839 | 14209 | 12284 | 3562 | 3279 | 2847 | 2385 | 2295 | 802.2† | 680.2† | 576.6† |
81 | Tl | 85530 | 15347 | 14698 | 12658 | 3704 | 3416 | 2957 | 2485 | 2389 | 846.2† | 720.5† | 609.5† |
82 | Pb | 88005 | 15861 | 15200 | 13035 | 3851 | 3554 | 3066 | 2586 | 2484 | 891.8† | 761.9† | 643.5† |
83 | Bi | 90524 | 16388 | 15711 | 13419 | 3999 | 3696 | 3177 | 2688 | 2580 | 939† | 805.2† | 678.8† |
84 | Po | 93105 | 16939 | 16244 | 13814 | 4149 | 3854 | 3302 | 2798 | 2683 | 995* | 851* | 705* |
85 | At | 95730 | 17493 | 16785 | 14214 | 4317 | 4008 | 3426 | 2909 | 2787 | 1042* | 886* | 740* |
86 | Rn | 98404 | 18049 | 17337 | 14619 | 4482 | 4159 | 3538 | 3022 | 2892 | 1097* | 929* | 768* |
87 | Fr | 101137 | 18639 | 17907 | 15031 | 4652 | 4327 | 3663 | 3136 | 3000 | 1153* | 980* | 810* |
88 | Ra | 103922 | 19237 | 18484 | 15444 | 4822 | 4490 | 3792 | 3248 | 3105 | 1208* | 1058 | 879* |
89 | Ac | 106755 | 19840 | 19083 | 15871 | 5002 | 4656 | 3909 | 3370 | 3219 | 1269* | 1080* | 890* |
90 | Th | 109651 | 20472 | 19693 | 16300 | 5182 | 4830 | 4046 | 3491 | 3332 | 1330* | 1168* | 966.4† |
91 | Pa | 112601 | 21105 | 20314 | 16733 | 5367 | 5001 | 4174 | 3611 | 3442 | 1387* | 1224* | 1007* |
92 | U | 115606 | 21757 | 20948 | 17166 | 5548 | 5182 | 4303 | 3728 | 3552 | 1439*b | 1271*b | 1043† |
Z | Element | N4 4d3/2 | N5 4d5/2 | N6 4f5/2 | N7 4f7/2 | O1 5s | O2 5p1/2 | O3 5p3/2 | O4 5d3/2 | O5 5d5/2 | P1 6s | P2 6p1/2 | P3 6p3/2 |
71 | Lu | 206.1* | 196.3* | 8.9* | 7.5* | 57.3* | 33.6* | 26.7* | |||||
72 | Hf | 220.0† | 211.5† | 15.9† | 14.2† | 64.2† | 38* | 29.9† | |||||
73 | Ta | 237.9† | 226.4† | 23.5† | 21.6† | 69.7† | 42.2* | 32.7† | |||||
74 | W | 255.9† | 243.5† | 33.6* | 31.4† | 75.6† | 45.3*b | 36.8† | |||||
75 | Re | 273.9† | 260.5† | 42.9* | 40.5* | 83† | 45.6* | 34.6*b | |||||
76 | Os | 293.1† | 278.5† | 53.4† | 50.7† | 84* | 58* | 44.5† | |||||
77 | Ir | 311.9† | 296.3† | 63.8† | 60.8† | 95.2*b | 63.0*b | 48.0† | |||||
78 | Pt | 331.6† | 314.6† | 74.5† | 71.2† | 101.7*b | 65.3*b | 51.7† | |||||
79 | Au | 353.2† | 335.1† | 87.6† | 84.0 | 107.2*b | 74.2† | 57.2† | |||||
80 | Hg | 378.2† | 358.8† | 104.0† | 99.9† | 127† | 83.1† | 64.5† | 9.6† | 7.8† | |||
81 | Tl | 405.7† | 385.0† | 122.2† | 117.8† | 136.0*b | 94.6† | 73.5† | 14.7† | 12.5† | |||
82 | Pb | 434.3† | 412.2† | 141.7† | 136.9† | 147*b | 106.4† | 83.3† | 20.7† | 18.1† | |||
83 | Bi | 464.0† | 440.1† | 162.3† | 157.0† | 159.3*b | 119.0† | 92.6† | 26.9† | 23.8† | |||
84 | Po | 500* | 473* | 184* | 184* | 177* | 132* | 104* | 31* | 31* | |||
85 | At | 533* | 507 | 210* | 210* | 195* | 148* | 115* | 40* | 40* | |||
86 | Rn | 567* | 541* | 238* | 238* | 214* | 164* | 127* | 48* | 48* | 26 | ||
87 | Fr | 603* | 577* | 268* | 268* | 234* | 182* | 140* | 58* | 58* | 34 | 15 | 15 |
88 | Ra | 636* | 603* | 299* | 299* | 254* | 200* | 153* | 68* | 68* | 44 | 19 | 19 |
89 | Ac | 675* | 639* | 319* | 319* | 272* | 215* | 167* | 80* | 80* | --- | --- | --- |
90 | Th | 712.1† | 675.2† | 342.4† | 333.1† | 290*a | 229*a | 182*a | 92.5† | 85.4† | 41.4† | 24.5† | 16.6† |
91 | Pa | 743* | 708* | 371* | 360* | 310* | 232* | 232* | 94* | 94* | --- | --- | --- |
92 | U | 778.3† | 736.2† | 388.2* | 377.4† | 321*ab | 257*ab | 192*ab | 102.8† | 94.2† | 43.9† | 26.8† | 16.8† |
4.3. X-ray fluorescence line intensities
Ref:
X-Ray Data Booklet Table 1-3. Photon energies and relative intensities of K-, L-, and M-shell lines shown in Fig. 1-1, arranged by increasing energy. An intensity of 100 is assigned to the strongest line in each shell for each element.
Energy [eV] | Z | Element | Line | Intensity |
---|---|---|---|---|
54.3 | 3 | Li | Kα1,2 | 150 |
108.5 | 4 | Be | Kα1,2 | 150 |
183.3 | 5 | B | Kα1,2 | 151 |
277 | 6 | C | Kα1,2 | 147 |
348.3 | 21 | Sc | Ll | 21 |
392.4 | 7 | N | Kα1,2 | 150 |
395.3 | 22 | Ti | Ll | 46 |
395.4 | 21 | Sc | Lα1,2 | 111 |
399.6 | 21 | Sc | Lβ1 | 77 |
446.5 | 23 | V | Ll | 28 |
452.2 | 22 | Ti | Lα1,2 | 111 |
458.4 | 22 | Ti | Lβ1 | 79 |
500.3 | 24 | Cr | Ll | 17 |
511.3 | 23 | V | Lα1,2 | 111 |
519.2 | 23 | V | Lβ1 | 80 |
524.9 | 8 | O | Kα1,2 | 151 |
556.3 | 25 | Mn | Ll | 15 |
572.8 | 24 | Cr | Lα1,2 | 111 |
582.8 | 24 | Cr | Lβ1 | 79 |
615.2 | 26 | Fe | Ll | 10 |
637.4 | 25 | Mn | Lα1,2 | 111 |
648.8 | 25 | Mn | Lβ1 | 77 |
676.8 | 9 | F | Kα1,2 | 148 |
677.8 | 27 | Co | Ll | 10 |
705.0 | 26 | Fe | Lα1,2 | 111 |
718.5 | 26 | Fe | Lβ1 | 66 |
742.7 | 28 | Ni | Ll | 9 |
776.2 | 27 | Co | Lα1,2 | 111 |
791.4 | 27 | Co | Lβ1 | 76 |
811.1 | 29 | Cu | Ll | 8 |
833 | 57 | La | Mα1 | 100 |
848.6 | 10 | Ne | Kα1,2 | 150 |
851.5 | 28 | Ni | Lα1,2 | 111 |
868.8 | 28 | Ni | Lβ1 | 68 |
883 | 58 | Ce | Mα1 | 100 |
884 | 30 | Zn | Ll | 7 |
929.2 | 59 | Pr | Mα1 | 100 |
929.7 | 29 | Cu | Lα1,2 | 111 |
949.8 | 29 | Cu | Lβ1 | 65 |
957.2 | 31 | Ga | Ll | 7 |
978 | 60 | Nd | Mα1 | 100 |
1011.7 | 30 | Zn | Lα1,2 | 111 |
1034.7 | 30 | Zn | Lβ1 | 65 |
1036.2 | 32 | Ge | Ll | 6 |
1041.0 | 11 | Na | Kα1,2 | 150 |
1081 | 62 | Sm | Mα1 | 100 |
1097.9 | 31 | Ga | Lα1,2 | 111 |
1120 | 33 | As | Ll | 6 |
1124.8 | 31 | Ga | Lβ1 | 66 |
1131 | 63 | Eu | Mα1 | 100 |
1185 | 64 | Gd | Mα1 | 100 |
1188.0 | 32 | Ge | Lα1,2 | 111 |
1204.4 | 34 | Se | Ll | 6 |
1218.5 | 32 | Ge | Lβ1 | 60 |
1240 | 65 | Tb | Mα1 | 100 |
1253.6 | 12 | Mg | Kα1,2 | 150 |
1282.0 | 33 | As | Lα1,2 | 111 |
1293 | 66 | Dy | Mα1 | 100 |
1293.5 | 35 | Br | Ll | 5 |
1317.0 | 33 | As | Lβ1 | 60 |
1348 | 67 | Ho | Mα1 | 100 |
1379.1 | 34 | Se | Lα1,2 | 111 |
1386 | 36 | Kr | Ll | 5 |
1406 | 68 | Er | Mα1 | 100 |
1419.2 | 34 | Se | Lβ1 | 59 |
1462 | 69 | Tm | Mα1 | 100 |
1480.4 | 35 | Br | Lα1,2 | 111 |
1482.4 | 37 | Rb | Ll | 5 |
1486.3 | 13 | Al | Kα2 | 50 |
1486.7 | 13 | Al | Kα1 | 100 |
1521.4 | 70 | Yb | Mα1 | 100 |
1525.9 | 35 | Br | Lβ1 | 59 |
1557.4 | 13 | Al | Kβ1 | 1 |
1581.3 | 71 | Lu | Mα1 | 100 |
1582.2 | 38 | Sr | Ll | 5 |
1586.0 | 36 | Kr | Lα1,2 | 111 |
1636.6 | 36 | Kr | Lβ1 | 57 |
1644.6 | 72 | Hf | Mα1 | 100 |
1685.4 | 39 | Y | Ll | 5 |
1692.6 | 37 | Rb | Lα2 | 11 |
1694.1 | 37 | Rb | Lα1 | 100 |
1709.6 | 73 | Ta | Mα1 | 100 |
1739.4 | 14 | Si | Kα2 | 50 |
1740.0 | 14 | Si | Kα1 | 100 |
1752.2 | 37 | Rb | Lβ1 | 58 |
1775.4 | 74 | W | Mα1 | 100 |
1792.0 | 40 | Zr | Ll | 5 |
1804.7 | 38 | Sr | Lα2 | 11 |
1806.6 | 38 | Sr | Lα1 | 100 |
1835.9 | 14 | Si | Kβ1 | 2 |
1842.5 | 75 | Re | Mα1 | 100 |
1871.7 | 38 | Sr | Lβ1 | 58 |
1902.2 | 41 | Nb | Ll | 5 |
1910.2 | 76 | Os | Mα1 | 100 |
1920.5 | 39 | Y | Lα2 | 11 |
1922.6 | 39 | Y | Lα1 | 100 |
1979.9 | 77 | Ir | Mα1 | 100 |
1995.8 | 39 | Y | Lβ1 | 57 |
2012.7 | 15 | P | Kα2 | 50 |
2013.7 | 15 | P | Kα1 | 100 |
2015.7 | 42 | Mo | Ll | 5 |
2039.9 | 40 | Zr | Lα2 | 11 |
2042.4 | 40 | Zr | Lα1 | 100 |
2050.5 | 78 | Pt | Mα1 | 100 |
2122 | 43 | Tc | Ll | 5 |
2122.9 | 79 | Au | Mα1 | 100 |
2124.4 | 40 | Zr | Lβ1 | 54 |
2139.1 | 15 | P | Kβ1 | 3 |
2163.0 | 41 | Nb | Lα2 | 11 |
2165.9 | 41 | Nb | Lα1 | 100 |
2195.3 | 80 | Hg | Mα1 | 100 |
2219.4 | 40 | Zr | Lβ2,15 | 1 |
2252.8 | 44 | Ru | Ll | 4 |
2257.4 | 41 | Nb | Lβ1 | 52 |
2270.6 | 81 | Tl | Mα1 | 100 |
2289.8 | 42 | Mo | Lα2 | 11 |
2293.2 | 42 | Mo | Lα1 | 100 |
2302.7 | 40 | Zr | Lγ1 | 2 |
2306.6 | 16 | S | Kα2 | 50 |
2307.8 | 16 | S | Kα1 | 100 |
2345.5 | 82 | Pb | Mα1 | 100 |
2367.0 | 41 | Nb | Lβ2,15 | 3 |
2376.5 | 45 | Rh | Ll | 4 |
2394.8 | 42 | Mo | Lβ1 | 53 |
2420 | 43 | Tc | Lα2 | 11 |
2422.6 | 83 | Bi | Mα1 | 100 |
2424 | 43 | Tc | Lα1 | 100 |
2461.8 | 41 | Nb | Lγ1 | 2 |
2464.0 | 16 | S | Kβ1 | 5 |
2503.4 | 46 | Pd | Ll | 4 |
2518.3 | 42 | Mo | Lβ2,15 | 5 |
2538 | 43 | Tc | Lβ1 | 54 |
2554.3 | 44 | Ru | Lα2 | 11 |
2558.6 | 44 | Ru | Lα1 | 100 |
2620.8 | 17 | Cl | Kα2 | 50 |
2622.4 | 17 | Cl | Kα1 | 100 |
2623.5 | 42 | Mo | Lγ1 | 3 |
2633.7 | 47 | Ag | Ll | 4 |
2674 | 43 | Tc | Lβ2,15 | 7 |
2683.2 | 44 | Ru | Lβ1 | 54 |
2692.0 | 45 | Rh | Lα2 | 11 |
2696.7 | 45 | Rh | Lα1 | 100 |
2767.4 | 48 | Cd | Ll | 4 |
2792 | 43 | Tc | Lγ1 | 3 |
2815.6 | 17 | Cl | Kβ1 | 6 |
2833.3 | 46 | Pd | Lα2 | 11 |
2834.4 | 45 | Rh | Lβ1 | 52 |
2836.0 | 44 | Ru | Lβ2,15 | 10 |
2838.6 | 46 | Pd | Lα1 | 100 |
2904.4 | 49 | In | Ll | 4 |
2955.6 | 18 | Ar | Kα2 | 50 |
2957.7 | 18 | Ar | Kα1 | 100 |
2964.5 | 44 | Ru | Lγ1 | 4 |
2978.2 | 47 | Ag | Lα2 | 11 |
2984.3 | 47 | Ag | Lα1 | 100 |
2990.2 | 46 | Pd | Lβ1 | 53 |
2996.1 | 90 | Th | Mα1 | 100 |
3001.3 | 45 | Rh | Lβ2,15 | 10 |
3045.0 | 50 | Sn | Ll | 4 |
3126.9 | 48 | Cd | Lα2 | 11 |
3133.7 | 48 | Cd | Lα1 | 100 |
3143.8 | 45 | Rh | Lγ1 | 5 |
3150.9 | 47 | Ag | Lβ1 | 56 |
3170.8 | 92 | U | Mα1 | 100 |
3171.8 | 46 | Pd | Lβ2,15 | 12 |
3188.6 | 51 | Sb | Ll | 4 |
3190.5 | 18 | Ar | Kβ1,3 | 10 |
3279.3 | 49 | In | Lα2 | 11 |
3286.9 | 49 | In | Lα1 | 100 |
3311.1 | 19 | K | Kα2 | 50 |
3313.8 | 19 | K | Kα1 | 100 |
3316.6 | 48 | Cd | Lβ1 | 58 |
3328.7 | 46 | Pd | Lγ1 | 6 |
3335.6 | 52 | Te | Ll | 4 |
3347.8 | 47 | Ag | Lβ2,15 | 13 |
3435.4 | 50 | Sn | Lα2 | 11 |
3444.0 | 50 | Sn | Lα1 | 100 |
3485.0 | 53 | I | Ll | 4 |
3487.2 | 49 | In | Lβ1 | 58 |
3519.6 | 47 | Ag | Lγ1 | 6 |
3528.1 | 48 | Cd | Lβ2,15 | 15 |
3589.6 | 19 | K | Kβ1,3 | 11 |
3595.3 | 51 | Sb | Lα2 | 11 |
3604.7 | 51 | Sb | Lα1 | 100 |
3636 | 54 | Xe | Ll | 4 |
3662.8 | 50 | Sn | Lβ1 | 60 |
3688.1 | 20 | Ca | Kα2 | 50 |
3691.7 | 20 | Ca | Kα1 | 100 |
3713.8 | 49 | In | Lβ2,15 | 15 |
3716.9 | 48 | Cd | Lγ1 | 6 |
3758.8 | 52 | Te | Lα2 | 11 |
3769.3 | 52 | Te | Lα1 | 100 |
3795.0 | 55 | Cs | Ll | 4 |
3843.6 | 51 | Sb | Lβ1 | 61 |
3904.9 | 50 | Sn | Lβ2,15 | 16 |
3920.8 | 49 | In | Lγ1 | 6 |
3926.0 | 53 | I | Lα2 | 11 |
3937.6 | 53 | I | Lα1 | 100 |
3954.1 | 56 | Ba | Ll | 4 |
4012.7 | 20 | Ca | Kβ1,3 | 13 |
4029.6 | 52 | Te | Lβ1 | 61 |
4086.1 | 21 | Sc | Kα2 | 50 |
4090.6 | 21 | Sc | Kα1 | 100 |
4093 | 54 | Xe | Lα2 | 11 |
4100.8 | 51 | Sb | Lβ2,15 | 17 |
4109.9 | 54 | Xe | Lα1 | 100 |
4124 | 57 | La | Ll | 4 |
4131.1 | 50 | Sn | Lγ1 | 7 |
4220.7 | 53 | I | Lβ1 | 61 |
4272.2 | 55 | Cs | Lα2 | 11 |
4286.5 | 55 | Cs | Lα1 | 100 |
4287.5 | 58 | Ce | Ll | 4 |
4301.7 | 52 | Te | Lβ2,15 | 18 |
4347.8 | 51 | Sb | Lγ1 | 8 |
4414 | 54 | Xe | Lβ1 | 60 |
4450.9 | 56 | Ba | Lα2 | 11 |
4453.2 | 59 | Pr | Ll | 4 |
4460.5 | 21 | Sc | Kβ1,3 | 15 |
4466.3 | 56 | Ba | Lα1 | 100 |
4504.9 | 22 | Ti | Kα2 | 50 |
4507.5 | 53 | I | Lβ2,15 | 19 |
4510.8 | 22 | Ti | Kα1 | 100 |
4570.9 | 52 | Te | Lγ1 | 8 |
4619.8 | 55 | Cs | Lβ1 | 61 |
4633.0 | 60 | Nd | Ll | 4 |
4634.2 | 57 | La | Lα2 | 11 |
4651.0 | 57 | La | Lα1 | 100 |
4714 | 54 | Xe | Lβ2,15 | 20 |
4800.9 | 53 | I | Lγ1 | 8 |
4809 | 61 | Pm | Ll | 4 |
4823.0 | 58 | Ce | Lα2 | 11 |
4827.5 | 56 | Ba | Lβ1 | 60 |
4840.2 | 58 | Ce | Lα1 | 100 |
4931.8 | 22 | Ti | Kβ1,3 | 15 |
4935.9 | 55 | Cs | Lβ2,15 | 20 |
4944.6 | 23 | V | Kα2 | 50 |
4952.2 | 23 | V | Kα1 | 100 |
4994.5 | 62 | Sm | Ll | 4 |
5013.5 | 59 | Pr | Lα2 | 11 |
5033.7 | 59 | Pr | Lα1 | 100 |
5034 | 54 | Xe | Lγ1 | 8 |
5042.1 | 57 | La | Lβ1 | 60 |
5156.5 | 56 | Ba | Lβ2,15 | 20 |
5177.2 | 63 | Eu | Ll | 4 |
5207.7 | 60 | Nd | Lα2 | 11 |
5230.4 | 60 | Nd | Lα1 | 100 |
5262.2 | 58 | Ce | Lβ1 | 61 |
5280.4 | 55 | Cs | Lγ1 | 8 |
5362.1 | 64 | Gd | Ll | 4 |
5383.5 | 57 | La | Lβ2,15 | 21 |
5405.5 | 24 | Cr | Kα2 | 50 |
5408 | 61 | Pm | Lα2 | 11 |
5414.7 | 24 | Cr | Kα1 | 100 |
5427.3 | 23 | V | Kβ1,3 | 15 |
5432 | 61 | Pm | Lα1 | 100 |
5488.9 | 59 | Pr | Lβ1 | 61 |
5531.1 | 56 | Ba | Lγ1 | 9 |
5546.7 | 65 | Tb | Ll | 4 |
5609.0 | 62 | Sm | Lα2 | 11 |
5613.4 | 58 | Ce | Lβ2,15 | 21 |
5636.1 | 62 | Sm | Lα1 | 100 |
5721.6 | 60 | Nd | Lβ1 | 60 |
5743.1 | 66 | Dy | Ll | 4 |
5788.5 | 57 | La | Lγ1 | 9 |
5816.6 | 63 | Eu | Lα2 | 11 |
5845.7 | 63 | Eu | Lα1 | 100 |
5850 | 59 | Pr | Lβ2,15 | 21 |
5887.6 | 25 | Mn | Kα2 | 50 |
5898.8 | 25 | Mn | Kα1 | 100 |
5943.4 | 67 | Ho | Ll | 4 |
5946.7 | 24 | Cr | Kβ1,3 | 15 |
5961 | 61 | Pm | Lβ1 | 61 |
6025.0 | 64 | Gd | Lα2 | 11 |
6052 | 58 | Ce | Lγ1 | 9 |
6057.2 | 64 | Gd | Lα1 | 100 |
6089.4 | 60 | Nd | Lβ2,15 | 21 |
6152 | 68 | Er | Ll | 4 |
6205.1 | 62 | Sm | Lβ1 | 61 |
6238.0 | 65 | Tb | Lα2 | 11 |
6272.8 | 65 | Tb | Lα1 | 100 |
6322.1 | 59 | Pr | Lγ1 | 9 |
6339 | 61 | Pm | Lβ2 | 21 |
6341.9 | 69 | Tm | Ll | 4 |
6390.8 | 26 | Fe | Kα2 | 50 |
6403.8 | 26 | Fe | Kα1 | 100 |
6456.4 | 63 | Eu | Lβ1 | 62 |
6457.7 | 66 | Dy | Lα2 | 11 |
6490.4 | 25 | Mn | Kβ1,3 | 17 |
6495.2 | 66 | Dy | Lα1 | 100 |
6545.5 | 70 | Yb | Ll | 4 |
6587.0 | 62 | Sm | Lβ2,15 | 21 |
6602.1 | 60 | Nd | Lγ1 | 10 |
6679.5 | 67 | Ho | Lα2 | 11 |
6713.2 | 64 | Gd | Lβ1 | 62 |
6719.8 | 67 | Ho | Lα1 | 100 |
6752.8 | 71 | Lu | Ll | 4 |
6843.2 | 63 | Eu | Lβ2,15 | 21 |
6892 | 61 | Pm | Lγ1 | 10 |
6905.0 | 68 | Er | Lα2 | 11 |
6915.3 | 27 | Co | Kα2 | 51 |
6930.3 | 27 | Co | Kα1 | 100 |
6948.7 | 68 | Er | Lα1 | 100 |
6959.6 | 72 | Hf | Ll | 5 |
6978 | 65 | Tb | Lβ1 | 61 |
7058.0 | 26 | Fe | Kβ1,3 | 17 |
7102.8 | 64 | Gd | Lβ2,15 | 21 |
7133.1 | 69 | Tm | Lα2 | 11 |
7173.1 | 73 | Ta | Ll | 5 |
7178.0 | 62 | Sm | Lγ1 | 10 |
7179.9 | 69 | Tm | Lα1 | 100 |
7247.7 | 66 | Dy | Lβ1 | 62 |
7366.7 | 65 | Tb | Lβ2,15 | 21 |
7367.3 | 70 | Yb | Lα2 | 11 |
7387.8 | 74 | W | Ll | 5 |
7415.6 | 70 | Yb | Lα1 | 100 |
7460.9 | 28 | Ni | Kα2 | 51 |
7478.2 | 28 | Ni | Kα1 | 100 |
7480.3 | 63 | Eu | Lγ1 | 10 |
7525.3 | 67 | Ho | Lβ1 | 64 |
7603.6 | 75 | Re | Ll | 5 |
7604.9 | 71 | Lu | Lα2 | 11 |
7635.7 | 66 | Dy | Lβ2 | 20 |
7649.4 | 27 | Co | Kβ1,3 | 17 |
7655.5 | 71 | Lu | Lα1 | 100 |
7785.8 | 64 | Gd | Lγ1 | 11 |
7810.9 | 68 | Er | Lβ1 | 64 |
7822.2 | 76 | Os | Ll | 5 |
7844.6 | 72 | Hf | Lα2 | 11 |
7899.0 | 72 | Hf | Lα1 | 100 |
7911 | 67 | Ho | Lβ2,15 | 20 |
8027.8 | 29 | Cu | Kα2 | 51 |
8045.8 | 77 | Ir | Ll | 5 |
8047.8 | 29 | Cu | Kα1 | 100 |
8087.9 | 73 | Ta | Lα2 | 11 |
8101 | 69 | Tm | Lβ1 | 64 |
8102 | 65 | Tb | Lγ1 | 11 |
8146.1 | 73 | Ta | Lα1 | 100 |
8189.0 | 68 | Er | Lβ2,15 | 20 |
8264.7 | 28 | Ni | Kβ1,3 | 17 |
8268 | 78 | Pt | Ll | 5 |
8335.2 | 74 | W | Lα2 | 11 |
8397.6 | 74 | W | Lα1 | 100 |
8401.8 | 70 | Yb | Lβ1 | 65 |
8418.8 | 66 | Dy | Lγ1 | 11 |
8468 | 69 | Tm | Lβ2,15 | 20 |
8493.9 | 79 | Au | Ll | 5 |
8586.2 | 75 | Re | Lα2 | 11 |
8615.8 | 30 | Zn | Kα2 | 51 |
8638.9 | 30 | Zn | Kα1 | 100 |
8652.5 | 75 | Re | Lα1 | 100 |
8709.0 | 71 | Lu | Lβ1 | 66 |
8721.0 | 80 | Hg | Ll | 5 |
8747 | 67 | Ho | Lγ1 | 11 |
8758.8 | 70 | Yb | Lβ2,15 | 20 |
8841.0 | 76 | Os | Lα2 | 11 |
8905.3 | 29 | Cu | Kβ1,3 | 17 |
8911.7 | 76 | Os | Lα1 | 100 |
8953.2 | 81 | Tl | Ll | 6 |
9022.7 | 72 | Hf | Lβ1 | 67 |
9048.9 | 71 | Lu | Lβ2 | 19 |
9089 | 68 | Er | Lγ1 | 11 |
9099.5 | 77 | Ir | Lα2 | 11 |
9175.1 | 77 | Ir | Lα1 | 100 |
9184.5 | 82 | Pb | Ll | 6 |
9224.8 | 31 | Ga | Kα2 | 51 |
9251.7 | 31 | Ga | Kα1 | 100 |
9343.1 | 73 | Ta | Lβ1 | 67 |
9347.3 | 72 | Hf | Lβ2 | 20 |
9361.8 | 78 | Pt | Lα2 | 11 |
9420.4 | 83 | Bi | Ll | 6 |
9426 | 69 | Tm | Lγ1 | 12 |
9442.3 | 78 | Pt | Lα1 | 100 |
9572.0 | 30 | Zn | Kβ1,3 | 17 |
9628.0 | 79 | Au | Lα2 | 11 |
9651.8 | 73 | Ta | Lβ2 | 20 |
9672.4 | 74 | W | Lβ1 | 67 |
9713.3 | 79 | Au | Lα1 | 100 |
9780.1 | 70 | Yb | Lγ1 | 12 |
9855.3 | 32 | Ge | Kα2 | 51 |
9886.4 | 32 | Ge | Kα1 | 100 |
9897.6 | 80 | Hg | Lα2 | 11 |
9961.5 | 74 | W | Lβ2 | 21 |
9988.8 | 80 | Hg | Lα1 | 100 |
10010.0 | 75 | Re | Lβ1 | 66 |
10143.4 | 71 | Lu | Lγ1 | 12 |
10172.8 | 81 | Tl | Lα2 | 11 |
10260.3 | 31 | Ga | Kβ3 | 5 |
10264.2 | 31 | Ga | Kβ1 | 66 |
10268.5 | 81 | Tl | Lα1 | 100 |
10275.2 | 75 | Re | Lβ2 | 22 |
10355.3 | 76 | Os | Lβ1 | 67 |
10449.5 | 82 | Pb | Lα2 | 11 |
10508.0 | 33 | As | Kα2 | 51 |
10515.8 | 72 | Hf | Lγ1 | 12 |
10543.7 | 33 | As | Kα1 | 100 |
10551.5 | 82 | Pb | Lα1 | 100 |
10598.5 | 76 | Os | Lβ2 | 22 |
10708.3 | 77 | Ir | Lβ1 | 66 |
10730.9 | 83 | Bi | Lα2 | 11 |
10838.8 | 83 | Bi | Lα1 | 100 |
10895.2 | 73 | Ta | Lγ1 | 12 |
10920.3 | 77 | Ir | Lβ2 | 22 |
10978.0 | 32 | Ge | Kβ3 | 6 |
10982.1 | 32 | Ge | Kβ1 | 60 |
11070.7 | 78 | Pt | Lβ1 | 67 |
11118.6 | 90 | Th | Ll | 6 |
11181.4 | 34 | Se | Kα2 | 52 |
11222.4 | 34 | Se | Kα1 | 100 |
11250.5 | 78 | Pt | Lβ2 | 23 |
11285.9 | 74 | W | Lγ1 | 13 |
11442.3 | 79 | Au | Lβ1 | 67 |
11584.7 | 79 | Au | Lβ2 | 23 |
11618.3 | 92 | U | Ll | 7 |
11685.4 | 75 | Re | Lγ1 | 13 |
11720.3 | 33 | As | Kβ3 | 6 |
11726.2 | 33 | As | Kβ1 | 13 |
11822.6 | 80 | Hg | Lβ1 | 67 |
11864 | 33 | As | Kβ2 | 1 |
11877.6 | 35 | Br | Kα2 | 52 |
11924.1 | 80 | Hg | Lβ2 | 24 |
11924.2 | 35 | Br | Kα1 | 100 |
12095.3 | 76 | Os | Lγ1 | 13 |
12213.3 | 81 | Tl | Lβ1 | 67 |
12271.5 | 81 | Tl | Lβ2 | 25 |
12489.6 | 34 | Se | Kβ3 | 6 |
12495.9 | 34 | Se | Kβ1 | 13 |
12512.6 | 77 | Ir | Lγ1 | 13 |
12598 | 36 | Kr | Kα2 | 52 |
12613.7 | 82 | Pb | Lβ1 | 66 |
12622.6 | 82 | Pb | Lβ2 | 25 |
12649 | 36 | Kr | Kα1 | 100 |
12652 | 34 | Se | Kβ2 | 1 |
12809.6 | 90 | Th | Lα2 | 11 |
12942.0 | 78 | Pt | Lγ1 | 13 |
12968.7 | 90 | Th | Lα1 | 100 |
12979.9 | 83 | Bi | Lβ2 | 25 |
13023.5 | 83 | Bi | Lβ1 | 67 |
13284.5 | 35 | Br | Kβ3 | 7 |
13291.4 | 35 | Br | Kβ1 | 14 |
13335.8 | 37 | Rb | Kα2 | 52 |
13381.7 | 79 | Au | Lγ1 | 13 |
13395.3 | 37 | Rb | Kα1 | 100 |
13438.8 | 92 | U | Lα2 | 11 |
13469.5 | 35 | Br | Kβ2 | 1 |
13614.7 | 92 | U | Lα1 | 100 |
13830.1 | 80 | Hg | Lγ1 | 14 |
14097.9 | 38 | Sr | Kα2 | 52 |
14104 | 36 | Kr | Kβ3 | 7 |
14112 | 36 | Kr | Kβ1 | 14 |
14165.0 | 38 | Sr | Kα1 | 100 |
14291.5 | 81 | Tl | Lγ1 | 14 |
14315 | 36 | Kr | Kβ2 | 2 |
14764.4 | 82 | Pb | Lγ1 | 14 |
14882.9 | 39 | Y | Kα2 | 52 |
14951.7 | 37 | Rb | Kβ3 | 7 |
14958.4 | 39 | Y | Kα1 | 100 |
14961.3 | 37 | Rb | Kβ1 | 14 |
15185 | 37 | Rb | Kβ2 | 2 |
15247.7 | 83 | Bi | Lγ1 | 14 |
15623.7 | 90 | Th | Lβ2 | 26 |
15690.9 | 40 | Zr | Kα2 | 52 |
15775.1 | 40 | Zr | Kα1 | 100 |
15824.9 | 38 | Sr | Kβ3 | 7 |
15835.7 | 38 | Sr | Kβ1 | 14 |
16084.6 | 38 | Sr | Kβ2 | 3 |
16202.2 | 90 | Th | Lβ1 | 69 |
16428.3 | 92 | U | Lβ2 | 26 |
16521.0 | 41 | Nb | Kα2 | 52 |
16615.1 | 41 | Nb | Kα1 | 100 |
16725.8 | 39 | Y | Kβ3 | 8 |
16737.8 | 39 | Y | Kβ1 | 15 |
17015.4 | 39 | Y | Kβ2 | 3 |
17220.0 | 92 | U | Lβ1 | 61 |
17374.3 | 42 | Mo | Kα2 | 52 |
17479.3 | 42 | Mo | Kα1 | 100 |
17654 | 40 | Zr | Kβ3 | 8 |
17667.8 | 40 | Zr | Kβ1 | 15 |
17970 | 40 | Zr | Kβ2 | 3 |
18250.8 | 43 | Tc | Kα2 | 53 |
18367.1 | 43 | Tc | Kα1 | 100 |
18606.3 | 41 | Nb | Kβ3 | 8 |
18622.5 | 41 | Nb | Kβ1 | 15 |
18953 | 41 | Nb | Kβ2 | 3 |
18982.5 | 90 | Th | Lγ1 | 16 |
19150.4 | 44 | Ru | Kα2 | 53 |
19279.2 | 44 | Ru | Kα1 | 100 |
19590.3 | 42 | Mo | Kβ3 | 8 |
19608.3 | 42 | Mo | Kβ1 | 15 |
19965.2 | 42 | Mo | Kβ2 | 3 |
20073.7 | 45 | Rh | Kα2 | 53 |
20167.1 | 92 | U | Lγ1 | 15 |
20216.1 | 45 | Rh | Kα1 | 100 |
20599 | 43 | Tc | Kβ3 | 8 |
20619 | 43 | Tc | Kβ1 | 16 |
21005 | 43 | Tc | Kβ2 | 4 |
21020.1 | 46 | Pd | Kα2 | 53 |
21177.1 | 46 | Pd | Kα1 | 100 |
21634.6 | 44 | Ru | Kβ3 | 8 |
21656.8 | 44 | Ru | Kβ1 | 16 |
21990.3 | 47 | Ag | Kα2 | 53 |
22074 | 44 | Ru | Kβ2 | 4 |
22162.9 | 47 | Ag | Kα1 | 100 |
22698.9 | 45 | Rh | Kβ3 | 8 |
22723.6 | 45 | Rh | Kβ1 | 16 |
22984.1 | 48 | Cd | Kα2 | 53 |
23172.8 | 45 | Rh | Kβ2 | 4 |
23173.6 | 48 | Cd | Kα1 | 100 |
23791.1 | 46 | Pd | Kβ3 | 8 |
23818.7 | 46 | Pd | Kβ1 | 16 |
24002.0 | 49 | In | Kα2 | 53 |
24209.7 | 49 | In | Kα1 | 100 |
24299.1 | 46 | Pd | Kβ2 | 4 |
24911.5 | 47 | Ag | Kβ3 | 9 |
24942.4 | 47 | Ag | Kβ1 | 16 |
25044.0 | 50 | Sn | Kα2 | 53 |
25271.3 | 50 | Sn | Kα1 | 100 |
25456.4 | 47 | Ag | Kβ2 | 4 |
26061.2 | 48 | Cd | Kβ3 | 9 |
26095.5 | 48 | Cd | Kβ1 | 17 |
26110.8 | 51 | Sb | Kα2 | 54 |
26359.1 | 51 | Sb | Kα1 | 100 |
26643.8 | 48 | Cd | Kβ2 | 4 |
27201.7 | 52 | Te | Kα2 | 54 |
27237.7 | 49 | In | Kβ3 | 9 |
27275.9 | 49 | In | Kβ1 | 17 |
27472.3 | 52 | Te | Kα1 | 100 |
27860.8 | 49 | In | Kβ2 | 5 |
28317.2 | 53 | I | Kα2 | 54 |
28444.0 | 50 | Sn | Kβ3 | 9 |
28486.0 | 50 | Sn | Kβ1 | 17 |
28612.0 | 53 | I | Kα1 | 100 |
29109.3 | 50 | Sn | Kβ2 | 5 |
29458 | 54 | Xe | Kα2 | 54 |
29679.2 | 51 | Sb | Kβ3 | 9 |
29725.6 | 51 | Sb | Kβ1 | 18 |
29779 | 54 | Xe | Kα1 | 100 |
30389.5 | 51 | Sb | Kβ2 | 5 |
30625.1 | 55 | Cs | Kα2 | 54 |
30944.3 | 52 | Te | Kβ3 | 9 |
30972.8 | 55 | Cs | Kα1 | 100 |
30995.7 | 52 | Te | Kβ1 | 18 |
31700.4 | 52 | Te | Kβ2 | 5 |
31817.1 | 56 | Ba | Kα2 | 54 |
32193.6 | 56 | Ba | Kα1 | 100 |
32239.4 | 53 | I | Kβ3 | 9 |
32294.7 | 53 | I | Kβ1 | 18 |
33034.1 | 57 | La | Kα2 | 54 |
33042 | 53 | I | Kβ2 | 5 |
33441.8 | 57 | La | Kα1 | 100 |
33562 | 54 | Xe | Kβ3 | 9 |
33624 | 54 | Xe | Kβ1 | 18 |
34278.9 | 58 | Ce | Kα2 | 55 |
34415 | 54 | Xe | Kβ2 | 5 |
34719.7 | 58 | Ce | Kα1 | 100 |
34919.4 | 55 | Cs | Kβ3 | 9 |
34986.9 | 55 | Cs | Kβ1 | 18 |
35550.2 | 59 | Pr | Kα2 | 55 |
35822 | 55 | Cs | Kβ2 | 6 |
36026.3 | 59 | Pr | Kα1 | 100 |
36304.0 | 56 | Ba | Kβ3 | 10 |
36378.2 | 56 | Ba | Kβ1 | 18 |
36847.4 | 60 | Nd | Kα2 | 55 |
37257 | 56 | Ba | Kβ2 | 6 |
37361.0 | 60 | Nd | Kα1 | 100 |
37720.2 | 57 | La | Kβ3 | 10 |
37801.0 | 57 | La | Kβ1 | 19 |
38171.2 | 61 | Pm | Kα2 | 55 |
38724.7 | 61 | Pm | Kα1 | 100 |
38729.9 | 57 | La | Kβ2 | 6 |
39170.1 | 58 | Ce | Kβ3 | 10 |
39257.3 | 58 | Ce | Kβ1 | 19 |
39522.4 | 62 | Sm | Kα2 | 55 |
40118.1 | 62 | Sm | Kα1 | 100 |
40233 | 58 | Ce | Kβ2 | 6 |
40652.9 | 59 | Pr | Kβ3 | 10 |
40748.2 | 59 | Pr | Kβ1 | 19 |
40901.9 | 63 | Eu | Kα2 | 56 |
41542.2 | 63 | Eu | Kα1 | 100 |
41773 | 59 | Pr | Kβ2 | 6 |
42166.5 | 60 | Nd | Kβ3 | 10 |
42271.3 | 60 | Nd | Kβ1 | 19 |
42308.9 | 64 | Gd | Kα2 | 56 |
42996.2 | 64 | Gd | Kα1 | 100 |
43335 | 60 | Nd | Kβ2 | 6 |
43713 | 61 | Pm | Kβ3 | 10 |
43744.1 | 65 | Tb | Kα2 | 56 |
43826 | 61 | Pm | Kβ1 | 19 |
44481.6 | 65 | Tb | Kα1 | 100 |
44942 | 61 | Pm | Kβ2 | 6 |
45207.8 | 66 | Dy | Kα2 | 56 |
45289 | 62 | Sm | Kβ3 | 10 |
45413 | 62 | Sm | Kβ1 | 19 |
45998.4 | 66 | Dy | Kα1 | 100 |
46578 | 62 | Sm | Kβ2 | 6 |
46699.7 | 67 | Ho | Kα2 | 56 |
46903.6 | 63 | Eu | Kβ3 | 10 |
47037.9 | 63 | Eu | Kβ1 | 19 |
47546.7 | 67 | Ho | Kα1 | 100 |
48221.1 | 68 | Er | Kα2 | 56 |
48256 | 63 | Eu | Kβ2 | 6 |
48555 | 64 | Gd | Kβ3 | 10 |
48697 | 64 | Gd | Kβ1 | 20 |
49127.7 | 68 | Er | Kα1 | 100 |
49772.6 | 69 | Tm | Kα2 | 57 |
49959 | 64 | Gd | Kβ2 | 7 |
50229 | 65 | Tb | Kβ3 | 10 |
50382 | 65 | Tb | Kβ1 | 20 |
50741.6 | 69 | Tm | Kα1 | 100 |
51354.0 | 70 | Yb | Kα2 | 57 |
51698 | 65 | Tb | Kβ2 | 7 |
51957 | 66 | Dy | Kβ3 | 10 |
52119 | 66 | Dy | Kβ1 | 20 |
52388.9 | 70 | Yb | Kα1 | 100 |
52965.0 | 71 | Lu | Kα2 | 57 |
53476 | 66 | Dy | Kβ2 | 7 |
53711 | 67 | Ho | Kβ3 | 11 |
53877 | 67 | Ho | Kβ1 | 20 |
54069.8 | 71 | Lu | Kα1 | 100 |
54611.4 | 72 | Hf | Kα2 | 57 |
55293 | 67 | Ho | Kβ2 | 7 |
55494 | 68 | Er | Kβ3 | 11 |
55681 | 68 | Er | Kβ1 | 21 |
55790.2 | 72 | Hf | Kα1 | 100 |
56277 | 73 | Ta | Kα2 | 57 |
57210 | 68 | Er | Kβ2 | 7 |
57304 | 69 | Tm | Kβ3 | 11 |
57517 | 69 | Tm | Kβ1 | 21 |
57532 | 73 | Ta | Kα1 | 100 |
57981.7 | 74 | W | Kα2 | 58 |
59090 | 69 | Tm | Kβ2 | 7 |
59140 | 70 | Yb | Kβ3 | 11 |
59318.2 | 74 | W | Kα1 | 100 |
59370 | 70 | Yb | Kβ1 | 21 |
59717.9 | 75 | Re | Kα2 | 58 |
60980 | 70 | Yb | Kβ2 | 7 |
61050 | 71 | Lu | Kβ3 | 11 |
61140.3 | 75 | Re | Kα1 | 100 |
61283 | 71 | Lu | Kβ1 | 21 |
61486.7 | 76 | Os | Kα2 | 58 |
62970 | 71 | Lu | Kβ2 | 7 |
62980 | 72 | Hf | Kβ3 | 11 |
63000.5 | 76 | Os | Kα1 | 100 |
63234 | 72 | Hf | Kβ1 | 22 |
63286.7 | 77 | Ir | Kα2 | 58 |
64895.6 | 77 | Ir | Kα1 | 100 |
64948.8 | 73 | Ta | Kβ3 | 11 |
64980 | 72 | Hf | Kβ2 | 7 |
65112 | 78 | Pt | Kα2 | 58 |
65223 | 73 | Ta | Kβ1 | 22 |
66832 | 78 | Pt | Kα1 | 100 |
66951.4 | 74 | W | Kβ3 | 11 |
66989.5 | 79 | Au | Kα2 | 59 |
66990 | 73 | Ta | Kβ2 | 7 |
67244.3 | 74 | W | Kβ1 | 22 |
68803.7 | 79 | Au | Kα1 | 100 |
68895 | 80 | Hg | Kα2 | 59 |
68994 | 75 | Re | Kβ3 | 12 |
69067 | 74 | W | Kβ2 | 8 |
69310 | 75 | Re | Kβ1 | 22 |
70819 | 80 | Hg | Kα1 | 100 |
70831.9 | 81 | Tl | Kα2 | 60 |
71077 | 76 | Os | Kβ3 | 12 |
71232 | 75 | Re | Kβ2 | 8 |
71413 | 76 | Os | Kβ1 | 23 |
72804.2 | 82 | Pb | Kα2 | 60 |
72871.5 | 81 | Tl | Kα1 | 100 |
73202.7 | 77 | Ir | Kβ3 | 12 |
73363 | 76 | Os | Kβ2 | 8 |
73560.8 | 77 | Ir | Kβ1 | 23 |
74814.8 | 83 | Bi | Kα2 | 60 |
74969.4 | 82 | Pb | Kα1 | 100 |
75368 | 78 | Pt | Kβ3 | 12 |
75575 | 77 | Ir | Kβ2 | 8 |
75748 | 78 | Pt | Kβ1 | 23 |
77107.9 | 83 | Bi | Kα1 | 100 |
77580 | 79 | Au | Kβ3 | 12 |
77850 | 78 | Pt | Kβ2 | 8 |
77984 | 79 | Au | Kβ1 | 23 |
79822 | 80 | Hg | Kβ3 | 12 |
80150 | 79 | Au | Kβ2 | 8 |
80253 | 80 | Hg | Kβ1 | 23 |
82118 | 81 | Tl | Kβ3 | 12 |
82515 | 80 | Hg | Kβ2 | 8 |
82576 | 81 | Tl | Kβ1 | 23 |
84450 | 82 | Pb | Kβ3 | 12 |
84910 | 81 | Tl | Kβ2 | 8 |
84936 | 82 | Pb | Kβ1 | 23 |
86834 | 83 | Bi | Kβ3 | 12 |
87320 | 82 | Pb | Kβ2 | 8 |
87343 | 83 | Bi | Kβ1 | 23 |
89830 | 83 | Bi | Kβ2 | 9 |
89953 | 90 | Th | Kα2 | 62 |
93350 | 90 | Th | Kα1 | 100 |
94665 | 92 | U | Kα2 | 62 |
98439 | 92 | U | Kα1 | 100 |
104831 | 90 | Th | Kβ3 | 12 |
105609 | 90 | Th | Kβ1 | 24 |
108640 | 90 | Th | Kβ2 | 9 |
110406 | 92 | U | Kβ3 | 13 |
111300 | 92 | U | Kβ1 | 24 |
114530 | 92 | U | Kβ2 | 9 |
4.4. Explanation of reconstruction
import ingrid / tos_helpers
import ingridDatabase / [databaseRead, databaseDefinitions]
import ggplotnim, nimhdf5, cligen

proc main(file: string, head = 100, run = 0) =
  # 1. first plot events with more than 1 cluster using ToT as scale
  # 2. plot same events with clusters shown as separate
  # 3. plot cluster center (X), long axis, length, eccentricity, σ_T, σ_L,
  #    circle of σ_T
  withH5(file, "r"):
    let fileInfo = getFileInfo(h5f)
    let run = if run == 0: fileInfo.runs[0] else: run
    let df = h5f.readAllDsets(run, chip = 3)
    echo df
    let septemDf = h5f.getSeptemDataFrame(run, allowedChips = @[3], ToT = true)
    echo septemDf
    var i = 0
    for tup, subDf in groups(septemDf.group_by("eventNumber")):
      if i >= head: break
      if subDf.unique("cluster").len == 1: continue
      ggplot(subDf, aes("x", "y", color = "ToT")) +
        geom_point() +
        xlim(0, 256) + ylim(0, 256) +
        ggsave("/tmp/events/run_" & $run & "_event_" & $i & ".pdf")
      ggplot(subDf, aes("x", "y", color = "cluster", shape = "cluster")) +
        geom_point() +
        xlim(0, 256) + ylim(0, 256) +
        ggsave("/tmp/events/run_" & $run & "_event_" & $i & "_color_cluster.pdf")
      ggplot(subDf, aes("x", "y", color = "ToT", shape = "cluster")) +
        geom_point() +
        xlim(0, 256) + ylim(0, 256) +
        ggsave("/tmp/events/run_" & $run & "_event_" & $i & "_clustered.pdf")
      ## group again by cluster, ssDf
      ## - filter `df` to the correct event number (and cluster, uhh), event index? yes!
      ## - get center
      ## - get rotation angle
      ## - line through center & rot angle around center length - to max
      ## -
      inc i

when isMainModule:
  dispatch main
4.5. Detector related
4.5.1. Water cooling
A short measurement of the flow rate of the water cooling system, done in the normal lab at the PI using a half open system (reservoir input open, output connected to the cooling, cooling output into a reservoir), gave: 1.6 L in 5:21 min
import unchained
defUnit(L•min⁻¹)
let vol = 1.6.Liter
let time = 5.Minute + 21.Second
echo "Flow rate: ", (vol / time).to(L•min⁻¹)
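Evaluated, this comes out to \(1.6 / 5.35 \approx \SI{0.3}{\liter\per\minute}\) (5:21 min = 5.35 min).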
4.6. Data reconstruction
Data reconstruction of all CAST data can be done using
runAnalysisChain
by:
cd $TPA/Analysis/ingrid
./runAnalysisChain -i ~/CastData/data \
    --outpath ~/CastData/data \
    --years 2017 --years 2018 \
    --calib --back \
    --reco
(where the paths must be correct of course!) if starting from the already parsed raw data (i.e. H5 inputs). Otherwise --raw is also needed.
Afterwards need to add the tracking information to the final H5 files by doing:
./cast_log_reader tracking \
    -p ../resources/LogFiles/tracking-logs \
    --startTime 2018/05/01 \
    --endTime 2018/12/31 \
    --h5out ~/CastData/data/DataRuns2018_Reco.h5 \
    --dryRun
With the --dryRun option you are only shown what would be written; run without it to actually add the data.
And the equivalent for the Run-2 data, adjusting the start and end
time as needed.
./cast_log_reader tracking \
    -p ../resources/LogFiles/tracking-logs \
    --startTime 2017/01/01 \
    --endTime 2018/05/01 \
    --h5out ~/CastData/data/DataRuns2017_Reco.h5 \
    --dryRun
5. CDL measurements
To derive the background rate plots a likelihood method is used. Essentially, a likelihood distribution is built from 3 geometric properties of the extracted pixel clusters (a sketch of how they are combined follows after this list):
- eccentricity
- length / transverse RMS
- fraction of pixels within transverse RMS
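Schematically, the three properties are combined into a single test statistic. As a sketch, assuming the three reference distributions \(P\) are simply multiplied (normalization details live in the implementation):

\[ -\ln\mathcal{L} = -\ln\left( P_{\varepsilon}(\varepsilon) \cdot P_{l/\sigma_T}\!\left(\frac{l}{\sigma_T}\right) \cdot P_{f}(f) \right) \]

with \(\varepsilon\) the eccentricity, \(l/\sigma_T\) the length over transverse RMS and \(f\) the fraction of pixels within the transverse RMS.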
To define these distributions, however, a set of pure X-ray datasets is needed. In addition, the geometric properties above depend strongly on the X-ray's energy, see:
where the left plot compares the \(\ce{Mn}\) line (\(^{55}\ce{Fe}\) equivalent) to the \(\ce{Cu}\) line from \(\SI{0.9}{\kilo\volt}\) electrons and the right plot compares \(^{55}\ce{Fe}\) with typical cosmic background. It is obvious that a single cut value would result in wildly different signal efficiencies and background rejections. Thus, different distributions are taken for different energies.
The distributions which the previous background rate plots were based on were obtained in 2014 with the Run-1 detector at the CAST Detector Lab (CDL). Using a different detector for this extremely sensitive part of the analysis chain will obviously introduce systematic errors. Thus, new calibration data was taken with the current Run-2 and Run-3 detector from 15-19 Feb 2019. A summary of the target filter combinations, applied HV and resulting pixel peak position is shown in tab. 4 and the fluorescence lines these target filter combinations correspond to are listed in tab. 5.
Run # | FADC? | Target | Filter | HV / kV | \(\langle\mu_{\text{peak}}\rangle\) | \(\Delta\mu\) |
---|---|---|---|---|---|---|
315 | y | Mn | Cr | 12.0 | 223.89 | 8.79 |
319 | y | Cu | Ni | 15.0 | 347.77 | 8.49 |
320 | n | Cu | Ni | 15.0 | 323.23 | 21.81 |
323 | n | Mn | Cr | 12.0 | 224.78 | 8.92 |
325 | y | Ti | Ti | 9.0 | 176.51 | 1.22 |
326 | n | Ti | Ti | 9.0 | 173.20 | 2.20 |
328 | y | Ag | Ag | 6.0 | 117.23 | 2.02 |
329 | n | Ag | Ag | 6.0 | 118.66 | 1.21 |
332 | y | Al | Al | 4.0 | 55.36 | 1.26 |
333 | n | Al | Al | 4.0 | 54.79 | 2.33 |
335 | y | Cu | EPIC | 2.0 | 32.33 | 2.52 |
336 | n | Cu | EPIC | 2.0 | 33.95 | 0.67 |
337 | n | Cu | EPIC | 2.0 | 31.51 | 4.76 |
339 | y | Cu | EPIC | 0.9 | 25.00 | 0.79 |
340 | n | Cu | EPIC | 0.9 | 21.39 | 2.27 |
342 | y | C | EPIC | 0.6 | 18.04 | 1.46 |
343 | n | C | EPIC | 0.6 | 17.16 | 0.57 |
345 | y | Cu | Ni | 15.0 | 271.16 | 6.08 |
347 | y | Mn | Cr | 12.0 | 198.73 | 4.72 |
349 | y | Ti | Ti | 9.0 | 160.86 | 1.25 |
351 | y | Ag | Ag | 6.0 | 106.94 | 2.55 |
Target | Filter | HV / kV | line | Name in Marlin | Energy / keV |
---|---|---|---|---|---|
Cu | Ni | 15 | \(\ce{Cu}\) \(\text{K}_{\alpha}\) | A | 8.04 |
Mn | Cr | 12 | \(\ce{Mn}\) \(\text{K}_{\alpha}\) | B | 5.89 |
Ti | Ti | 9 | \(\ce{Ti}\) \(\text{K}_{\alpha}\) | C | 4.51 |
Ag | Ag | 6 | \(\ce{Ag}\) \(\text{L}_{\alpha}\) | D | 2.98 |
Al | Al | 4 | \(\ce{Al}\) \(\text{K}_{\alpha}\) | E | 1.49 |
Cu | EPIC | 2 | \(\ce{Cu}\) \(\text{L}_{\alpha}\) | F | 0.930 |
Cu | EPIC | 0.9 | \(\ce{O }\) \(\text{K}_{\alpha}\) | G | 0.525 |
C | EPIC | 0.6 | \(\ce{C }\) \(\text{K}_{\alpha}\) | H | 0.277 |
For a reference of the X-ray fluorescence lines (for more exact values and \(\alpha_1\), \(\alpha_2\) values etc.) see: https://xdb.lbl.gov/Section1/Table_1-2.pdf.
The raw data is combined by target / filter combinations. To clean the data somewhat, a few simple cuts are applied, as shown in tab. 6.
Target | Filter | line | HV / kV | length | rmsTmin | rmsTmax | eccentricity |
---|---|---|---|---|---|---|---|
Cu | Ni | \(\ce{Cu}\) \(\text{K}_{\alpha}\) | 15 | | 0.1 | 1.0 | 1.3 |
Mn | Cr | \(\ce{Mn}\) \(\text{K}_{\alpha}\) | 12 | | 0.1 | 1.0 | 1.3 |
Ti | Ti | \(\ce{Ti}\) \(\text{K}_{\alpha}\) | 9 | | 0.1 | 1.0 | 1.3 |
Ag | Ag | \(\ce{Ag}\) \(\text{L}_{\alpha}\) | 6 | 6.0 | 0.1 | 1.0 | 1.4 |
Al | Al | \(\ce{Al}\) \(\text{K}_{\alpha}\) | 4 | | 0.1 | 1.1 | 2.0 |
Cu | EPIC | \(\ce{Cu}\) \(\text{L}_{\alpha}\) | 2 | | 0.1 | 1.1 | 2.0 |
Cu | EPIC | \(\ce{O }\) \(\text{K}_{\alpha}\) | 0.9 | | 0.1 | 1.1 | 2.0 |
C | EPIC | \(\ce{C }\) \(\text{K}_{\alpha}\) | 0.6 | 6.0 | 0.1 | 1.1 | |
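For illustration, a minimal sketch of how such cleaning cuts could be applied to a DataFrame of reconstructed clusters using datamancer. The column names (rmsTransverse, eccentricity) follow the usual TimepixAnalysis conventions, but treat them as assumptions here:

import datamancer

proc cleanCuNi15(df: DataFrame): DataFrame =
  ## cleaning cuts for the Cu-Ni 15 kV data of the table above:
  ## 0.1 ≤ rms_T ≤ 1.0 and eccentricity ≤ 1.3
  df.filter(f{`rmsTransverse` >= 0.1 and
              `rmsTransverse` <= 1.0 and
              `eccentricity` <= 1.3})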
With these cuts in place, a mixture of Gaussian / exponential Gaussian functions is fitted to both the pixel and the charge spectra.
Specifically the Gaussian:

\[ G(x; N, \mu, \sigma) = N \exp\left( -\frac{(x - \mu)^2}{2\sigma^2} \right) \]

and the exponential Gaussian:

\[ EG(x; a, b, N, \mu, \sigma) = \begin{cases} \exp(a + b x) & \text{if } x < c \\ G(x; N, \mu, \sigma) & \text{if } x \geq c \end{cases} \]

where the constant \(c\) is chosen such that the resulting function is continuous.
The functions fitted to the different spectra then depend on which fluorescence lines are visible. The full list of all combinations is shown in tab. 7 and 8.
Target | Filter | line | HV | Fit function |
---|---|---|---|---|
Cu | Ni | \(\ce{Cu}\) \(\text{K}_{\alpha}\) | 15 | \(EG^{\mathrm{Cu,esc}}_{\mathrm{K}_{\alpha}}(a,b,N,\mu,\sigma) + EG^{\mathrm{Cu}}_{\mathrm{K}_{\alpha}}(a,b,N,\mu,\sigma)\) |
Mn | Cr | \(\ce{Mn}\) \(\text{K}_{\alpha}\) | 12 | \(EG^{\mathrm{Mn,esc}}_{\mathrm{K}_{\alpha}}(a,b,N,\mu,\sigma) + EG^{\mathrm{Mn}}_{\mathrm{K}_{\alpha}}(a,b,N,\mu,\sigma)\) |
Ti | Ti | \(\ce{Ti}\) \(\text{K}_{\alpha}\) | 9 | \(G^{\mathrm{Ti,esc}}_{\mathrm{K}_{\alpha}}(N,\mu,\sigma) + G^{\mathrm{Ti,esc}}_{\mathrm{K}_{\beta}}(N,\mu,\sigma) + EG^{\mathrm{Ti}}_{\mathrm{K}_{\alpha}}(a,b,N,\mu,\sigma) + G^{\mathrm{Ti}}_{\mathrm{K}_{\beta}}(N,\mu,\sigma)\) |
Ag | Ag | \(\ce{Ag}\) \(\text{L}_{\alpha}\) | 6 | \(EG^{\mathrm{Ag}}_{\mathrm{L}_{\alpha}}(a,b,N,\mu,\sigma) + G^{\mathrm{Ag}}_{\mathrm{L}_{\beta}}(N,\mu,\sigma)\) |
Al | Al | \(\ce{Al}\) \(\text{K}_{\alpha}\) | 4 | \(EG^{\mathrm{Al}}_{\mathrm{K}_{\alpha}}(a,b,N,\mu,\sigma)\) |
Cu | EPIC | \(\ce{Cu}\) \(\text{L}_{\alpha}\) | 2 | \(G^{\mathrm{Cu}}_{\mathrm{L}_{\alpha}}(N,\mu,\sigma)\) |
Cu | EPIC | \(\ce{O }\) \(\text{K}_{\alpha}\) | 0.9 | \(G^{\mathrm{O}}_{\mathrm{K}_{\alpha}}(N,\mu,\sigma) + G^{\mathrm{C}}_{\mathrm{K}_{\alpha}}(N,\mu,\sigma) + G^{\mathrm{Fe,esc}}_{L_{\alpha,\beta}}(N,\mu,\sigma) + G^{\mathrm{Ni}}_{L_{\alpha,\beta}}(N,\mu,\sigma)\) |
C | EPIC | \(\ce{C }\) \(\text{K}_{\alpha}\) | 0.6 | \(G^{\mathrm{C}}_{\mathrm{K}_{\alpha}}(N,\mu,\sigma) + G^{\mathrm{O}}_{\mathrm{K}_{\alpha}}(N,\mu,\sigma)\) |
Target | Filter | line | HV | fit functions |
---|---|---|---|---|
Cu | Ni | \(\ce{Cu}\) \(\text{K}_{\alpha}\) | 15 | \(G^{\mathrm{Cu,esc}}_{\mathrm{K}_{\alpha}}(N,\mu,\sigma) + G^{\mathrm{Cu}}_{\mathrm{K}_{\alpha}}(N,\mu,\sigma)\) |
Mn | Cr | \(\ce{Mn}\) \(\text{K}_{\alpha}\) | 12 | \(G^{\mathrm{Mn,esc}}_{\mathrm{K}_{\alpha}}(N,\mu,\sigma) + G^{\mathrm{Mn}}_{\mathrm{K}_{\alpha}}(N,\mu,\sigma)\) |
Ti | Ti | \(\ce{Ti}\) \(\text{K}_{\alpha}\) | 9 | \(G^{\mathrm{Ti,esc}}_{\mathrm{K}_{\alpha}}(N,\mu,\sigma) + G^{\mathrm{Ti,esc}}_{\mathrm{K}_{\beta}}(N,\mu,\sigma) + G^{\mathrm{Ti}}_{\mathrm{K}_{\alpha}}(N,\mu,\sigma) + G^{\mathrm{Ti}}_{\mathrm{K}_{\beta}}(N,\mu,\sigma)\) |
Ag | Ag | \(\ce{Ag}\) \(\text{L}_{\alpha}\) | 6 | \(G^{\mathrm{Ag}}_{\mathrm{L}_{\alpha}}(N,\mu,\sigma) + G^{\mathrm{Ag}}_{\mathrm{L}_{\beta}}(N,\mu,\sigma)\) |
Al | Al | \(\ce{Al}\) \(\text{K}_{\alpha}\) | 4 | \(G^{\mathrm{Al}}_{\mathrm{K}_{\alpha}}(N,\mu,\sigma)\) |
Cu | EPIC | \(\ce{Cu}\) \(\text{L}_{\alpha}\) | 2 | \(G^{\mathrm{Cu}}_{\mathrm{L}_{\alpha}}(N,\mu,\sigma)\) |
Cu | EPIC | \(\ce{O }\) \(\text{K}_{\alpha}\) | 0.9 | \(G^{\mathrm{O}}_{\mathrm{K}_{\alpha}}(N,\mu,\sigma) + G^{\mathrm{C}}_{\mathrm{K}_{\alpha}}(N,\mu,\sigma) + G^{\mathrm{Fe,esc}}_{L_{\alpha,\beta}}(N,\mu,\sigma) + G^{\mathrm{Ni}}_{\mathrm{L}_{\alpha,\beta}}(N,\mu,\sigma)\) |
C | EPIC | \(\ce{C }\) \(\text{K}_{\alpha}\) | 0.6 | \(G^{\mathrm{C}}_{\mathrm{K}_{\alpha}}(N,\mu,\sigma) + G^{\mathrm{O}}_{\mathrm{K}_{\alpha}}(N,\mu,\sigma)\) |
The exact implementation in use for both the gaussian and exponential gaussian:
- Gauss: https://github.com/Vindaar/seqmath/blob/master/src/seqmath/smath.nim#L997-L1009
- exponential Gauss: https://github.com/Vindaar/TimepixAnalysis/blob/master/Analysis/ingrid/calibration.nim#L182-L194
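For illustration, a minimal self-contained Nim version of the two functions, with the transition point \(c\) computed from the continuity condition \(\exp(a + bx) = G(c; N, \mu, \sigma)\). This is only a sketch; the linked implementations are authoritative and differ in parameter handling:

import math

proc gauss(x, N, mu, sigma: float): float =
  ## plain Gaussian as used for the spectra fits
  let d = x - mu
  N * exp(-(d * d) / (2.0 * sigma * sigma))

proc expGauss(x, a, b, N, mu, sigma: float): float =
  ## exponential tail glued continuously to a Gaussian: solving
  ## a + b·x = ln N − (x − μ)²/(2σ²) gives a quadratic x² + p·x + q = 0,
  ## whose left root is the transition point c (assumes the exponential
  ## actually intersects the Gaussian left of the peak)
  let p = 2.0 * (sigma * sigma * b - mu)
  let q = mu * mu + 2.0 * sigma * sigma * (a - ln(N))
  let c = -p / 2.0 - sqrt(p * p / 4.0 - q)
  if x < c: exp(a + b * x)
  else: gauss(x, N, mu, sigma)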
The fitting was performed mainly using NLopt, with MPFit (a Levenberg-Marquardt C implementation) as a comparison. Specifically, the gradient based "Method of Moving Asymptotes" algorithm was used (NLopt provides a large number of different minimization / maximization algorithms to choose from) to perform a maximum likelihood estimation, written in the form of a Poisson distributed log likelihood \(\chi^2\):
\[ \chi^2_{\lambda} = 2 \sum_i \left[ y_i - n_i + n_i \ln\left( \frac{n_i}{y_i} \right) \right] \]

where \(n_i\) is the number of events in bin \(i\) and \(y_i\) the model prediction of events in bin \(i\) (for empty bins only the \(y_i - n_i\) term contributes).
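A direct transcription of this \(\chi^2_{\lambda}\) into Nim might look as follows (a sketch, not the TPA implementation):

import math

proc chi2Poisson(n, y: seq[float]): float =
  ## Poisson likelihood χ² in the Baker-Cousins form:
  ## χ² = 2 Σ_i [ y_i − n_i + n_i·ln(n_i / y_i) ]
  for i in 0 ..< n.len:
    result += y[i] - n[i]
    if n[i] > 0.0:           # empty bins contribute only y_i − n_i
      result += n[i] * ln(n[i] / y[i])
  result *= 2.0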
The required gradient was calculated simply using the symmetric derivative. Other algorithms and minimization functions were tried, but this proved to be the most reliable. See the implementation: https://github.com/Vindaar/TimepixAnalysis/blob/master/Analysis/ingrid/calibration.nim#L131-L162
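A symmetric (central) difference gradient for a function of several parameters is one line per component; a sketch:

proc numGrad(f: proc(p: seq[float]): float, p: seq[float], h = 1e-6): seq[float] =
  ## gradient via the symmetric derivative (f(x+h) − f(x−h)) / (2h),
  ## evaluated component-wise in the parameter vector `p`
  result = newSeq[float](p.len)
  for i in 0 ..< p.len:
    var pUp = p
    var pDn = p
    pUp[i] += h
    pDn[i] -= h
    result[i] = (f(pUp) - f(pDn)) / (2.0 * h)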
The fits to all spectra are shown below.
The positions of the main peaks, both for the pixel and for the charge spectra, should be linear in energy. This can be seen in fig. 41 and 42.
Finally we can calculate the energy resolution from the peak position and the width of the peaks. It should roughly follow a \(1/E\) dependency. The plot is shown in fig. 43. We can see that the energy resolution is slightly better for the pixel spectra than for the charge spectra, which is mostly expected, because the charge values carry an additional uncertainty due to the statistical fluctuation of the gas amplification. In both cases the resolution is better than \(\SI{10}{\percent}\) above \(\SI{3}{\keV}\) and rises to \(\sim\SI{30}{\percent}\) at the lowest measured energies.
5.1. Change CDL calculations to work run-by-run instead of by target/filter only
Points of note:
- we have established that the temperature variation is the main cause of the detector variations we see, both at CAST and (almost certainly, although we have not made direct plots of temperature vs. gas gain there) at the CDL
- the weather during CDL data taking was indeed very warm for February and sunny (> 10°C during the day in Feb!):
- the variations of gas gain vs. run number show a significant change during the week:
- the variations seen in the hit and charge spectra are much larger than thought:
All of this implies that we really should perform all the spectrum fits by run instead of by target & filter type. The latter doesn't work, as we would have to drop certain runs completely to get decent looking data.
Note: the main 'difficulty' is the fact that we currently have a hardcoded set of charges in the data for the likelihood reference distribution inputs. Of course, if we do it by run, the charges need to be different by run. This however is useful, as it allows us to fully get rid of the annoying hardcoded charges in the first place. Instead we will write the charge bounds into the ingridDatabase / the calibration-cdl*.h5 file and read them from there by run!
[X] implement by-run histograms of all InGrid properties in cdl_spectrum_creation based on the cut data! -> these show clearly that the properties are fortunately not correlated with the gas gain! :rocket:
6. Implement vetos for likelihood
For some preliminary results regarding the veto power of the different detector features some reasonable cut values were chosen based on the different distributions. It is to be noted that these are not final and specifically are not based on a certain signal efficiency or similar! The main motivating factor for these values so far was having some numbers to write and test the implementation of the vetoes.
Relevant PR: https://github.com/Vindaar/TimepixAnalysis/pull/37 Contains both the veto code as well as the CDL code explained below for practical reasons.
6.1. FADC fall and rise time
IMPORTANT NOTE: For a continuation of this, written during the writing process of my thesis, see sec. 8.2. (The below was written around 2019 for an SPSC update. Interestingly, the distributions seen in these old plots cannot really be reproduced anymore by me. I don't quite understand what's going on yet, but it's of course possible that one of the many changes we made over the years fixed some issue there (maybe even just the pedestal from data calculation?).)

UPDATE: As it turns out, after having studied this all a bit more and looked into the implementation as well, the old FADC veto application not only used weird values (which may have been correct based on how we looked at the data back then, who knows), but way more importantly the implementation was broken! The FADC veto was never correctly applied!

Based on the fall and rise time distributions the following cuts were chosen for the fall time:
const cutFallLow = 400'u16  # in 1 GHz clock cycles
const cutFallHigh = 600'u16 # in 1 GHz clock cycles
and for the rise time:
const cutRiseLow = 40'u16   # in 1 GHz clock cycles
const cutRiseHigh = 130'u16 # in 1 GHz clock cycles
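Put together, the veto keeps only events whose FADC signal shape lies within both windows. A sketch of that logic (a hypothetical helper, not the actual, at the time broken, TPA implementation):

const
  cutRiseLow  = 40'u16   # in 1 GHz clock cycles
  cutRiseHigh = 130'u16
  cutFallLow  = 400'u16
  cutFallHigh = 600'u16

proc passesFadcVeto(riseTime, fallTime: uint16): bool =
  ## keep an event only if both rise and fall time fall into the windows
  ## expected for X-rays; everything outside is vetoed
  riseTime in cutRiseLow .. cutRiseHigh and
    fallTime in cutFallLow .. cutFallHigh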
Application of this veto yields the following improvement for the gold region:
and over the whole chip:
That is, a marginal improvement. This is to be expected if the interpretation of the fall and rise time distributions is that the main peak visible for the calibration data corresponds to well behaved X-rays, whereas the tails correspond to background contamination: removing such background is exactly what the likelihood method is already very efficient at, so all "easily" cuttable events have been removed before the FADC veto is even applied.
Given that all X-rays should correspond to a roughly spherical charge distribution entering the grid holes, the rise time distribution should, for a specific energy, peak around the value of the perfectly spherical charge cloud, with deviations arising from the statistical nature of diffusion. This maps directly onto the geometric properties of the events seen on the InGrids: an event with a larger deviation from the spherical case results in a longer / shorter rise time and in a corresponding change of the eccentricity of the cluster. Keep in mind, however, that the FADC is sensitive to the axis orthogonal to the geometry seen on the InGrid, so a stretched rise time does not necessarily correspond to a larger eccentricity in a single event; only on average do both methods see the same properties.
6.2. Scinti vetoes
Similar to the FADC some cut values were chosen to act as a scintillator veto. Regarding the scintillators it is important to keep in mind that a real axion induced X-ray cannot ever trigger a scintillator. Thus, all events in which both the FADC triggered (i.e. our trigger to read out the scintillators in the first place and an event visible on the center chip) and a scintillator triggered are either a random coincidence or a physical coincidence. In the latter case we have a background event, which we want to cut away. Fortunately, the rate of random coincidences is very small, given the very short time scales under which physical coincidence can happen (\(\mathcal{O}(\SI{1.5}{\micro\second})\) as will be discussed below).
This can either be approximated by assuming a \(\cos^2\left(\theta\right)\) distribution for cosmics and taking into account the scintillator areas and rate of cosmics, or more easily by looking at a representative data run and considering the number of entries outside of the main distribution \(\numrange{0}{60}\) clock cycles. While we cannot be sure that events in the main peak are purely physical, we can be certain that above a certain threshold no physical coincidence can happen. So considering the region from \(\numrange{300}{4095}\) clock cycles @ \(\SI{40}{\mega\hertz}\) all events should be purely random.
Then we can estimate the rate of random events per second by considering the total open shutter time in which we can accept random triggers. The number of FADC triggers minus the number of scintillator triggers in the main peak \numrange{0}{300} clock cycles:

\[ N_{p, \text{scinti}} = N_{\text{FADC}} - N_{\text{main}} \]

is the number of possible instances in which the scintillator can trigger randomly. This gives us the time available for the scintillator to trigger \(t_{\text{shutter}}\):

\[ t_{\text{shutter}} = N_{p, \text{scinti}} \cdot (4095 - 300) \cdot \SI{25}{\nano\second} \]

The rate of random triggers can then be estimated as:

\[ n = \frac{N_{r, \text{scinti}}}{t_{\text{shutter}}} \]

where \(N_{r, \text{scinti}}\) is just the real number of random triggers recorded in the given run (i.e. those above 300 clock cycles).
- Total open shutter time: \(\SI{89.98}{\hour}\)
- Open shutter time w/ FADC triggers: \(\SI{5.62}{\hour}\)
Note that \(t_{\text{shutter}}\) is orders of magnitude smaller than the open shutter time with FADC triggers, because we can only estimate the random rate from the 4095 clock cycles in which an individual trigger can actually be determined (and not even that, technically: if there was a trigger at 4000 clock cycles before the FADC triggered and another at 500 clock cycles, we only see the one at 500!). That is \(\SI{25}{\nano\second} \cdot 4095 = \SI{102.4}{\micro\second}\) out of possibly up to \(\sim\SI{2.3}{\second}\) of open shutter!
Scinti | \(N_{\text{FADC}}\) | \(N_{\text{main}}\) | \(N_{p, \text{scinti}}\) | \(t_{\text{shutter}}\) / s | \(N_{r, \text{scinti}}\) | \(n\) |
---|---|---|---|---|---|---|
SCS | 19640 | 412 | 19228 | 1.83 | 2 | 1.097 |
SCL | 19640 | 6762 | 12878 | 1.22 | 79 | 64.67 |
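As a cross-check, the SCS row of the table can be reproduced with unchained (a sketch using the numbers from the table above):

import unchained

let nFadc   = 19640.0                  # FADC triggers in the run
let nMain   = 412.0                    # scinti triggers in the main peak (0–300 ck)
let nRandom = 2.0                      # triggers in the purely random region
let window  = 3795.0 * 25.NanoSecond   # (4095 − 300) clock cycles @ 40 MHz
let tShutter = (nFadc - nMain) * window
echo "t_shutter = ", tShutter.to(Second)            # ≈ 1.83 s
echo "rate n    = ", nRandom / tShutter.to(Second)  # ≈ 1.1 s⁻¹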
At an estimated muon rate of

\[ \Phi_{\mu} \approx \SI{1}{\per\centi\meter\squared\per\minute} \approx \SI{167}{\per\meter\squared\per\second} \]

and a large veto scinti size of \(\sim\SI{0.33}{\meter\squared}\) this comes out to \(\SI{55.5}{\per\second}\), which is quite close to our estimation.
For the SCS the same estimation however yields a wildly unexpected result at \(\mathcal{O}(\SI{1}{\per\second})\), since the size of the SCS is \(\mathcal{O}(\SI{1}{\centi\meter})\). From the cosmic rate alone we would expect 0 events on average in the random case. Given the statistics of 2 events outside the main peak, the calculation is questionable though. In one of these two events the SCL saw a trigger 3 clock cycles away from SCS (341 vs. 338 clock cycles) which was most likely a muon traversing through both scintillators. Well.
Looking at the main peaks now:
Keep in mind that the calibration data appearing in the two plots is due to contamination of the calibration data sets with background events, essentially the random coincidences we talked about above, since the "background signal" can never be turned off. The low counts in the calibration distribution (so that it barely appears in the plot) are then mainly due to the extremely short total data taking duration in which the shutter is open. Thus only very few background events are actually collected, because the \(^{55}\ce{Fe}\) source is a \(\mathcal{O}(\SI{15}{\mega\becquerel})\) source \(\sim\SI{40}{\centi\meter}\) from the detector window, but the detector has a dead time of \(\sim\SI{175}{\milli\second}\). This is an even more extreme case of the above, since the time for random events we consider here is only \(\SI{70}{clock\ cycles} = \SI{1.75}{\micro\second}\). Even for \(\mathcal{O}(1e5)\) events that amounts to less than a second. But given enough calibration data, in principle the signals visible in the calibration dataset would reproduce the shape of the background dataset.
With the above in mind, we can safely say that any trigger value below 300 clock cycles is reasonably surely related to a physical background event. These cuts
const scintLow = 0
const scintHigh = 300
result in the following background rate for the gold region and the whole chip, fig. 44:
Never mind the whole chip, fig. 45, which reveals some bug in our code: we veto events below \(\SI{1.3}{\kilo\electronvolt}\), for which there physically cannot be a scintillator trigger (the FADC, which provides the readout trigger, does not trigger below that energy). Let's just ignore that and investigate in due time… :) Good thing it's barely visible on the log plot!
6.2.1. Why are the scintillator counts so large in the first place?
Looking at the distributions of the scintillator counts above - and keeping in mind that the clock cycles correspond to a \(\SI{40}{\mega \hertz}\) clock - one might wonder why the values are so large in the first place.
This is easily explained by considering the gaseous properties at play here. First consider the SCS in fig. 46.
The first thing to highlight is where the different times for two orthogonal muons come from. On average we expect a muon to deposit \(\sim\SI{2.67}{\keV\per\cm}\) of energy along its path through our Argon/Isobutane (97.7/2.3) gas mixture, resulting in \(\sim\SI{8}{\keV}\) deposited for an orthogonal muon crossing the \(\sim\SI{3}{\cm}\) of drift volume. At the same time the FADC needs to collect about \(\sim\SI{1.3}{\keV}\) of charge equivalent before it can trigger.
Now if two muons have a different average ionization (since it's a statistical process), the track-length equivalent that has to drift onto the grid to accumulate enough charge for the FADC to trigger will differ. This leads to a wider distribution of clock cycles.
Taking an average muon and the aforementioned trigger threshold, an equivalent of \(\SI{0.4875}{\cm}\) of track length has to be accumulated for the FADC to trigger. Given a drift velocity at our typical HV settings and gas mixture of \(\sim\SI{2}{\cm\per\micro\second}\), this leads to an equivalent time of:

\[ t = \frac{\SI{0.4875}{\cm}}{\SI{2}{\cm\per\micro\second}} \approx \SI{0.24}{\micro\second} \]

Given the clock frequency of \(\SI{40}{\mega\hertz}\) this amounts to:

\[ \SI{0.24}{\micro\second} \cdot \SI{40}{\mega\hertz} \approx 10 \text{ clock cycles} \]
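The same numbers can be checked quickly with unchained (a sketch; values as assumed in the text above):

import unchained

let trackLen = 0.4875.CentiMeter                 # length equivalent of ~1.3 keV at 2.67 keV/cm
let vDrift   = 2.0.CentiMeter / 1.0.MicroSecond  # approximate drift velocity
let t        = trackLen / vDrift
echo "drift time: ", t.to(MicroSecond)           # ≈ 0.24 µs
echo "in 40 MHz clock cycles: ", t / 25.NanoSecond  # ≈ 9.75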
The peak of the real distribution is rather at around 20 clock cycles. This is probably due to an inherent delay in the signal processing (I assume there will only really be offsets in delay, rather than unexpected non-linear behaviors?).
At around 60 clock cycles (= 1.5 µs) the whole track has drifted to the chip, assuming it is perfectly orthogonal. The size of the SiPM allows for shallow angles, which should explain the tail to ~ 70 clock cycles.
Thus, the edge at around 60 clock cycles must correspond to a deposited energy of around 1.3 keV (because the FADC triggered only after all the charge has drifted onto the grid).
The question then is why the distribution is almost flat (assuming the 20 clock cycle peak is the 8 keV peak). This means that we have almost as many other orthogonal events with much lower energy.
Now consider the SCL in fig. 47.
In case of the SCL we see a much flatter distribution. This matches the explanation above perfectly, except that the tracks on average come from above and drift towards the readout plane while being oriented parallel to it. Since the rate of cosmics is uniform across the detector volume, we expect the same number of muons close to the readout plane as at a distance of \(\sim\SI{3}{\cm}\). The cut-off then again corresponds to the cathode end of the detector. A larger number of clock cycles would correspond to muons passing in front of the X-ray window.
6.3. Septem veto
Using the surrounding 6 InGrids as a veto is slightly more complicated. For a start, since we mostly restrict our analysis to the gold region (the inner \(\SI{5}{\milli\meter}\) square of the chip, from \(\SIrange{4.5}{9.5}{\milli\meter}\)), the septemboard will not be of much help at first glance: all events with their centers within the gold region either are obvious tracks (vetoed by the likelihood method) or do not extend onto the outer chips. However, one of the reasons we chose the gold region in the first place (aside from the axion image being centered within that region) is the stark increase in background towards the edges and especially the corners of the chip.
Take the following heatmap fig. 48 of the cluster center positions, which illustrates it perfectly:
We can see that we have barely any background in the gold region (\(\mathcal{O}(\SI{1e-5}{\cm^{-2}\second^{-1}\keV^{-1}})\), see fig. 49), whereas the background for the whole chip lies between \(\SIrange{1e-4}{1e-2}{cm^{-2}.s^{-1}.keV^{-1}}\) (ref fig. 50).
The reason for the visible increase is mostly that events close to the edges, and especially the corners, are not fully contained on the chip. Cutting off part of an eccentric cluster can leave a more spherical cluster, increasing the chance for it to look like an X-ray.
Using the surrounding chips as a veto works as follows. First we generate an artificial event, which incorporates the active pixels not only from a single chip, but from all chips in a single coordinate system. For simplicity we assume that the chips are not separated by any spacing. So a real event like fig. 51 is reduced to a septemevent, fig. 52:
The no spacing event displays are created with ./../../CastData/ExternCode/TimepixAnalysis/Tests/tpaPlusGgplot.nim (as of commit 16235d917325502a29eadc9c38d932a734d7b095 of TPA it produces the same plot as shown above). As can be seen the event number is \(\num{4}\). The data is from run \(\num{240}\) of the Run-3 dataset from the background data, i.e.: ./../../../../mnt/1TB/CAST/2018_2/DataRuns/Run_240_181021-14-54/. To generate the required file, simply:
./raw_data_manipulation /mnt/1TB/CAST/2018_2/DataRuns/Run_240_181021-14-54/ --out=run_240.h5 --nofadc
./reconstruction run_240.h5
./reconstruction run_240.h5 --only_charge
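Conceptually, building such a no spacing septemevent is nothing more than remapping each chip's pixels into one shared coordinate frame. A minimal sketch of the idea (the offsets below are purely illustrative placeholders; the real chip arrangement, including rotated chips, lives in TimepixAnalysis):

type Pix = tuple[x, y, ch: int]   # pixel coordinates and ToT / charge value

# purely illustrative chip offsets in the combined, no spacing frame
const chipOffsets: array[7, tuple[x, y: int]] = [
  (x: 128, y: 0),   (x: 384, y: 0),                     # bottom row (2 chips)
  (x: 0,   y: 256), (x: 256, y: 256), (x: 512, y: 256), # middle row (3 chips)
  (x: 128, y: 512), (x: 384, y: 512)]                   # top row (2 chips)

proc toSeptemEvent(chips: array[7, seq[Pix]]): seq[Pix] =
  ## merge the pixels of all 7 chips into a single coordinate system
  for chip, pixels in chips:
    for p in pixels:
      result.add (x: p.x + chipOffsets[chip].x,
                  y: p.y + chipOffsets[chip].y,
                  ch: p.ch)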
In the same way as for the FADC and scintillator vetoes, the septem veto starts from all events which pass the likelihood method on the center chip (either in the gold region or on the whole chip). For each of these events the discussed no spacing septemevent is built by collecting all active pixels of the event to which the passing cluster belongs. The resulting large event is then processed in exactly the same way as a normal single chip event: clusters are computed from the whole event, the geometric properties of each cluster are calculated, and finally the energy of each cluster is determined and the likelihood method applied to each.
The septem veto then demands that no cluster derived from such a septemevent may look like an X-ray; otherwise the event is kept. This is a pessimistic cut: it is possible that we have an X-ray like event in the corner of the center chip, which turns out to belong to some track covering the surrounding chip, while at the same time a real X-ray lands on a different chip far away from this cluster. That real X-ray will pass the likelihood method. Since we only veto if no cluster is X-ray like, this event will not be vetoed, despite the original cluster now being recognized as the background it really is.
This is done for simplicity of the implementation, since a smarter algorithm would have to track which septemevent cluster the original X-ray like cluster ended up in. This will be implemented in the future.
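In code the current veto decision amounts to something like the following sketch (here reconstructClusters and passesLogL are hypothetical stand-ins for the real reconstruction and likelihood calls):

proc septemVeto(septemEvent: seq[Pix]): bool =
  ## Sketch of the veto decision: returns true if the event is to be vetoed.
  ## `reconstructClusters` / `passesLogL` stand in for the real calls.
  for cluster in reconstructClusters(septemEvent):
    if passesLogL(cluster):
      return false  # some cluster still looks like an X-ray -> keep the event
  result = true     # no cluster is X-ray like -> veto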
With this veto in mind, we get the following improved background rates: fig. 53 for the gold region and fig. 54 for the whole chip:
As expected we only really have an improvement outside of the gold region. This is also easily visible when considering the cluster centers of all those events on the whole chip, which pass both the likelihood method and the septem veto in fig. 55.
Note also that, unlike the FADC and scintillator vetoes, the septem veto works in all energy ranges, as it does not depend on the FADC trigger.
6.3.1. TODO Septem veto rewrite
Talk about DBSCAN vs normal and create background rates
~/org/Mails/KlausUpdates/klaus_update_03_08_21/septemEvents_2017_logL_dbscan_eps_50.pdf
~/org/Mails/KlausUpdates/klaus_update_03_08_21/septemEvents_2017_logL_dbscan_eps_65_w_lines.pdf
(possibly) re-run with 65 and create background rate plot, this one is a comparison of 65 w/ some 2017 or so background rate.
6.3.2. Additional veto using lines through cluster centers
By considering the lines along the long axes of clusters, we can compute the distance between those lines and the center of the original cluster passing the logL cut.
Then, if that distance is small enough (maybe 3·RMS), we can veto that cluster, as it seems likely that the track is actually of the same origin, with just a relatively long stretch without ionization.
Implemented in likelihood.nim now.
Example in fig. 56 that shows the veto working as intended.
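Geometrically the check is just a point-to-line distance. A minimal sketch, assuming we have each cluster's center, rotation angle and transverse RMS (names illustrative):

import std / math

type ClusterGeom = tuple[cx, cy, angle, rmsT: float]

proc lineVetoes(center: tuple[x, y: float], cl: ClusterGeom, nSigma = 3.0): bool =
  ## distance of the logL-passing cluster `center` from the line through
  ## `cl`'s center along its long axis (direction (cos θ, sin θ))
  let dx = center.x - cl.cx
  let dy = center.y - cl.cy
  let dist = abs(dx * sin(cl.angle) - dy * cos(cl.angle))
  result = dist < nSigma * cl.rmsT  # close enough -> veto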
6.3.3. DONE Investigate BAD clustering of the default clustering algorithm in some cases
For example fig. 57 shows a case of the default clustering algorithm with a 65 pixel search radius, in which the clustering is utterly broken.
There are maybe 1 in 20 events that look like this!
NOTE: could this be due to some data ordering issues? I don't think so, but need to investigate that event.
TODO:
- extract the raw data of that cluster and run it through the simple cluster finder
UPDATE: This seems to be related to the aes used for the coloring, which leads to a bunch of different clusters 'being found'. Why exactly it happens I'm not sure, but for now it doesn't matter too much.
UPDATE 2: The problem is the reuse of the septemFrame variable. We first use it to fill the pixels for the clustering etc. and then reuse it to assign the cluster IDs. The clustering works, but sometimes there are fewer pixels than originally in the event, as some are part of no real cluster (fewer than the minimum number of pixels for a cluster). In this case there remain elements in the septemFrame (just a seq[tuple]) that still contain ToT values.
6.3.4. TODO add logic for sparks checks
We might want to add a veto check that throws out events that contain sparks or highly ionizing events on an outer chip.
For example in fig. 58 we see a big spark on the 6th chip. In this case the few pixels on the central chip are quite likely some effect from that.
6.3.5. DONE Debug septem veto background rate
UPDATE: The summary of the whole mess below is as follows:
- the exact background rate from December cannot be reproduced
- there were multiple subtle bugs in the septem veto & the line veto
- there was a subtle bug in the mapping of septem pixels to single chip pixels (mainly affecting the line veto)
- the crAll case in inRegion was broken, leading to the line veto effectively vetoing everything outside the gold region
- probably more I forgot
Between a commit from sometime end of 2021 (reference commit: 9e841fa56091e0338e034503b916475f8bf145be) and now (83445319bada0f9eef35c48527946c20ac21a5d0) there seems to have been some regression in the performance of the septem veto & the line veto.
I'm still not 100% sure that the "old" commit referenced here does actually produce the correct result either.
One thing is for sure though: the background rate as shown in the figure and the clusters contained in the likelihood output files used in the limit calculation, namely:
~/CastData/ExternCode/TimepixAnalysis/resources/LikelihoodFiles/lhood_2017_all_chip_septem_dbscan.h5
~/CastData/ExternCode/TimepixAnalysis/resources/LikelihoodFiles/lhood_2018_all_chip_septem_dbscan.h5
show only about 9900 clusters over the whole data taking campaign.
This is not!! reproducible on the current code base!
I looked into the number of clusters passing the septem veto including line veto on the old and new code (by adding some file output). For the following command on the old code:
./likelihood /mnt/1TB/CAST/2018_2/DataRuns2018_Reco.h5 \
    --h5out ~/CastData/ExternCode/TimepixAnalysis/resources/LikelihoodFiles/IAXO_TDR/lhood_2018_test_old2.h5 \
    --altCdlFile /mnt/1TB/CAST/CDL_2019/calibration-cdl-2018.h5 \
    --altRefFile /mnt/1TB/CAST/CDL_2019/XrayReferenceFile2018.h5 \
    --cdlYear=2018 --region=crGold --septemveto --plotSeptem
we get the following output:
Run: 297 Passed indices before septem veto 19 Passed indices after septem veto 8
Run: 242 Passed indices before septem veto 9 Passed indices after septem veto 6
Run: 256 Passed indices before septem veto 9 Passed indices after septem veto 7
Run: 268 Passed indices before septem veto 4 Passed indices after septem veto 2
Run: 281 Passed indices before septem veto 16 Passed indices after septem veto 8
Run: 272 Passed indices before septem veto 18 Passed indices after septem veto 8
Run: 274 Passed indices before septem veto 14 Passed indices after septem veto 9
Run: 270 Passed indices before septem veto 8 Passed indices after septem veto 6
Run: 306 Passed indices before septem veto 2 Passed indices after septem veto 2
Run: 246 Passed indices before septem veto 2 Passed indices after septem veto 1
Run: 263 Passed indices before septem veto 6 Passed indices after septem veto 5
Run: 298 Passed indices before septem veto 11 Passed indices after septem veto 8
Run: 303 Passed indices before septem veto 7 Passed indices after septem veto 4
Run: 287 Passed indices before septem veto 2 Passed indices after septem veto 1
Run: 248 Passed indices before septem veto 5 Passed indices after septem veto 3
Run: 299 Passed indices before septem veto 3 Passed indices after septem veto 1
Run: 291 Passed indices before septem veto 9 Passed indices after septem veto 7
Run: 295 Passed indices before septem veto 6 Passed indices after septem veto 5
Run: 285 Passed indices before septem veto 6 Passed indices after septem veto 5
Run: 240 Passed indices before septem veto 3 Passed indices after septem veto 3
Run: 301 Passed indices before septem veto 13 Passed indices after septem veto 8
Run: 267 Passed indices before septem veto 1 Passed indices after septem veto 0
Run: 276 Passed indices before septem veto 26 Passed indices after septem veto 14
Run: 279 Passed indices before septem veto 10 Passed indices after septem veto 5
Run: 293 Passed indices before septem veto 10 Passed indices after septem veto 8
Run: 254 Passed indices before septem veto 6 Passed indices after septem veto 6
Run: 244 Passed indices before septem veto 5 Passed indices after septem veto 3
Run: 278 Passed indices before septem veto 7 Passed indices after septem veto 7
Run: 283 Passed indices before septem veto 17 Passed indices after septem veto 11
Run: 258 Passed indices before septem veto 7 Passed indices after septem veto 5
Run: 289 Passed indices before septem veto 8 Passed indices after septem veto 5
Run: 250 Passed indices before septem veto 8 Passed indices after septem veto 5
Run: 261 Passed indices before septem veto 20 Passed indices after septem veto 15
Run: 265 Passed indices before septem veto 15 Passed indices after septem veto 9
With the new code:
./likelihood /mnt/1TB/CAST/2018_2/DataRuns2018_Reco.h5 \
    --h5out ~/CastData/ExternCode/TimepixAnalysis/resources/LikelihoodFiles/IAXO_TDR/lhood_2018_test_new2.h5 \
    --altCdlFile /mnt/1TB/CAST/CDL_2019/calibration-cdl-2018.h5 \
    --altRefFile /mnt/1TB/CAST/CDL_2019/XrayReferenceFile2018.h5 \
    --cdlYear=2018 --region=crGold --septemveto --lineveto --plotSeptem
(note the new additional --lineveto option!)
we get:
Run: 297 Passed indices before septem veto 19 Passed indices after septem veto 8
Run: 242 Passed indices before septem veto 9 Passed indices after septem veto 6
Run: 256 Passed indices before septem veto 9 Passed indices after septem veto 7
Run: 268 Passed indices before septem veto 4 Passed indices after septem veto 2
Run: 281 Passed indices before septem veto 16 Passed indices after septem veto 8
Run: 272 Passed indices before septem veto 18 Passed indices after septem veto 8
Run: 274 Passed indices before septem veto 14 Passed indices after septem veto 9
Run: 270 Passed indices before septem veto 8 Passed indices after septem veto 6
Run: 306 Passed indices before septem veto 2 Passed indices after septem veto 2
Run: 246 Passed indices before septem veto 2 Passed indices after septem veto 1
Run: 263 Passed indices before septem veto 6 Passed indices after septem veto 5
Run: 298 Passed indices before septem veto 11 Passed indices after septem veto 8
Run: 303 Passed indices before septem veto 7 Passed indices after septem veto 4
Run: 287 Passed indices before septem veto 2 Passed indices after septem veto 1
Run: 248 Passed indices before septem veto 5 Passed indices after septem veto 3
Run: 299 Passed indices before septem veto 3 Passed indices after septem veto 2
Run: 291 Passed indices before septem veto 9 Passed indices after septem veto 7
Run: 295 Passed indices before septem veto 6 Passed indices after septem veto 5
Run: 285 Passed indices before septem veto 6 Passed indices after septem veto 5
Run: 240 Passed indices before septem veto 3 Passed indices after septem veto 3
Run: 301 Passed indices before septem veto 13 Passed indices after septem veto 8
Run: 267 Passed indices before septem veto 1 Passed indices after septem veto 0
Run: 276 Passed indices before septem veto 26 Passed indices after septem veto 14
Run: 279 Passed indices before septem veto 10 Passed indices after septem veto 5
Run: 293 Passed indices before septem veto 10 Passed indices after septem veto 8
Run: 254 Passed indices before septem veto 6 Passed indices after septem veto 6
Run: 244 Passed indices before septem veto 5 Passed indices after septem veto 3
Run: 278 Passed indices before septem veto 7 Passed indices after septem veto 7
Run: 283 Passed indices before septem veto 17 Passed indices after septem veto 11
Run: 258 Passed indices before septem veto 7 Passed indices after septem veto 5
Run: 289 Passed indices before septem veto 8 Passed indices after septem veto 5
Run: 250 Passed indices before septem veto 8 Passed indices after septem veto 5
Run: 261 Passed indices before septem veto 20 Passed indices after septem veto 15
Run: 265 Passed indices before septem veto 15 Passed indices after septem veto 10
There are differences for runs 299 and 265:
# old
Run: 299 Passed indices before septem veto 3 Passed indices after septem veto 1
# new
Run: 299 Passed indices before septem veto 3 Passed indices after septem veto 2
# old
Run: 265 Passed indices before septem veto 15 Passed indices after septem veto 9
# new
Run: 265 Passed indices before septem veto 15 Passed indices after septem veto 10
So in each of these cases there is 1 more cluster passing in the new code base.
This is a start.
The file: contains all septem event displays of the old code base.
Look at the events of runs 299 and 265 and whether they pass or not!
For the new code base the equivalent is:
In particular of interest is the difference of run 299 event 6369.
Two things:
- both code bases actually reconstruct the center cluster as part of the cluster track to the left
- the old code doesn't know about the clusters on the top right and bottom left chips! Something is wrong in the old code about either the plotting (possibly due to data assignment) or the data reading!
However: looking at the passed and lineVetoRejected title elements of each of these runs in the new plots shows that we count the same number of clusters as in the old code!! So something is wrong about the exclusion logic!
UPDATE: This was fixed in the lineVetoRejected commit that I did. So this works as expected now!
Next step: Do the same thing, but not only for the gold region, but for the whole chip! Will take a bit longer.
For now: look at the old code without the line veto (commented out the line veto branch in the old code). During cutting of run 298 we got a KeyError from plotting:
tables.nim(233) raiseKeyError
Error: unhandled exception: key not found: 128 [KeyError]
But we have the following data up to here:
Run: 297 Passed indices before septem veto 774 Passed indices after septem veto 14
Run: 242 Passed indices before septem veto 447 Passed indices after septem veto 13
Run: 256 Passed indices before septem veto 797 Passed indices after septem veto 13
Run: 268 Passed indices before septem veto 180 Passed indices after septem veto 4
Run: 281 Passed indices before septem veto 834 Passed indices after septem veto 17
Run: 272 Passed indices before septem veto 1176 Passed indices after septem veto 16
Run: 274 Passed indices before septem veto 1207 Passed indices after septem veto 15
Run: 270 Passed indices before septem veto 846 Passed indices after septem veto 10
Run: 306 Passed indices before septem veto 81 Passed indices after septem veto 2
Run: 246 Passed indices before septem veto 309 Passed indices after septem veto 1
Run: 263 Passed indices before septem veto 307 Passed indices after septem veto 6
Run: 298 Passed indices before septem veto 607
This is definitely enough to compare with the new code. Unfortunately it means we cannot look at the cluster positions right now; we need to rerun without plotting for that. First the equivalent for the new code, then comparing events by event display.
The passing indices for the new code:
Run: 297 Passed indices before septem veto 774 Passed indices after septem veto 114
Run: 242 Passed indices before septem veto 447 Passed indices after septem veto 73
Run: 256 Passed indices before septem veto 797 Passed indices after septem veto 123
Run: 268 Passed indices before septem veto 180 Passed indices after septem veto 35
Run: 281 Passed indices before septem veto 834 Passed indices after septem veto 152
Run: 272 Passed indices before septem veto 1176 Passed indices after septem veto 195
Run: 274 Passed indices before septem veto 1207 Passed indices after septem veto 176
Run: 270 Passed indices before septem veto 846 Passed indices after septem veto 137
Run: 306 Passed indices before septem veto 81 Passed indices after septem veto 15
Run: 246 Passed indices before septem veto 309 Passed indices after septem veto 45
Run: 263 Passed indices before septem veto 307 Passed indices after septem veto 49
Run: 298 Passed indices before septem veto 607
Note: The same run 298 produces the same KeyError on the new code as well!
Looking into the comparison of run 268 for old and new code now. Plots as comparison:
- ./../Figs/statusAndProgress/debugSeptemVeto/run_268_old_accidental_lineveto.pdf NOTE: file name adjusted after the bug mentioned below was found.
The reason for the difference quickly becomes obvious. Look at event 10542 in run 268 in both of these PDFs.
The reason the old code produces a background rate that is this much better is plainly that it throws out events that it should not. So unfortunately it seems to be a bug in the old code. :(
I still want to understand why that happens though. So check the old code explicitly for this event and see why it fails the logL cut suddenly.
UPDATE: The reason the old code produced so little background is plainly that I messed up the passed = true part of the code when commenting out the lineVeto stuff! Phew.
Checking again now with that fixed, whether it reproduces the correct behavior. If so, we will rerun the old code again with event displays, looking at the passed indices. Indeed this fixed at least this event (10542) of the run. So rerunning again now.
After the fix, we get these numbers for the passed indices:
Run: 297 Passed indices before septem veto 774 Passed indices after septem veto 141
Run: 242 Passed indices before septem veto 447 Passed indices after septem veto 86
Run: 256 Passed indices before septem veto 797 Passed indices after septem veto 150
Run: 268 Passed indices before septem veto 180 Passed indices after septem veto 42
Run: 281 Passed indices before septem veto 834 Passed indices after septem veto 178
Run: 272 Passed indices before septem veto 1176 Passed indices after septem veto 253
Run: 274 Passed indices before septem veto 1207 Passed indices after septem veto 218
Run: 270 Passed indices before septem veto 846 Passed indices after septem veto 175
Run: 306 Passed indices before septem veto 81 Passed indices after septem veto 18
Run: 246 Passed indices before septem veto 309 Passed indices after septem veto 56
Run: 263 Passed indices before septem veto 307 Passed indices after septem veto 56
Run: 298 Passed indices before septem veto 607
So comparing the numbers to the new code, we now actually get more events in the old code!
Comparing the event displays again for run 268 (due to smaller number of events):
- (same file as above)
Look at event 16529 in this run 268.
The reason the old code removes less is the bug that was fixed yesterday in the new code: if there is a cluster on an outer chip which passes the logL cut, it causes passed = true to be set!
So: From here, we'll rerun both the old and new code without plotting to generate output files that we can plot (background and clusters).
The resulting indices from the old code without lineveto:
Run: 297 Passed indices before septem veto 774 Passed indices after septem veto 141
Run: 242 Passed indices before septem veto 447 Passed indices after septem veto 86
Run: 256 Passed indices before septem veto 797 Passed indices after septem veto 150
Run: 268 Passed indices before septem veto 180 Passed indices after septem veto 42
Run: 281 Passed indices before septem veto 834 Passed indices after septem veto 178
Run: 272 Passed indices before septem veto 1176 Passed indices after septem veto 253
Run: 274 Passed indices before septem veto 1207 Passed indices after septem veto 218
Run: 270 Passed indices before septem veto 846 Passed indices after septem veto 175
Run: 306 Passed indices before septem veto 81 Passed indices after septem veto 18
Run: 246 Passed indices before septem veto 309 Passed indices after septem veto 56
Run: 263 Passed indices before septem veto 307 Passed indices after septem veto 56
Run: 298 Passed indices before septem veto 607 Passed indices after septem veto 128
Run: 303 Passed indices before septem veto 457 Passed indices after septem veto 91
Run: 287 Passed indices before septem veto 318 Passed indices after septem veto 69
Run: 248 Passed indices before septem veto 500 Passed indices after septem veto 98
Run: 299 Passed indices before septem veto 197 Passed indices after septem veto 36
Run: 291 Passed indices before septem veto 679 Passed indices after septem veto 124
Run: 295 Passed indices before septem veto 340 Passed indices after septem veto 64
Run: 285 Passed indices before septem veto 837 Passed indices after septem veto 177
Run: 240 Passed indices before septem veto 440 Passed indices after septem veto 91
Run: 301 Passed indices before septem veto 722 Passed indices after septem veto 150
Run: 267 Passed indices before septem veto 100 Passed indices after septem veto 24
Run: 276 Passed indices before septem veto 1842 Passed indices after septem veto 376
Run: 279 Passed indices before septem veto 889 Passed indices after septem veto 167
Run: 293 Passed indices before septem veto 941 Passed indices after septem veto 205
Run: 254 Passed indices before septem veto 499 Passed indices after septem veto 92
Run: 244 Passed indices before septem veto 319 Passed indices after septem veto 58
Run: 278 Passed indices before septem veto 320 Passed indices after septem veto 71
Run: 283 Passed indices before septem veto 1089 Passed indices after septem veto 212
Run: 258 Passed indices before septem veto 278 Passed indices after septem veto 62
Run: 289 Passed indices before septem veto 322 Passed indices after septem veto 62
Run: 250 Passed indices before septem veto 380 Passed indices after septem veto 72
Run: 261 Passed indices before septem veto 1095 Passed indices after septem veto 219
Run: 265 Passed indices before septem veto 916 Passed indices after septem veto 178
The clusters distributed on the chip:
The background rate:
Now redo the same with the new code.
The passed indices:
Run: 297 Passed indices before septem veto 774 Passed indices after septem veto 114
Run: 242 Passed indices before septem veto 447 Passed indices after septem veto 73
Run: 256 Passed indices before septem veto 797 Passed indices after septem veto 123
Run: 268 Passed indices before septem veto 180 Passed indices after septem veto 35
Run: 281 Passed indices before septem veto 834 Passed indices after septem veto 152
Run: 272 Passed indices before septem veto 1176 Passed indices after septem veto 195
Run: 274 Passed indices before septem veto 1207 Passed indices after septem veto 176
Run: 270 Passed indices before septem veto 846 Passed indices after septem veto 137
Run: 306 Passed indices before septem veto 81 Passed indices after septem veto 15
Run: 246 Passed indices before septem veto 309 Passed indices after septem veto 45
Run: 263 Passed indices before septem veto 307 Passed indices after septem veto 49
Run: 298 Passed indices before septem veto 607 Passed indices after septem veto 98
Run: 303 Passed indices before septem veto 457 Passed indices after septem veto 73
Run: 287 Passed indices before septem veto 318 Passed indices after septem veto 62
Run: 248 Passed indices before septem veto 500 Passed indices after septem veto 80
Run: 299 Passed indices before septem veto 197 Passed indices after septem veto 32
Run: 291 Passed indices before septem veto 679 Passed indices after septem veto 97
Run: 295 Passed indices before septem veto 340 Passed indices after septem veto 58
Run: 285 Passed indices before septem veto 837 Passed indices after septem veto 133
Run: 240 Passed indices before septem veto 440 Passed indices after septem veto 78
Run: 301 Passed indices before septem veto 722 Passed indices after septem veto 120
Run: 267 Passed indices before septem veto 100 Passed indices after septem veto 17
Run: 276 Passed indices before septem veto 1842 Passed indices after septem veto 296
Run: 279 Passed indices before septem veto 889 Passed indices after septem veto 134
Run: 293 Passed indices before septem veto 941 Passed indices after septem veto 166
Run: 254 Passed indices before septem veto 499 Passed indices after septem veto 79
Run: 244 Passed indices before septem veto 319 Passed indices after septem veto 50
Run: 278 Passed indices before septem veto 320 Passed indices after septem veto 61
Run: 283 Passed indices before septem veto 1089 Passed indices after septem veto 166
Run: 258 Passed indices before septem veto 278 Passed indices after septem veto 55
Run: 289 Passed indices before septem veto 322 Passed indices after septem veto 48
Run: 250 Passed indices before septem veto 380 Passed indices after septem veto 56
Run: 261 Passed indices before septem veto 1095 Passed indices after septem veto 178
Run: 265 Passed indices before septem veto 916 Passed indices after septem veto 141
The cluster distribution is found in:
And the background rate:
Comparing these two background rates, we see that the background with the new code is lower than with the old code!
This is much more visible when comparing all the clusters: we indeed have almost 1000 fewer clusters in this case!
The next step is to also apply the likelihood cut on both the 2017 and 2018 data & also use the line cut to see if we can actually reproduce the following background rate: .
First though, we check if we can find the exact files & command to reproduce that file:
Looking into the zsh history:
: 1640019890:0;hdfview /tmp/lhood_2018_septemveto.h5.
: 1640019897:0;hdfview /tmp/lhood_2017_septemveto.h5
: 1640019986:0;./likelihood /mnt/1TB/CAST/2017/DataRuns2017_Reco.h5 --h5out /tmp/lhood_2017_septemveto_testing.h5 --altCdlFile /mnt/1TB/CAST/CDL_2019/calibration-cdl-2018.h5 --altRefFile /mnt/1TB/CAST/CDL_2019/XrayReferenceFile2018.h5 --cdlYear=2018 --region=crGold --septemveto
: 1640020217:0;hdfview /tmp/lhood_2017_septemveto_testing.h5
: 1640021131:0;nim r tests/tgroups.nim
: 1640021511:0;./plotBackgroundRate /tmp/lhood_2017_septemveto.h5 /tmp/lhood_2018_septemveto.h5 --title="GridPix background rate based on 2017/18 data at CAST"
: 1640022228:0;./likelihood /mnt/1TB/CAST/2018/DataRuns2018_Reco.h5 --h5out /tmp/lhood_2018_septemveto.h5 --altCdlFile /mnt/1TB/CAST/CDL_2019/calibration-cdl-2018.h5 --altRefFile /mnt/1TB/CAST/CDL_2019/XrayReferenceFile2018.h5 --cdlYear=2018 --region=crGold --septemveto
: 1640022305:0;hdfview /mnt/1TB/CAST/2018/DataRuns2018_Reco.h5
: 1640022317:0;cd /mnt/1TB/CAST
: 1640022328:0;rm DataRuns2018_Reco.h5
: 1640022390:0;rm /tmp/lhood_2018_septemveto.h5
: 1640022467:0;nim r hello.nim
: 1640022605:0;mkdir examples
: 1640022725:0;nim r hello_README.nim
: 1640023030:0;hdfview /tmp/lhood_2018_septemveto.h5
: 1640025908:0;./plotBackgroundRate /tmp/lhood_2017_septemveto.h5 /tmp/lhood_2018_septemveto.h5
: 1640028916:0;./plotBackgroundRate /tmp/lhood_2017_septemveto.h5 /tmp/lhood_2018_septemveto.h5 --separateFiles
: 1640028919:0;evince plots/background_rate_2017_2018_show2014_false_separate_true.pdf
: 1640032332:0;./plotBackgroundRate /tmp/lhood_2017_septemveto.h5 /tmp/lhood_2018_septemveto.h5 --combName bla --combYear 2018
: 1640033107:0;dragon background_rate_2017_2018_show2014_false_separate_false.
: 1640034992:0;dragon background_rate_2017_2018_show2014_false_separate_false.pdf
: 1640035057:0;mv background_rate_2017_2018_septemveto_gold_minorTicks.pdf
: 1640035065:0;mv background_rate_2017_2018_show2014_false_separate_false.pdf background_rate_2017_2018_septemveto_gold_minorTicks.pdf
: 1640035087:0;evince background_rate_2017_2018_show2014_false_separate_false.pdf
: 1640035110:0;mv background_rate_2017_2018_show2014_false_separate_false.pdf background_rate_2017_2018_septemveto_gold_12ticks.pdf
: 1640035121:0;dragon background_rate_2017_2018_septemveto_gold_12ticks.pdf background_rate_2017_2018_septemveto_gold_minorTicks.pdf
: 1640035181:0;evince background_rate_2017_2018_septemveto_gold_minorTicks.pdf
: 1640085508:0;./plotBackgroundRate /tmp/lhood_run3_sigEff_65.h5 ../../resources/LikelihoodFiles/lhood_2018_no_tracking.h5 --separateFiles
: 1640085515:0;./plotBackgroundRate /tmp/lhood_2017_septemveto.h5 /tmp/lhood_2018_septemveto.h5 --combName bla --combYear 2018 --title "GridPix background rate based on CAST data in 2017/18" --useTeX
: 1640088525:0;cp background_rate_2017_2018_septemveto_gold_12ticks.pdf ~/org/Figs/statusAndProgress/backgroundRates/
This is a bit fishy:
- The 12 ticks background rate was definitely created in the call at 1640085515 (second to last line).
- The input files /tmp/lhood_2017_septemveto.h5 and /tmp/lhood_2018_septemveto.h5 can be found to be created further above at 1640022228 (for 2018), but not for 2017. At 1640019986 we created the 2017 file with the _testing suffix.
- The 2018 file is removed before the plotBackgroundRate call.

This probably implies the order is a bit weird / some history is missing, as things were done asynchronously from different shells?
The last reference to lhood_2017_septemveto.h5 is actually from much earlier, namely:
: 1635345634:0;./likelihood /mnt/1TB/CAST/2017/DataRuns2017_Reco.h5 --h5out /tmp/lhood_2017_septemveto.h5 --altCdlFile /mnt/1TB/CAST/CDL_2019/calibration-cdl-2018.h5 --altRefFile /mnt/1TB/CAST/CDL_2019/XrayReferenceFile2018.h5 --cdlYear=2018 --region=--septemveto
Furthermore, the lhood*.h5 files used here do not exist anymore (no reboot since the plots, I think). There are such files in /tmp/ at the time of writing this, but those files do not produce the same background rate.
What we can check is the date the plot was created, to narrow down the state of the code we were running:
From the plotting call above, the timestamp is
(decode-time (seconds-to-time 1640085515)) ;; C-u C-x C-e to insert result into buffer
(35 18 12 21 12 2021 2 nil 3600)
So it was created on the 21st of December 2021.
The last commit before this date was:
185e9eceab204d2b400ed787bbd02ecf986af983 [geometry] fix severe pitch conversion bug
from Dec 14.
It is possible of course that the pitch conversion was precisely the reason for the wrong background? But at the same time we don't know what local state we ran with, i.e. whether there were local changes etc.
As a final thing, let's at least check whether the lhood*.h5 files used back then were created only for the gold region or for the full chip. Going by the zsh history above, the argument was always --region=crGold.
IMPORTANT: A big takeaway from all this is that we really need the git hash as well as the veto & clustering algorithm settings used in the output of the likelihood H5 files!
Thus, as a final test, let's rerun with the "old" code as we used it (a commit from Jan 14) and see if we get the same result including the line veto, but only for the gold region.
Old code, gold region w/ line veto:
./likelihood /mnt/1TB/CAST/2018_2/DataRuns2018_Reco.h5 \
    --h5out ~/CastData/ExternCode/TimepixAnalysis/resources/LikelihoodFiles/debugSeptemVeto/lhood_2018_crGold_old_septemveto_lineveto.h5 \
    --altCdlFile /mnt/1TB/CAST/CDL_2019/calibration-cdl-2018.h5 \
    --altRefFile /mnt/1TB/CAST/CDL_2019/XrayReferenceFile2018.h5 \
    --cdlYear=2018 --region=crGold --septemveto
After this we'll run 2017 as well.
./likelihood /mnt/1TB/CAST/2017/DataRuns2017_Reco.h5 \
    --h5out ~/CastData/ExternCode/TimepixAnalysis/resources/LikelihoodFiles/debugSeptemVeto/lhood_2017_crGold_old_septemveto_lineveto.h5 \
    --altCdlFile /mnt/1TB/CAST/CDL_2019/calibration-cdl-2018.h5 \
    --altRefFile /mnt/1TB/CAST/CDL_2019/XrayReferenceFile2018.h5 \
    --cdlYear=2018 --region=crGold --septemveto
Using these output files to generate a background rate
./plotBackgroundRate ~/CastData/ExternCode/TimepixAnalysis/resources/LikelihoodFiles/debugSeptemVeto/lhood_2017_crGold_old_septemveto_lineveto.h5 \
    ~/CastData/ExternCode/TimepixAnalysis/resources/LikelihoodFiles/debugSeptemVeto/lhood_2018_crGold_old_septemveto_lineveto.h5 \
    --combName 2017/18 --combYear 2018 --region crGold
results in:
So, also not the background rate we got in December.
As a final check, I'll check out the code from the commit mentioned above from December and see what happens if we do the same as this.
A theory might be the pitch conversion bug: in the commit from Dec 14, we only fixed it in one out of two places!
Running now:
./likelihood /mnt/1TB/CAST/2017/DataRuns2017_Reco.h5 --h5out \
    ~/CastData/ExternCode/TimepixAnalysis/resources/LikelihoodFiles/debugSeptemVeto/lhood_2017_crGold_dec_14_2021_septemveto_lineveto.h5 \
    --altCdlFile /mnt/1TB/CAST/CDL_2019/calibration-cdl-2018.h5 \
    --altRefFile /mnt/1TB/CAST/CDL_2019/XrayReferenceFile2018.h5 \
    --cdlYear=2018 --region=crGold --septemveto
ok, great. That code doesn't even run properly…
Tried another commit, which has the same issue.
At this point it's likely that something fishy was going on there.
As a sanity check, try again the current code with the line veto and gold only. The files:
~/CastData/ExternCode/TimepixAnalysis/resources/LikelihoodFiles/debugSeptemVeto/lhood_2017_crGold_new_septemveto_lineveto.h5
~/CastData/ExternCode/TimepixAnalysis/resources/LikelihoodFiles/debugSeptemVeto/lhood_2018_crGold_new_septemveto_lineveto.h5
giving the following background rate:
Comparing this background to the one using the old code actually shows a very nice improvement all across the board and in particular in the Argon peak at 3 keV.
The shape is similar to the 12ticks plot from December last year, just a bit higher in the very low energy range.
As a final check, I'll now recreate the cluster maps for old & new code. For the old code without the line cut and for the new one with the line cut. The files:
~/CastData/ExternCode/TimepixAnalysis/resources/LikelihoodFiles/debugSeptemVeto/lhood_2017_crAll_new_septemveto_lineveto.h5
~/CastData/ExternCode/TimepixAnalysis/resources/LikelihoodFiles/debugSeptemVeto/lhood_2018_crAll_new_septemveto_lineveto.h5
Gives the following background clusters over the whole chip:
So: we have about 3500 clusters more than with the files that we use for the limit calculation!
Two questions:
- what are these clusters? Why are they not removed?
- is the background rate in the gold region also higher?
The background rate in the gold region is:
It's even worse, then. Not only do we have more clusters over the whole chip than in our best case scenario (which we cannot reproduce), but our background rate is also worse when computed from the full chip logL file than from the gold region only file. The only difference between these two cases should be the line veto, as the "in region check" happens in the gold region in one case and on the whole chip in the other.
Let's extract clusters from each of the likelihood files and then see which clusters appear in what context.
UPDATE: The culprit is the inRegion procedure for the crAll case:
func inRegion*(centerX, centerY: float, region: ChipRegion): bool {.inline.} =
  # ...
  of crAll:
    # simply always return good
    result = true
This is the reason there are more clusters in the crAll case than a) we expect and, more importantly, b) the background rate is different from crAll to crGold!
Of course not all coordinates are valid for crAll!! Only those that are actually on the freaking chip.
It effectively meant that with the change to crAll in the "in region check" for the line veto, the veto never did anything!
Let's change that and re-run the code again… :(
The output files:
~/CastData/ExternCode/TimepixAnalysis/resources/LikelihoodFiles/debugSeptemVeto/lhood_2017_crAll_new_septemveto_lineveto_fixed_inRegion.h5
~/CastData/ExternCode/TimepixAnalysis/resources/LikelihoodFiles/debugSeptemVeto/lhood_2018_crAll_new_septemveto_lineveto_fixed_inRegion.h5
The background clusters of the version with fixed inRegion are:
We can see that finally we have a "good" number (i.e. expected number) of clusters again. ~9500 clusters is similar to the number we get from the files we use as input for the limit calculation at this point.
Looking at the background rate in the gold region for these files:
We can see that this background rate is still (!!!) higher than in the direct crGold case.
We need to pick up the extractClusterInfo tool again and compare the actual clusters used in each of these two cases.
While we can in principle plot the clusters that pass directly, that won't be very helpful by itself. Better to print out the clusters of a single run that survive in the crAll case within the gold region and do the same with the direct crGold file. Then just get the event numbers and look at the plots using the --plotSeptem option of likelihood.nim.
Ideally we should refactor out the drawing logic into a standalone tool that is imported in likelihood, but all the additional information is so tightly coupled to the veto logic that it'd get ugly.
First call it for run 261 (relatively long, should give enough mismatches between the files) on the crGold file:
cd ~/CastData/ExternCode/TimepixAnalysis/Tools
./extractClusterInfo -f ~/CastData/ExternCode/TimepixAnalysis/resources/LikelihoodFiles/debugSeptemVeto/lhood_2018_crGold_new_septemveto_lineveto.h5 \
    --region crGold --run 261
And now the same for the crAll file:
extractClusterInfo -f ~/CastData/ExternCode/TimepixAnalysis/resources/LikelihoodFiles/debugSeptemVeto/lhood_2018_crAll_new_septemveto_lineveto_fixed_inRegion.h5 \
--region crGold --run 261
There we have it. There is one more event in the crAll case, namely event 14867 of run 261.
Let's look at it; call likelihood with the --plotSeptem option.
./likelihood /mnt/1TB/CAST/2018_2/DataRuns2018_Reco.h5 --h5out \
    /tmp/test_noworries.h5 --altCdlFile \
    /mnt/1TB/CAST/CDL_2019/calibration-cdl-2018.h5 \
    --altRefFile /mnt/1TB/CAST/CDL_2019/XrayReferenceFile2018.h5 \
    --cdlYear=2018 --region=crAll --septemveto --lineveto --plotSeptem
The event is the following:
What the hell? How does that not pass in case we only look at crGold?…
Let's create the plots for that case… Run the same command as above with --region crGold.
Great, even making sure the correct region is used in inRegionOfInterest in likelihood.nim, this event suddenly does pass, even if we run it just with crGold…
Guess it's time to rerun the likelihood again, but this time only on the gold region….
./likelihood /mnt/1TB/CAST/2018_2/DataRuns2018_Reco.h5 --h5out \
    ~/CastData/ExternCode/TimepixAnalysis/resources/LikelihoodFiles/debugSeptemVeto/lhood_2018_crGold_new_septemveto_lineveto_fixed_inRegion.h5 \
    --altCdlFile /mnt/1TB/CAST/CDL_2019/calibration-cdl-2018.h5 \
    --altRefFile /mnt/1TB/CAST/CDL_2019/XrayReferenceFile2018.h5 \
    --cdlYear=2018 --region=crGold --septemveto --lineveto
First, let's look at the same run 261 of the new output file:
extractClusterInfo -f ~/CastData/ExternCode/TimepixAnalysis/resources/LikelihoodFiles/debugSeptemVeto/lhood_2018_crGold_new_septemveto_lineveto_fixed_inRegion.h5 \
--region crGold --run 261
And indeed, now the same event is found here, 14867…
I assume the background rate will now be the same as in the crAll case cut to crGold?
First checked the clusters in the gold region again:
basti at void in ~/CastData/ExternCode/TimepixAnalysis/Tools
ツ ./extractClusterInfo -f ../resources/LikelihoodFiles/debugSeptemVeto/lhood_2018_crGold_new_septemveto_lineveto_fixed_inRegion.h5 --region crGold > crGold.txt
basti at void in ~/CastData/ExternCode/TimepixAnalysis/Tools
ツ ./extractClusterInfo -f ../resources/LikelihoodFiles/debugSeptemVeto/lhood_2018_crAll_new_septemveto_lineveto_fixed_inRegion.h5 --region crGold > crAll.txt
basti at void in ~/CastData/ExternCode/TimepixAnalysis/Tools
ツ diff crGold.txt crAll.txt
12a13
> (run: 263, event: 22755, cX: 9.488187499999999, cY: 6.219125)
27d27
< INFO: no events left in run number 267 for chip 3
56a57
> (run: 256, event: 30527, cX: 7.846476683937824, cY: 9.488069948186528)
108a110
> (run: 297, event: 62781, cX: 9.433578431372547, cY: 8.849428104575162)
128a131
> (run: 283, event: 94631, cX: 9.469747838616716, cY: 9.37306195965418)
202c205
< Found 200 clusters in region: crGold
---
> Found 204 clusters in region: crGold
So they are still different by 4 events. Let's look at these….
Run 263.
The obvious thing looking at the coordinates of these clusters is that they are all very close to 9.5 in one coordinate. That is the cutoff of the gold region (4.5 to 9.5 mm). Does the filtering go weird somewhere?
The event in run 263 is: Looking at the title, we can see that the issue is the line veto. It seems like these close clusters are somehow interpreted as "outside" the region of interest and thus they veto themselves.
From the debug output of likelihood:
Cluster center: 23.5681875 and 20.299125 line veto?? false at energy ? 5.032889059014033 with log 5.60286264130613 and ut 11.10000000000002 for cluster: 0 for run 263 and event 22755
Computing the cluster center from the given coordinates:
23.5681875 - 14 = 9.5681875
20.299125 - 14 = 6.299125
which is obviously outside the 9.5 region…
But the coordinates reported above were cX: 9.488187499999999, cY: 6.219125.
So something is once again amiss. Are the septem coordinates simply not computed correctly? One pixel off?
I have an idea what might be going on. Possibly the pixels reported by TOS start at 1 instead of 0. That would mean the pixel ⇒ Septem pixel conversion is off by 1 / 2 pixels.
Check with printXyDataset, by just printing one run:
printXyDataset -f /mnt/1TB/CAST/2018_2/DataRuns2018_Reco.h5 --run 263 --chip 3 --dset "x" --reco
So, no. The pixel information does indeed start at 0…
Need to check where the center cluster position is computed in likelihood then.
Or rather, first let's check what applyPitchConversion actually does in these cases:
const NPIX = 256
const PITCH = 0.055
let TimepixSize = float(NPIX) * PITCH # = 14.08 mm (conversion needed to compile)

func applyPitchConversion*[T: (float | SomeInteger)](x, y: T, npix: int): (float, float) =
  ## template which returns the converted positions on a Timepix
  ## pixel position --> position from center in mm
  ((float(npix) - float(x) - 0.5) * PITCH, (float(y) + 0.5) * PITCH)

# first find boundary of gold region
let s84 = applyPitchConversion(84, 127, NPIX)
echo s84
# what's max
echo applyPitchConversion(0, 0, NPIX)
echo applyPitchConversion(255, 255, NPIX)
let center84 = applyPitchConversion(256 + 84, 127, NPIX * 3)
echo center84
echo "Convert to center: ", center84[0] - TimepixSize
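For reference, the values this snippet should produce (computed by hand from the constants above, so take these as expected values modulo float formatting):

(9.4325, 7.0125)           <- s84: pixel 84 maps to 9.4325 mm
(14.0525, 0.0275)          <- pixel (0, 0)
(0.0275, 14.0525)          <- pixel (255, 255)
(23.5125, 7.0125)          <- center84 in septem coordinates
Convert to center: 9.4325  <- 23.5125 - 14.08; subtracting 14.0 instead gives 9.5125!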
So, from the code snippet above we learned the following:
- either the pixel pitch is not exactly \(\SI{0.055}{\milli\meter}\),
- or the size of the Timepix is not \(\SI{14}{\milli\meter}\), since \(256 \cdot \SI{0.055}{\milli\meter} = \SI{14.08}{\milli\meter}\).

I think the former is more likely, i.e. the real size is larger than 14 mm. Note that the 0.08 mm difference is exactly the offset seen above (\(9.5681875 - 9.4881875 = 0.08\)). Using that size, TimepixSize, the position of pixel 84 on the center chip comes out just inside of the gold region (the computation is the same as in the withSeptem template!).
So, once we fix that in likelihood, it should finally be correct.
Rerunning with crGold to verify that the above event 22755 is indeed handled correctly now.
As we can see, the event does not pass now, as it shouldn't.
Final check: run likelihood on the full crGold region and compare the output of clusters with extractClusterInfo.
extractClusterInfo -f ~/CastData/ExternCode/TimepixAnalysis/resources/LikelihoodFiles/debugSeptemVeto/lhood_2018_crGold_new_septemveto_lineveto_fixed_inRegion_fixed_timepixSize.h5 \
--region crGold --short
Indeed, we get the same number of clusters as in the crAll case now. Yay.
That final file is:
~/CastData/ExternCode/TimepixAnalysis/resources/LikelihoodFiles/debugSeptemVeto/lhood_2018_crGold_new_septemveto_lineveto_fixed_inRegion_fixed_timepixSize.h5
After that, we need to check how many clusters we get in the new code using the line veto for the whole chip. We should hopefully end up with < 10000 clusters over the whole center chip. For that we at least have the likelihood files in the resources directory as a reference.
6.3.6. TODO Create background cluster plot from H5 files used for limit as comparison
6.3.7. TODO Implement clustering & veto & git hash as attributes from likelihood!
6.4. Estimating the random coincidence rate of the septem & line veto [/]
UPDATE: See ./../../phd/thesis.html for the currently up to date numbers. The resulting files are in ./../../phd/resources/estimateRandomCoinc/, produced in ./../journal.html.
We reran the code today after fixing the issues with the septem veto (clustering with real spacing instead of without, and the rotation angle for septem geometry / normal) and the numbers changed a bit.
- [ ] NEED to explain that the eccentricity line veto cutoff is not used, but tested. Also NEED to obviously give the numbers for both setups.
- [ ] NAME THE ABSOLUTE EFFICIENCIES OF EACH SETUP
- [ ] IMPORTANT: The random coincidence we calculate here changes not only the dead time for the tracking time, but also for the background rate! As such we need to regulate both!
- [ ] REWRITE THIS! -> Important parts are that background rates are only interesting if one understands the associated efficiencies. So we need to explain that. This part should become :noexport:, but a shortened simpler version of this should remain.
One potential issue with the septem and line veto is that the shutter times we ran with at CAST are very long (\(> \SI{2}{s}\)), but only the center chip is triggered by the FADC. This means that the outer chips can record cluster data that is not correlated to what the center chip sees. When applying one of these two vetoes, the chance for random coincidence might be non-negligible.
In order to estimate this we can create fake events from real clusters on the center chip, combined with clusters for the outer chips taken from different events. This way we bootstrap a larger number of events than otherwise available, knowing that the geometric data cannot be correlated. Any vetoing in these cases must therefore be a random coincidence.
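The core of the bootstrapping is trivial; a minimal sketch of the index sampling (illustrative only, the real implementation rewrites the chip to event index of the likelihood tool as described below):

import std / random

proc fakeEventPairs(nEvents, nFake: int): seq[tuple[center, outer: int]] =
  ## sample pairs of event indices such that the center chip cluster and
  ## the outer chip data never stem from the same event
  for _ in 0 ..< nFake:
    let c = rand(nEvents - 1)
    var o = rand(nEvents - 1)
    while o == c:
      o = rand(nEvents - 1)
    result.add (center: c, outer: o)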
As the likelihood tool already uses effectively an index to map the cluster indices for each chip to their respective event number, we've implemented this there (--estimateRandomCoinc) by rewriting that index.
It is a good idea to also run it together with the --plotseptem option to actually see some events and verify with your own eyes that the events are actually "correct" (i.e. not the original ones). You will note that there are many events that "clearly" look as if the bootstrapping is not working correctly, because they look way too much as if they are "obviously correlated". To give yourself a better sense that this is indeed just coincidence, you can run the tool with the --estFixedEvents option, which bootstraps events using a fixed cluster in the center for each run. Checking out those event displays makes for a convincing case that random coincidences unfortunately look convincing even to our own eyes.
likelihood -f ~/CastData/data/DataRuns2018_Reco.h5 \
    --h5out /tmp/lhood_2018_crAll_80eff_septem_fake.h5 \
    --region crAll --cdlYear 2018 \
    --cdlFile ~/CastData/data/CDL_2019/calibration-cdl-2018.h5 \
    --septemveto --estimateRandomCoinc
which writes the file /tmp/septem_fake_veto.txt, which for this case is found at ./../resources/septem_veto_random_coincidences/septem_veto_before_after_fake_events_septem_old.txt (note: the updated numbers from the latest state of the code are in the same file without the _old suffix).
Mean value and fraction (from the script in the next section):
File: /home/basti/org/resources/septem_veto_random_coincidences/septem_veto_before_after_fake_events_septem.txt
	Mean output = 1674.705882352941
	Fraction of events left = 0.8373529411764704
From this file the method typically removes a bit more than 300 out of 2000 bootstrapped fake events. This implies a random coincidence rate of about 17% (or effectively a further 17% reduction in efficiency / a 17% increase in dead time).
Of course this does not even include the line veto, which will drop it further. Before we combine both of them, let's run it with the line veto alone:
likelihood -f ~/CastData/data/DataRuns2018_Reco.h5 \
    --h5out /tmp/lhood_2018_crAll_80eff_line_fake.h5 \
    --region crAll --cdlYear 2018 \
    --cdlFile ~/CastData/data/CDL_2019/calibration-cdl-2018.h5 \
    --lineveto --estimateRandomCoinc
this results in: ./../resources/septem_veto_random_coincidences/septem_veto_before_after_fake_events_line.txt
Mean value and fraction:
File: /home/basti/org/resources/septem_veto_random_coincidences/septem_veto_before_after_fake_events_line.txt
	Mean output = 1708.382352941177
	Fraction of events left = 0.8541911764705882
And finally both together:
likelihood -f ~/CastData/data/DataRuns2018_Reco.h5 \
    --h5out /tmp/lhood_2018_crAll_80eff_septem_line_fake.h5 \
    --region crAll --cdlYear 2018 \
    --cdlFile ~/CastData/data/CDL_2019/calibration-cdl-2018.h5 \
    --septemveto --lineveto --estimateRandomCoinc
which generated the following output: ./../resources/septem_veto_random_coincidences/septem_veto_before_after_fake_events_septem_line.txt
Mean value and fraction:
File: /home/basti/org/resources/septem_veto_random_coincidences/septem_veto_before_after_fake_events_septem_line.txt
	Mean output = 1573.676470588235
	Fraction of events left = 0.7868382352941178
This comes out to a fraction of 78.68% of events left after running the vetoes on our bootstrapped fake events. Combining it with a software efficiency of ε = 80%, the total combined efficiency would be \(ε_\text{total} = 0.8 · 0.7868 = 0.629\), so about 63%.
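Putting the three setups side by side (the fractions are the numbers measured above, the 80 % is the assumed logL software efficiency):

# combined efficiency = logL software efficiency * fraction of fake
# (bootstrapped) events that survive the given veto setup
const epsLogL = 0.8
for (name, frac) in [("septem", 0.8374), ("line", 0.8542), ("septem+line", 0.7868)]:
  echo name, ": eps_total = ", epsLogL * frac
# -> septem: 0.670, line: 0.683, septem+line: 0.629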
Finally, let's now prepare some event displays for the case of using different center clusters and for using the same ones. We run the likelihood tool with the --plotSeptem option and stop the program after we have enough plots.
In this context note the energy cut range for the --plotseptem option (by default set to 5 keV), adjustable via the PLOT_SEPTEM_E_CUTOFF environment variable.
First with different center clusters:
PLOT_SEPTEM_E_CUTOFF=10.0 likelihood \
    -f ~/CastData/data/DataRuns2018_Reco.h5 \
    --h5out /tmp/dummy.h5 \
    --region crAll --cdlYear 2018 \
    --cdlFile ~/CastData/data/CDL_2019/calibration-cdl-2018.h5 \
    --septemveto --lineveto --estimateRandomCoinc --plotseptem
which are wrapped up using pdfunite and stored in: ./Figs/background/estimateSeptemVetoRandomCoinc/fake_events_septem_line_veto_all_outer_events.pdf
and now with fixed clusters:
PLOT_SEPTEM_E_CUTOFF=10.0 likelihood \
    -f ~/CastData/data/DataRuns2018_Reco.h5 \
    --h5out /tmp/dummy.h5 \
    --region crAll --cdlYear 2018 \
    --cdlFile ~/CastData/data/CDL_2019/calibration-cdl-2018.h5 \
    --septemveto --lineveto --estimateRandomCoinc --estFixedEvent --plotseptem
(Note that the cluster that is chosen can be set to a different index using SEPTEM_FAKE_FIXED_CLUSTER; by default it just uses 5.)
These events are here:
./Figs/background/estimateSeptemVetoRandomCoinc/fake_events_fixed_cluster_septem_line_veto_all_outer_events.pdf
Combining different options of the line veto and the eccentricity cut for the line veto, as well as applying both the septem and the line veto to real data as well as to fake bootstrapped data, we can make an informed decision about the settings to use, and at the same time get an understanding of the real dead time we introduce. Fig. 60 shows precisely such data. We can see that the fraction passing the veto setups (y axis) drops the further we go towards a low eccentricity cut (x axis). For the real data (Real suffix in the legend) the drop is faster than for the fake bootstrapped data (Fake suffix in the legend), however, which means that we can choose the eccentricity cut as low as we like (effectively disabling the cut at \(ε_\text{cut} = 1.0\)). The exact choice between the purple / green pair (line veto including all clusters, even the one containing the original cluster) and the turquoise / blue pair (septem veto + line veto with only those clusters that do not contain the original one; those are covered by the septem veto) is not entirely clear. Both will be investigated for their effect on the expected limit. The important point is that the fake data allows us to estimate the random coincidence rate, which needs to be treated as an additional dead time during background and solar tracking time. A lower background may or may not be beneficial, compared to a higher dead time.
6.4.1. TODO Rewrite the whole estimation to a proper program [/]
IMPORTANT: That program should call likelihood alone, and likelihood needs to be rewritten such that it outputs the septem random coincidence (or real removal) into the H5 output file. Maybe just add a type that stores the information, which we serialize. With the serialized info about the veto settings we can then reconstruct in code what is what.
Or possibly better if the output is written to a separate file such that we don't store all the cluster data.
Anyhow, then rewrite the code snippet in the section below that prints the information about the random coincidence rates and creates the plot.
6.4.2. Run a whole bunch more cases
The below is running now. Still running as of the time of this update; damn this is slow.
- [X] INVESTIGATE PERFORMANCE AFTER IT'S DONE
- [ ] We should be able to run ~4 (depending on choice even more) in parallel, no? See the sketch after the script below.
import shell, strutils, os
#let vals = @[1.2, 1.3, 1.4, 1.5, 1.6, 1.7, 1.8, 1.9, 2.0]
#let vals = @[1.0, 1.1]
let vals = @[1.0, 1.1, 1.2, 1.3, 1.4, 1.5, 1.6, 1.7, 1.8, 1.9, 2.0]
#let vetoes = @["--lineveto", "--lineveto --estimateRandomCoinc"]
let vetoes = @["--septemveto --lineveto", "--septemveto --lineveto --estimateRandomCoinc"]
## XXX: ADD CODE DIFFERENTIATING SEPTEM + LINE & LINE ONLY IN NAMES AS WELL!
#const lineVeto = "lvRegular"
const lineVeto = "lvRegularNoHLC"
let cmd = """
LINE_VETO_KIND=$# \
ECC_LINE_VETO_CUT=$# \
USE_REAL_LAYOUT=true \
likelihood -f ~/CastData/data/DataRuns2018_Reco.h5 \
  --h5out /t/lhood_2018_crAll_80eff_septem_line_ecc_cutoff_$#_$#_real_layout$#.h5 \
  --region crAll --cdlYear 2018 \
  --cdlFile ~/CastData/data/CDL_2019/calibration-cdl-2018.h5 $#
"""
proc toName(veto: string): string =
  (if "estimateRandomCoinc" in veto: "_fake_events" else: "")
for val in vals:
  for veto in vetoes:
    let final = cmd % [lineVeto, $val, $val, lineVeto, toName(veto), $veto]
    let (res, err) = shellVerbose:
      one:
        cd /tmp
        ($final)
    writeFile("/tmp/logL_output_septem_line_ecc_cutoff_$#_$#_real_layout$#.txt" % [$val, lineVeto, toName(veto)], res)
    let outpath = "/home/basti/org/resources/septem_veto_random_coincidences/autoGen/"
    let outfile = "septem_veto_before_after_septem_line_ecc_cutoff_$#_$#_real_layout$#.txt" % [$val, lineVeto, toName(veto)]
    copyFile("/tmp/septem_veto_before_after.txt", outpath / outfile)
    removeFile("/tmp/septem_veto_before_after.txt") # remove file to not append more and more to file
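Regarding the parallelization TODO above: since each likelihood call is independent, the loop could collect all command strings first and hand them to execProcesses from std/osproc. A sketch reusing the definitions from the script above; note that the shared /tmp/septem_veto_before_after.txt output would first need to be made unique per process to avoid the runs clobbering each other:

import std / osproc

var cmds: seq[string]
for val in vals:
  for veto in vetoes:
    cmds.add cmd % [lineVeto, $val, $val, lineVeto, toName(veto), $veto]
# run up to 4 commands concurrently; returns the highest exit code
let errCode = execProcesses(cmds, options = {poStdErrToStdOut, poUsePath, poEvalCommand}, n = 4)
doAssert errCode == 0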
It has finally finished some time before this update. Holy moly, how slow.
We will keep the generated lhood_* and logL_output_* files in ./../resources/septem_veto_random_coincidences/autoGen/ together with the septem_veto_before_after_* files.
See the code in one of the next sections for the 'analysis' of this dataset.
- [X] RERUN THE ABOVE AFTER LINE VETO BUGFIX & PERF IMPROVEMENTS
- [ ] Rerun everything as a check for the final thesis.
6.4.3. Number of events removed in real usage
- [ ] MAYBE EXTEND CODE SNIPPET ABOVE TO ALLOW CHOOSING BETWEEN ε_cut ANALYSIS AND REAL FRACTIONS
As a reference let's quickly run the code also for the normal use case where we don't do any bootstrapping:
likelihood \
  -f ~/CastData/data/DataRuns2018_Reco.h5 \
  --h5out /tmp/dummy_real.h5 \
  --region crAll --cdlYear 2018 \
  --cdlFile ~/CastData/data/CDL_2019/calibration-cdl-2018.h5 \
  --septemveto
which results in ./../resources/septem_veto_random_coincidences/septem_veto_before_after_only_septem.txt
Next the line veto alone:
likelihood \
  -f ~/CastData/data/DataRuns2018_Reco.h5 \
  --h5out /tmp/dummy_real.h5 \
  --region crAll --cdlYear 2018 \
  --cdlFile ~/CastData/data/CDL_2019/calibration-cdl-2018.h5 \
  --lineveto
which results in: ./../resources/septem_veto_random_coincidences/septem_veto_before_after_only_line.txt
And finally both together:
likelihood \
  -f ~/CastData/data/DataRuns2018_Reco.h5 \
  --h5out /tmp/dummy_real_2.h5 \
  --region crAll --cdlYear 2018 \
  --cdlFile ~/CastData/data/CDL_2019/calibration-cdl-2018.h5 \
  --septemveto --lineveto
and this finally yields:
./../resources/septem_veto_random_coincidences/septem_veto_before_after_septem_line.txt
And further for reference let's compute the fake rate when only using the septem veto (as we have no eccentricity dependence, hence a single value):
likelihood -f ~/CastData/data/DataRuns2018_Reco.h5 \
  --h5out /tmp/lhood_2018_crAll_80eff_septem_real_layout.h5 \
  --region crAll \
  --cdlYear 2018 \
  --cdlFile ~/CastData/data/CDL_2019/calibration-cdl-2018.h5 \
  --septemveto \
  --estimateRandomCoinc
Run the line veto with new features:
- real septemboard layout
- eccentricity cut off for tracks participating (ecc > 1.6)
LINE_VETO_KIND=lvRegularNoHLC \
ECC_LINE_VETO_CUT=1.6 \
USE_REAL_LAYOUT=true \
likelihood -f ~/CastData/data/DataRuns2018_Reco.h5 \
  --h5out /tmp/lhood_2018_crAll_80eff_line_ecc_cutof_1.6_real_layout.h5 \
  --region crAll \
  --cdlYear 2018 \
  --cdlFile ~/CastData/data/CDL_2019/calibration-cdl-2018.h5 \
  --lineveto
- [ ] WE SHOULD REALLY LOOK INTO RUNNING THE LINE VETO ONLY USING DIFFERENT ε CUTOFFS! -> Then compare the real application with the fake bootstrap application and see if there is a sweet spot in terms of S/N.
Let's calculate the fraction in all cases:
import strutils

let files = @["/home/basti/org/resources/septem_veto_random_coincidences/septem_veto_before_after_only_septem.txt",
              "/home/basti/org/resources/septem_veto_random_coincidences/septem_veto_before_after_only_line.txt",
              "/home/basti/org/resources/septem_veto_random_coincidences/septem_veto_before_after_septem_line.txt",
              "/home/basti/org/resources/septem_veto_random_coincidences/septem_veto_before_after_fake_events_septem.txt",
              "/home/basti/org/resources/septem_veto_random_coincidences/septem_veto_before_after_fake_events_line.txt",
              "/home/basti/org/resources/septem_veto_random_coincidences/septem_veto_before_after_fake_events_septem_line.txt"]

proc parseFile(fname: string): float =
  var lines = fname.readFile.strip.splitLines()
  var line = 0
  var numRuns = 0
  var outputs = 0
  # if file has more than 68 lines, remove everything before, as that means
  # those were from a previous run
  if lines.len > 68:
    lines = lines[^68 .. ^1]
  doAssert lines.len == 68
  while line < lines.len:
    if lines[line].len == 0: break
    # parse input
    # `Septem events before: 1069 (L,F) = (false, false)`
    let input = lines[line].split(':')[1].strip.split()[0].parseInt
    # parse output
    # `Septem events after fake cut: 137`
    inc line
    let output = lines[line].split(':')[1].strip.parseInt
    result += output.float / input.float
    outputs += output
    inc numRuns
    inc line
  echo "\tMean output = ", outputs.float / numRuns.float
  result = result / numRuns.float

# first the predefined files:
for f in files:
  echo "File: ", f
  echo "\tFraction of events left = ", parseFile(f)

# now all files in our eccentricity cut run directory
const path = "/home/basti/org/resources/septem_veto_random_coincidences/autoGen/"
import std / [os, parseutils, strutils]
import ggplotnim

proc parseEccentricityCutoff(f: string): float =
  let str = "ecc_cutoff_"
  let startIdx = find(f, str) + str.len
  var res = ""
  let stopIdx = parseUntil(f, res, until = "_", start = startIdx)
  echo res
  result = parseFloat(res)

proc determineType(f: string): string =
  ## I'm sorry for this. :)
  if "only_line_ecc" in f:
    result.add "Line"
  elif "septem_line_ecc" in f:
    result.add "SeptemLine"
  else:
    doAssert false, "What? " & $f
  if "lvRegularNoHLC" in f:
    result.add "lvRegularNoHLC"
  elif "lvRegular" in f:
    result.add "lvRegular"
  else: # also lvRegularNoHLC, could use else above, but clearer this way. Files
    result.add "lvRegularNoHLC" # without veto kind are older, therefore no HLC
  if "_fake_events.txt" in f:
    result.add "Fake"
  else:
    result.add "Real"

var df = newDataFrame()
# walk all files and determine the type
for f in walkFiles(path / "septem_veto_before_after*.txt"):
  echo "File: ", f
  let frac = parseFile(f)
  let eccCut = parseEccentricityCutoff(f)
  let typ = determineType(f)
  echo "\tFraction of events left = ", frac
  df.add toDf({"Type" : typ, "ε_cut" : eccCut, "FractionPass" : frac})
df.writeCsv("/home/basti/org/resources/septem_line_random_coincidences_ecc_cut.csv", precision = 8)
ggplot(df, aes("ε_cut", "FractionPass", color = "Type")) +
  geom_point() +
  ggtitle("Fraction of events passing line veto based on ε cutoff") +
  margin(right = 9) +
  ggsave("Figs/background/estimateSeptemVetoRandomCoinc/fraction_passing_line_veto_ecc_cut.pdf", width = 800, height = 480)
  #ggsave("/tmp/fraction_passing_line_veto_ecc_cut.pdf", width = 800, height = 480)
## XXX: we probably don't need the following plot for the real data, as the eccentricity
## cut does not cause anything to get worse at lower values. Real improvement better than
## fake coincidence rate.
#df = df.spread("Type", "FractionPass").mutate(f{float: "Ratio" ~ `Real` / `Fake`})
#ggplot(df, aes("ε_cut", "Ratio")) +
#  geom_point() +
#  ggtitle("Ratio of fraction of events passing line veto real/fake based on ε cutoff") +
#  #ggsave("Figs/background/estimateSeptemVetoRandomCoinc/ratio_real_fake_fraction_passing_line_veto_ecc_cut.pdf")
#  ggsave("/tmp/ratio_real_fake_fraction_passing_line_veto_ecc_cut.pdf")
(about the first set of files) So about 14.8% in the only septem case and 9.9% in the septem + line veto case.
- [ ] MOVE BELOW TO PROPER THESIS PART!
(About the ε cut:)
- Investigate the significantly lower fake event fraction passing.
  UPDATE: The numbers visible in the plot are MUCH LOWER than what we had previously after implementing the line veto alone!
Let's run with the equivalent of the old parameters:
LINE_VETO_KIND=lvRegular \
ECC_LINE_VETO_CUT=1.0 \
USE_REAL_LAYOUT=false \
likelihood -f ~/CastData/data/DataRuns2018_Reco.h5 \
  --h5out /t/lhood_2018_crAll_80eff_line_ecc_cutof_1.0_tight_layout_lvRegular.h5 \
  --region crAll --cdlYear 2018 \
  --cdlFile ~/CastData/data/CDL_2019/calibration-cdl-2018.h5 --lineveto --estimateRandomCoinc
-> As it turns out, this was a bug in our logic that decides which cluster is of interest to the line veto. We accidentally always deemed it interesting if the original cluster was on its own… Fixed now.
6.5. On the line veto without septem veto
When dealing with the line veto without the septem veto, there are of course multiple questions that come up.
First of all, what is the cluster we're actually targeting with our 'line'? The original cluster (OC) that passed lnL, or a hypothetical larger cluster (HLC) that was found during the septem event reconstruction?
Assuming the former, the next question is whether we want to allow an HLC to veto our OC. In a naive implementation this is precisely what happens. In the regular use case of septem veto + line veto this never matters, because an HLC would almost certainly be vetoed by the septem veto anyway, so the line veto would never have any effect on it! But without the septem veto this decision is fully up to the line veto, and the question becomes relevant. (We will implement a switch, maybe based on an environment variable or flag.)
In the latter case the tricky part is mainly identifying the correct cluster to test in order to find its center. However, this needs to be implemented anyway to avoid the HLC in the above mentioned case. With that done, we then have 3 different ways to do the line veto:
- 'regular' line veto. Every cluster checks the line to the center cluster. Without septem veto this includes HLC checking OC.
- 'regular without HLC' line veto: Lines check the OC, but the HLC is explicitly not considered.
- 'checking the HLC' line veto: In this case all clusters check the center of the HLC.
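As a rough sketch, these three kinds map naturally onto an enum; the lv* names follow the LINE_VETO_KIND values used in the snippets of this document, while the comments merely restate the list above (the actual type lives in TimepixAnalysis):

type
  LineVetoKind = enum
    lvRegular      # every cluster checks the line to the OC, including the HLC
    lvRegularNoHLC # lines check the OC, but the HLC itself is not considered
    lvCheckHLC     # all clusters check the center of the HLC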
Thoughts on LvCheckHLC:
- The radii around the new HLC become so large that in practice this won't be a very good idea I think!
- The lineVetoRejected part of the title seems to be "true" in too many cases. What's going on here? See for example "2882 and run 297" on page 31. Like, huh? My first guess is that the distance calculation is off somehow? Similar on page 33 and probably many more. Even worse is page 34: "event 30 and run 297"! -> Yeah, as it turns out the problem was just that our inRegionOfInterest check had become outdated due to our change of
- [ ] Select example events for each of the 'line veto kinds' to demonstrate their differences.
OC: Original Cluster (passing the lnL cut on the center chip). HLC: Hypothetical Large Cluster (a new cluster that the OC is part of after the septemboard reco).
Regular: is an example event in which we see the "regular" line veto without using the septem veto. Things to note:
- the black circle shows the 'radius' of the OC, not the HLC
- the OC is actually part of an HLC
- because of this, and because the HLC is a nice track, the event is vetoed not by the green track, but by the HLC itself!
This wouldn't be a problem if we also used the septem veto, as this event would already be removed due to the septem veto! (More plots: )
Regular no HLC: The reference cluster to check for is still the regular OC with the same radius. And again the OC is part of an HLC. However, in contrast to the 'regular' case, this event is not vetoed. The green and purple clusters simply don't point at the black circle and the HLC itself is not considered here. This defines the 'regular no HLC' veto. is just an example of an event that proves the method works & a nice example of a cluster barely hitting the radius of the OC. On the other hand though this is also a good example for why we should have an eccentricity cut on those clusters that we use to check for lines! The green cluster in this second event is not even remotely eccentric enough and indeed is actually part of the orange track! (More plots: )
Check HLC cluster: Is an example event where we can see how ridiculous the "check HLC" veto kind can become. There is a very large cluster that the OC is actually part of (in red). But because of that the radius is SO LARGE that it even encapsulates a whole other cluster (that technically should ideally be part of the 'lower' of the tracks!). For this reason I don't think this method is particularly useful. In other events of course it looks more reasonable, but still. There probably isn't a good way to make this work reliably. In any case though, for events that are significant in size, they would almost certainly never pass any lnL cuts anyhow. (More plots: )
The following is a broken event. The purple cluster is not used for the line veto. Why? /t/problemevent12435run297.pdf
- [X] Implement a cutoff for the eccentricity that a cluster must have in order to partake in the line veto. Currently this can only be set via an environment variable (ECC_LINE_VETO_CUT). A good value is around the 1.4 - 1.6 range I think (anything that rules out most X-ray like clusters!).
6.5.1. Note on real septemboard spacing being important extended
is an example event that shows we need to introduce the correct chip spacing for the line veto. For the septem veto it's not very important, because the distance is way more important than the angle of how things match up. But for the line veto it's essential, as can be seen in that example (note that it uses lvRegularNoHLC and no septem veto, i.e. that's why the veto is false, despite the purple HLC of course "hitting" the original cluster).
-> This has been implemented now. Activated (for now) via an environment variable USE_REAL_LAYOUT.
An example event for the spacing & the eccentricity cutoff is:
file:///home/basti/org/Figs/statusAndProgress/exampleEvents/example_event_with_line_spacing_and_ecc_cutoff.pdf
which was generated using:
LINE_VETO_KIND=lvRegularNoHLC \
ECC_LINE_VETO_CUT=1.6 \
USE_REAL_LAYOUT=true \
likelihood -f ~/CastData/data/DataRuns2018_Reco.h5 \
  --h5out /tmp/lhood_2018_crAll_80eff_line.h5 \
  --region crAll --cdlYear 2018 \
  --cdlFile ~/CastData/data/CDL_2019/calibration-cdl-2018.h5 \
  --lineveto --plotseptem
and then just extract it from the /plots/septemEvents directory. Note how the environment variables are defined inline like this!
6.5.2. Outdated: Estimation using subset of outer ring events
The text here was written when we were still bootstrapping events only from the subset of event numbers that actually have a cluster passing lnL on the center chip. This subset is of course biased, even on the outer chips. Since center clusters often come with activity on the outer chips, there are fewer events representing the cases where there isn't any activity in the center at all. This over-represents activity on the outer chips.
likelihood -f ~/CastData/data/DataRuns2018_Reco.h5 \
  --h5out /tmp/lhood_2018_crAll_80eff_septem_fake.h5 \
  --region crAll --cdlYear 2018 \
  --cdlFile ~/CastData/data/CDL_2019/calibration-cdl-2018.h5 \
  --septemveto --estimateRandomCoinc
which writes the file /tmp/septem_fake_veto.txt, which for this case is found at ./../resources/septem_veto_random_coincidences/estimates_septem_veto_random_coincidences.txt.
Mean value of: 1522.61764706.
From this file the method seems to remove typically a bit less than 500 out of 2000 bootstrapped fake events. This seems to imply a random coincidence rate of almost 25% (or effectively a reduction of further 25% in efficiency / 25% increase in dead time). Pretty scary stuff.
Of course this does not even include the line veto, which will drop it further. Let's run that:
likelihood -f ~/CastData/data/DataRuns2018_Reco.h5 \
  --h5out /tmp/lhood_2018_crAll_80eff_septem_line_fake.h5 \
  --region crAll --cdlYear 2018 \
  --cdlFile ~/CastData/data/CDL_2019/calibration-cdl-2018.h5 \
  --septemveto --lineveto --estimateRandomCoinc
which generated the following output: ./../resources/septem_veto_random_coincidences/estimates_septem_line_veto_random_coincidences.txt
Mean value of: 1373.70588235.
This comes out to a fraction of 68.68% of the events left after running the vetoes on our bootstrapped fake events. Combining it with a software efficiency of ε = 80% the total combined efficiency then would be \(ε_\text{total} = 0.8 · 0.6868 = 0.5494\), so about 55%.
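As a quick sanity check of that arithmetic in Nim (numbers taken from the two runs above):

let epsLnL = 0.8                      # lnL software efficiency
let fracLeft = 1373.70588235 / 2000.0 # mean fake events left per 2000 bootstrapped events
echo fracLeft                         # ~0.687
echo epsLnL * fracLeft                # ~0.549, i.e. about 55 % total efficiency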
7. Application of CDL data to analysis
Relevant PR: https://github.com/Vindaar/TimepixAnalysis/pull/37
All calculations and plots above so far are done with the CDL data obtained in 2014. This imposes many uncertainties on those results and is one of the reasons the vetoes explained above were only implemented so far, but are not very refined yet. Before these shortcomings are addressed, the new CDL data should be used as the basis for the likelihood method.
The idea behind using the CDL data as reference spectra is quite simple. One starts with the full spectrum of each target / filter combination of the data. From this two different "datasets" are created:
7.1. CDL calibration file
The CDL calibration file simply contains all reconstructed clusters from the CDL runs sorted by target / filter combinations.
The only addition to that is the calculation of the likelihood value dataset. For an explanation of this, see sec. 7.3 below.
7.2. X-ray reference file
This file contains our reference spectra stored as histograms. We take each target / filter combination from the above file. Then we apply the following cuts:
- cluster center in silver region (circle around chip center with \(\SI{4.5}{\mm}\) radius)
- cut on transverse RMS, see below
- cut on length, see below
- cut on min number of pixels, at least 3
- cut on total charge, see below
where the latter 4 cuts depend on the energy. The full table is shown in tab. 14.
NOTE: Due to a bug in the implementation of the total charge calculation the charge values here are actually off by about a factor of 2! New values have yet to be calculated by redoing the CDL charge reconstruction and fits.
Target | Filter | HV / \si{\kV} | Qmin / \(e^-\) | Qmax / \(e^-\) | length / mm | rmsT,min | rmsT,max |
---|---|---|---|---|---|---|---|
Cu | Ni | 15 | \num{5.9e5} | \num{1.0e6} | 7.0 | 0.1 | 1.1 |
Mn | Cr | 12 | \num{3.5e5} | \num{6.0e5} | 7.0 | 0.1 | 1.1 |
Ti | Ti | 9 | \num{2.9e5} | \num{5.5e5} | 7.0 | 0.1 | 1.1 |
Ag | Ag | 6 | \num{2.0e5} | \num{4.0e5} | 7.0 | 0.1 | 1.1 |
Al | Al | 4 | \num{5.9e4} | \num{2.1e5} | 7.0 | 0.1 | 1.1 |
Cu | EPIC | 2 | \num{7.0e4} | \num{1.3e5} | 7.0 | 0.1 | 1.1 |
Cu | EPIC | 0.9 | \num{3.0e4} | \num{8.0e4} | 7.0 | 0.1 | 1.1 |
C | EPIC | 0.6 | \num{ 0.0} | \num{5.0e4} | 6.0 | | |
Target | Filter | HV / \si{\kV} | Qcenter / \(e^-\) | Qsigma / \(e^-\) | length / mm | rmsT,min | rmsT,max |
---|---|---|---|---|---|---|---|
Cu | Ni | 15 | \num{6.63e5} | \num{7.12e4} | 7.0 | 0.1 | 1.1 |
Mn | Cr | 12 | \num{4.92e5} | \num{5.96e4} | 7.0 | 0.1 | 1.1 |
Ti | Ti | 9 | \num{4.38e5} | \num{6.26e4} | 7.0 | 0.1 | 1.1 |
Ag | Ag | 6 | \num{2.90e5} | \num{4.65e4} | 7.0 | 0.1 | 1.1 |
Al | Al | 4 | \num{1.34e5} | \num{2.33e4} | 7.0 | 0.1 | 1.1 |
Cu | EPIC | 2 | \num{7.76e4} | \num{2.87e4} | 7.0 | 0.1 | 1.1 |
Cu | EPIC | 0.9 | \num{4.17e4} | \num{1.42e4} | 7.0 | 0.1 | 1.1 |
C | EPIC | 0.6 | \num{ 0.0} | \num{1.31e4} | 6.0 | | |
After these cuts are applied and all clusters that do not pass them are thrown out, histograms are calculated for all properties according to the binnings shown in tab. 11.
name | bins | min | max |
---|---|---|---|
skewnessLongitudinal | 100 | \num{-5.05} | \num{4.85} |
skewnessTransverse | 100 | \num{-5.05} | \num{4.85} |
rmsTransverse | 150 | \num{-0.0166667} | \num{4.95} |
eccentricity | 150 | \num{0.97} | \num{9.91} |
hits | 250 | \num{-0.5} | \num{497.5} |
kurtosisLongitudinal | 100 | \num{-5.05} | \num{4.85} |
kurtosisTransverse | 100 | \num{-5.05} | \num{4.85} |
length | 200 | \num{-0.05} | \num{19.85} |
width | 100 | \num{-0.05} | \num{9.85} |
rmsLongitudinal | 150 | \num{-0.0166667} | \num{4.95} |
lengthDivRmsTrans | 150 | \num{-0.1} | \num{29.7} |
rotationAngle | 100 | \num{-0.015708} | \num{3.09447} |
energyFromCharge | 100 | \num{-0.05} | \num{9.85} |
likelihood | 200 | \num{-40.125} | \num{9.625} |
fractionInTransverseRms | 100 | \num{-0.005} | \num{0.985} |
totalCharge | 200 | \num{-6250} | \num{2.48125e+06} |
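As a sketch of how such a fixed binning is applied (hypothetical helper for illustration, not the actual implementation):

proc histogram(data: seq[float], bins: int, minV, maxV: float): seq[int] =
  ## fill a fixed-width histogram with `bins` bins in [minV, maxV)
  result = newSeq[int](bins)
  let width = (maxV - minV) / bins.float
  for x in data:
    if x < minV or x >= maxV: continue # values outside the range are dropped
    inc result[int((x - minV) / width)]

# e.g. eccentricity: 150 bins in [0.97, 9.91)
# let eccHist = histogram(eccentricities, 150, 0.97, 9.91)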
7.3. Calculation of likelihood values
With both the calibration CDL file present and the X-ray reference file present, we can complete the process required to use the new CDL data for the analysis by calculating the likelihood values for all clusters found in the calibration CDL file.
This works according to the following idea:
- choose the correct energy bin for a cluster (its energy is calculated from the total charge) according to tab. 12 and get its X-ray reference histogram
- calculate the log likelihood value for the cluster's eccentricity under the reference spectrum
- add logL value for length / RMS transverse
- add logL value for fraction in transverse RMS
- invert value
where the likelihood is just calculated according to (ref: https://github.com/Vindaar/seqmath/blob/master/src/seqmath/smath.nim#L845-L867)
proc likelihood(hist: seq[float], val: float, bin_edges: seq[float]): float =
  let ind = bin_edges.lowerBound(val).int
  if ind < hist.len:
    result = hist[ind].float / hist.sum.float
  else:
    result = 0

proc logLikelihood(hist: seq[float], val: float, bin_edges: seq[float]): float =
  let lhood = likelihood(hist, val, bin_edges)
  if lhood <= 0:
    result = NegInf
  else:
    result = ln(lhood)
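Building on that snippet, a sketch of how the three contributions listed above combine into the final value of one cluster (the HistTuple type and parameter names are illustrative assumptions):

type HistTuple = tuple[hist: seq[float], binEdges: seq[float]]

proc calcLogL(ecc, ldivRms, fracRms: float,
              eccRef, ldivRef, fracRef: HistTuple): float =
  ## sum the logL contributions of the three properties and invert the sign,
  ## so that more X-ray like clusters get smaller values
  result += logLikelihood(eccRef.hist, ecc, eccRef.binEdges)
  result += logLikelihood(ldivRef.hist, ldivRms, ldivRef.binEdges)
  result += logLikelihood(fracRef.hist, fracRms, fracRef.binEdges)
  result = -result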
Target | Filter | HV / \si{\kV} | min Energy / \si{\keV} | max Energy / \si{\keV} |
---|---|---|---|---|
Cu | Ni | 15 | 6.9 | ∞ |
Mn | Cr | 12 | 4.9 | 6.9 |
Ti | Ti | 9 | 3.2 | 4.9 |
Ag | Ag | 6 | 2.1 | 3.2 |
Al | Al | 4 | 1.2 | 2.1 |
Cu | EPIC | 2 | 0.7 | 1.2 |
Cu | EPIC | 0.9 | 0.4 | 0.7 |
C | EPIC | 0.6 | 0.0 | 0.4 |
The result is a likelihood dataset for each target/filter combination. This is now the foundation to determine the cut values on the logL values we wish to use for one energy bin. For that we obviously do not wish to use the raw likelihood dataset. Instead we apply both the cuts previously mentioned, which are used to generate the X-ray reference spectra (tab. 14), and in addition some more cuts, which filter out further unphysical single clusters, see tab. 15.
All clusters which pass these combined cuts are added to our likelihood distribution for each target/filter combination.
These are then binned into 200 bins in a range from \(\numrange{0.0}{30.0}\) of the logL values. Finally the cut value is determined by demanding an \(\SI{80}{\percent}\) software efficiency. Our assumption is that for each target/filter combination the distribution created by the listed cuts is essentially "background free". Then the signal efficiency is simply the ratio of accepted values divided by the total number of entries in the histogram. The actual calculation is:
proc determineCutValue(hist: seq[float], efficiency: float): int =
  var
    cur_eff = 0.0
    last_eff = 0.0
  let hist_sum = hist.sum.float
  while cur_eff < efficiency:
    inc result
    last_eff = cur_eff
    cur_eff = hist[0..result].sum.float / hist_sum
where the input is the described cleaned likelihood histogram and the result is the bin index corresponding to an \(\SI{80}{\percent}\) signal efficiency below the index (based on the fact that we accept all values smaller than the logL value corresponding to that index in the likelihood distribution). Relevant code: https://github.com/Vindaar/TimepixAnalysis/blob/master/Analysis/ingrid/likelihood.nim#L200-L211
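Hypothetical usage, to make the relation between bin index and cut value explicit (assuming hist holds the 200-bin logL histogram over the range (0, 30)):

let binWidth = 30.0 / 200.0
let cutBin = determineCutValue(hist, efficiency = 0.8)
let cutValue = (cutBin.float + 1.0) * binWidth # logL value at the upper edge of the cut bin
# all clusters with a logL value below cutValue are accepted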
With these in place the usage of the 2019 CDL data is done.
The background rate for the gold region comparing the 2014 CDL data with the 2019 CDL data then is shown in fig. 62
As can be seen the behavior of the background rate for the 2019 CDL data is somewhat smoother, while roughly the same background rate is recovered.
7.4. 2014 CDL Dataset description
The original calibration CDL file used by Christoph (only converted to H5 from the original ROOT file) is found at:
./../../CastData/ExternCode/TimepixAnalysis/resources/calibration-cdl.h5
and the X-ray reference file:
./../../CastData/ExternCode/TimepixAnalysis/resources/XrayReferenceDataSet.h5
Using the ./../../CastData/ExternCode/TimepixAnalysis/Analysis/ingrid/cdl_spectrum_creation.nim tool we can recreate this file from the raw data.
However, in order to do that, we need access to the raw data of the 2014 CDL runs and know which of those runs belongs to which target filter combination.
Fortunately, we have access to both the .xlsx file describing the different runs during the CDL data taking: ./../../CastData/ExternCode/TimepixAnalysis/resources/CDL-Apr14-D03-W0063-Runlist_stick.xlsx or https://github.com/Vindaar/TimepixAnalysis/blob/master/resources/CDL-Apr14-D03-W0063-Runlist_stick.xlsx.
By looking both at the "Comment" column and the "Run ok" column, as well as the names of the runs, we can glean first insights into which runs are used.
A different way to look at it (a good cross check) is to use Christoph's actual folders from his computer. A copy is found in: ./../../../../mnt/1TB/CAST/CDL-reference/ and each subdirectory for each target filter combination contains symbolic links to the used runs (from which we can determine the run number):
cd /mnt/1TB/CAST/CDL-reference
for f in *kV; do ls $f/reco/*.root; done
Based on these two approaches the accepted runs were sorted by their target filter combination and live here: ./../../../../mnt/1TB/CAST/2014_15/CDL_Runs_raw/
cd /mnt/1TB/CAST/2014_15/CDL_Runs_raw
#for f in *kV; do ls -lh $f; done
tree -d -L 2
These directories can then be easily used to recreate the calibration-cdl.h5 file and XrayReferenceFile.h5.
Finally, in order to actually create the files using cdl_spectrum_creation.nim, we need a table that contains each run number and the corresponding target filter kind, akin to the following file for the 2019 CDL data:
https://github.com/Vindaar/TimepixAnalysis/blob/master/resources/cdl_runs_2019.org
Let's create such a file from the above mentioned directories. The important thing is to use the exact same layout as the cdl_runs_2019.org file so that we don't have to change the parsing depending on the year of CDL data.
UPDATE: Indeed, Hendrik already created such a file and sent it to me. It now lives at ./../../CastData/ExternCode/TimepixAnalysis/resources/cdl_runs_2014.html. The code here will remain as a way to generate that file (although it's not actually done).
import os, sequtils, strutils, strformat
import ingrid / cdl_spectrum_creation

const path = "/mnt/1TB/CAST/2014_15/CDL_Runs_raw"

proc readDir(path: string): seq[string] =
  ## reads a calibration-cdl-* directory and returns a sequence of correctly
  ## formatted lines for the resulting Org table
  for (pc, path) in walkDir(path):
    echo path

var lines: seq[string]
for (pc, path) in walkDir(path):
  case pc
  of pcDir:
    let dirName = extractFilename path
    if dirName.startsWith "calibration-cdl":
      lines.add readDir(path)
  else: discard
Now we need to generate a H5 file that contains all calibration runs, which we can use as a base.
So first let's create links for all runs:
cd /mnt/1TB/CAST/2014_15/CDL_Runs_raw
mkdir all_cdl_runs
cd all_cdl_runs
# generate symbolic links to all runs
for dir in ../calibration-cdl-apr2014-*; do
  for f in $dir/*; do ln -s $f `basename $f`; done;
done
And now run through raw + reco:
raw_data_manipulation all_cdl_runs --runType xray --out calibration-cdl-apr2014_raw.h5 --ignoreRunList
reconstruction calibration-cdl-apr2014_raw.h5 --out calibration-cdl-apr2014_reco.h5
reconstruction calibration-cdl-apr2014_reco.h5 --only_charge
reconstruction calibration-cdl-apr2014_reco.h5 --only_gas_gain
reconstruction calibration-cdl-apr2014_reco.h5 --only_energy_from_e
Now we're done with our input file for the CDL creation.
cdl_spectrum_creation calibration-cdl-apr2014_reco.h5 --cutcdl
cdl_spectrum_creation calibration-cdl-apr2014_reco.h5 --genCdlFile --year=2014
cdl_spectrum_creation calibration-cdl-apr2014_reco.h5 --genRefFile --year=2014
And that's it.
7.5. Comment on confusion between CDL / Ref file & cuts
This section is simply a comment on the relation between the CDL data file, the X-ray reference file and the different cuts, because every time I don't look at this for a while I end up confused again.
Files:
- calibration CDL data file / calibration-cdl.h5 / cdlFile
- X-ray reference file / XrayReferenceFile.h5 / refFile
Cuts:
- X-ray cleaning cuts / getXrayCleaningCuts, tab. 15
- CDL reference cuts / getEnergyBinMinMaxVals201*, tab. 14.
Usage in likelihood.nim:
- buildLogLHist: receives both cdlFile and refFile. However: the refFile is only used to call calcLikelihoodDataset as a fallback in the case where the given cdlFile does not yet have logL values computed (which only happens when the CDL file is first generated from cdl_spectrum_generation.nim). The buildLogLHist procedure builds two sequences:
  - the logL values of all clusters of the CDL data for one target/filter combination which pass both sets of cuts mentioned above.
  - the corresponding energy values of these clusters.
- calcCutValueTab: receives both cdlFile and refFile. Computes the cut values used for each target/filter combination (or, once morphing is implemented, for each energy). The procedure uses buildLogLHist to get all valid clusters (that pass both of the above mentioned cuts!) and computes the histogram of those values. These are then the logL distributions from which a cut value is computed by looking for the ε (default 80%) value in the CDF (cumulative distribution function).
- calcLogLikelihood: receives both cdlFile and refFile. It computes the actual logL values of each cluster in the input file to which the logL cut is to be applied. Calls calcLikelihoodDataset internally, which actually uses the refFile, as well as writeLogLDsetAttributes.
- writeLogLDsetAttributes: takes both cdlFile and refFile and simply adds the names of the used cdlFile and refFile to the input H5 file.
- calcLikelihoodDataset: only takes the refFile. Computes the logL value for each cluster in the input H5 file.
- calcLikelihoodForEvent: takes the refFile indirectly (as data from calcLikelihoodDataset). Computes the logL value for each cluster explicitly.
- filterClustersByLogL: takes both cdlFile and refFile. Performs the application of the logL cuts. Mainly calls calcCutValueTab and uses it to perform the filtering (plus additional vetoes etc.)
All of this implies the following:
- The refFile is only used to compute the logL values for each cluster. That's what's meant by reference distribution. It only considers the CDL cuts, i.e. cuts to clean out clusters unlikely to be X-rays from the set of CDL data, by filtering to the peaks of the CDL data.
- The cdlFile is used to compute the logL distributions and their cut values for each target/filter combination. It uses both sets of cuts. The logL distributions, which are used to determine the ε efficiency, are from the cdlFile!
File | Uses X-ray cleaning cuts | Uses CDL reference cuts | Purpose |
---|---|---|---|
refFile | false | true | used to compute the logL values of each cluster |
cdlFile | true | true | used to compute the logL distributions, which are then used to compute the cut values given a certain signal efficiency, which in turn decide whether a given input cluster is X-ray like or not |
In a sense we have the following branching situation:
- CDL raw data:
  - -> CDL cuts -> binning by predetermined bin edges -> X-ray reference spectra. Used to compute the logL values of each cluster, because each spectrum (for each observable) is used to determine the likelihood value of each property.
  - -> CDL cuts + X-ray cleaning cuts -> gather the logL values of all clusters passing these cuts (logL values are computed using the reference spectra above!) and bin them into 200 bins in (0, 30) (logL value) to get the logL distributions. Look at the CDF of the logL distributions to determine cut values requiring a specific signal efficiency (by default 80%).
7.5.1. What does this imply for section 22 on CDL morphing?
It means the morphing we computed in cdlMorphing.nim in practice does not actually have to be applied to the studied distributions, but rather to those with the CDL cuts applied!
This is a bit annoying, to be honest.
To make this a bit more palatable, let's extract the buildLogLHist procedure into its own module, so that we can more easily check what the data looks like in comparison to the distributions we looked at before.
NOTE: Ok, my brain is starting to digest what this really implies. Namely it means that the interpolation has to be done in 2 stages.
- interpolate the X-ray reference spectra in the same way as we have implemented for the CDL morphing, and compute each cluster's logL value based on that interpolation.
- Perform not an interpolation on the logL input variables (eccentricity, …) but on the final logL distributions.
In a sense one could do either of these independently. Number 2 seems easier to implement, because it applies the interpolation logic to the logL histograms in buildLogLHist directly and computes the cut values from each interpolated distribution.
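A minimal sketch of what number 2 could look like, assuming two neighboring logL histograms with identical binning and a purely linear blend in energy (not the actual implementation):

proc morphLogLHist(histLow, histHigh: seq[float],
                   eLow, eHigh, energy: float): seq[float] =
  ## linearly interpolate two neighboring logL distributions, bin by bin
  doAssert histLow.len == histHigh.len
  let t = (energy - eLow) / (eHigh - eLow) # 0 at eLow, 1 at eHigh
  result = newSeq[float](histLow.len)
  for i in 0 ..< result.len:
    result[i] = (1.0 - t) * histLow[i] + t * histHigh[i]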
Then in another step we can later perform the interpolation of the logL variable distributions as done while talking about the CDL morphing in the first place. For that we have to modify calcLikelihoodForEvent (and parents of course) to not receive a tuple[ecc, ldivRms, fracRms: Table[string: histTuple]] but rather a more abstract interpolator type that stores internally a number (~1000) of different, morphed distributions for each energy and picks the correct one when asked in the call to logLikelihood.
The thing that makes number 1 so annoying is that it means the logL dataset needs to be recomputed not only for the actual data files, but also for the calibration-cdl.h5 file.
7.5.2. Applying interpolation to logL distributions
Before we apply any kind of interpolation, it seems important to
visualize what the logL distributions actually look like again.
See the discussion in sec. 22.5.
7.6. Explanation of CDL datasets for Klaus
This section contains an explanation I wrote for Klaus trying to clarify what the difference between all the different datasets and cuts is.
7.6.1. Explanation for Klaus: CDL data and reference spectra
One starts from the raw data taken in the CAST detector lab. After selecting the data runs that contain useful information we are left with what we will call "raw CDL data" in the following.
This raw CDL data is stored in a file called calibration-cdl.h5, an HDF5 file that went through the general TPA pipeline, so that all clusters are selected and the geometric properties and energy of each cluster are computed.
From this file we compute the so called "X-ray reference spectra". These reference spectra define the likelihood reference distributions for each observable:
- eccentricity
- cluster length / transverse RMS
- fraction of pixels in a circle of radius 'transverse RMS' around the cluster center
These spectra are stored in the XrayReferenceFile.h5 file.
This file is generated as follows:
- take the calibration-cdl.h5 file
- apply the cuts of tab. 14 to filter clusters passing these cuts
- compute histograms of the remaining clusters according to predefined bin ranges and bin widths (based on Christoph's work)
Target | Filter | HV / \si{\kV} | Qcenter / \(e^-\) | Qsigma / \(e^-\) | length / mm | rmsT,min | rmsT,max |
---|---|---|---|---|---|---|---|
Cu | Ni | 15 | \num{6.63e5} | \num{7.12e4} | 7.0 | 0.1 | 1.1 |
Mn | Cr | 12 | \num{4.92e5} | \num{5.96e4} | 7.0 | 0.1 | 1.1 |
Ti | Ti | 9 | \num{4.38e5} | \num{6.26e4} | 7.0 | 0.1 | 1.1 |
Ag | Ag | 6 | \num{2.90e5} | \num{4.65e4} | 7.0 | 0.1 | 1.1 |
Al | Al | 4 | \num{1.34e5} | \num{2.33e4} | 7.0 | 0.1 | 1.1 |
Cu | EPIC | 2 | \num{7.76e4} | \num{2.87e4} | 7.0 | 0.1 | 1.1 |
Cu | EPIC | 0.9 | \num{4.17e4} | \num{1.42e4} | 7.0 | 0.1 | 1.1 |
C | EPIC | 0.6 | \num{ 0.0} | \num{1.31e4} | 6.0 | | |
These are the spectra we have looked at when talking about the CDL morphing.
This file however is not used to derive the actual logL distributions and therefore not to determine the cut values on said distributions.
Instead, to compute the logL distributions we take the calibration-cdl.h5 file again.
This is now done as follows:
- make sure the calibration-cdl.h5 file already has logL values for each cluster computed. If not, use the XrayReferenceFile.h5 to compute the logL values for each cluster.
- apply the cuts of tab. 14 to the clusters (now we have selected the same clusters as contained in XrayReferenceFile.h5)
- in addition apply the cuts of tab. 15 to further remove clusters that could be background events in the raw CDL data. Note that some of these cuts overlap with the previous cuts. Essentially it's a slightly stricter cut on the transverse RMS and an additional cut on the cluster eccentricity.
- gather the logL values of all remaining clusters
- compute a histogram given:
  - 200 bins in the range (0, 30) of logL values
The resulting distribution is the logL distribution that is then used to compute a cut value for a specified signal efficiency by scanning the CDF for the corresponding value.
Target | Filter | line | HV | length | rmsTmin | rmsTmax | eccentricity |
---|---|---|---|---|---|---|---|
Cu | Ni | \(\ce{Cu}\) \(\text{K}_{\alpha}\) | 15 | | 0.1 | 1.0 | 1.3 |
Mn | Cr | \(\ce{Mn}\) \(\text{K}_{\alpha}\) | 12 | | 0.1 | 1.0 | 1.3 |
Ti | Ti | \(\ce{Ti}\) \(\text{K}_{\alpha}\) | 9 | | 0.1 | 1.0 | 1.3 |
Ag | Ag | \(\ce{Ag}\) \(\text{L}_{\alpha}\) | 6 | 6.0 | 0.1 | 1.0 | 1.4 |
Al | Al | \(\ce{Al}\) \(\text{K}_{\alpha}\) | 4 | | 0.1 | 1.1 | 2.0 |
Cu | EPIC | \(\ce{Cu}\) \(\text{L}_{\alpha}\) | 2 | | 0.1 | 1.1 | 2.0 |
Cu | EPIC | \(\ce{O }\) \(\text{K}_{\alpha}\) | 0.9 | | 0.1 | 1.1 | 2.0 |
C | EPIC | \(\ce{C }\) \(\text{K}_{\alpha}\) | 0.6 | 6.0 | 0.1 | 1.1 | |
These logL distributions are shown in fig. 63.
So in the end linear interpolation had to be implemented in 2 different places:
- between the different distributions of the reference spectra for all three logL variables
- between the different logL distributions
7.6.2. Aside: Fun bug
The following plot cost me a few hours of debugging:
7.7. Extraction of CDL data to CSV
For Tobi I wrote a mini script ./../../CastData/ExternCode/TimepixAnalysis/Tools/cdlH5ToCsv.nim, which extracts the CDL data (after the CDL cuts are applied, i.e. "cleaning cuts") and stores them in CSV files.
These are the datasets as they are created in cdl_spectrum_creation.nim in cutAndWrite after:
let passIdx = cutOnProperties(h5f,
                              grp,
                              cut.cutTo,
                              ("rmsTransverse", cut.minRms, cut.maxRms),
                              ("length", 0.0, cut.maxLength),
                              ("hits", cut.minPix, Inf),
                              ("eccentricity", 0.0, cut.maxEccentricity))
8. FADC
For FADC info see the thesis.
FADC manual: https://archive.org/details/manualzilla-id-5646050/ and
8.1. Pedestal [/]
Initially the pedestal data was used from the single pedestal run we took before the first CAST data taking.
The below was initially written for the thesis.
- [ ] INSERT PLOTS OF COMPARISON OF OLD PEDESTAL AND NEW PEDESTAL!!!
8.2. Rise and fall times of data
Let's look at the rise and fall times of FADC data, comparing the ⁵⁵Fe data with background data to understand where one might put cuts. In sec. 6.1 we already looked at this years ago, but for the thesis we need new plots that are reproducible and that verify the cuts we use make sense (hint: they don't).
The following is just a small script to generate plots comparing these.
import nimhdf5, ggplotnim
import std / [strutils, os, sequtils, sets, strformat]
import ingrid / [tos_helpers, ingrid_types]
import ingrid / calibration / [calib_fitting, calib_plotting]
import ingrid / calibration

proc plotFallTimeRiseTime(df: DataFrame, suffix: string, riseTimeHigh: float) =
  ## Given a full run of FADC data, create the
  ## Note: it may be sensible to compute a truncated mean instead
  # local copy filtered to maximum allowed rise time
  let df = df.filter(f{`riseTime` <= riseTimeHigh})
  proc plotDset(dset: string) =
    let dfCalib = df.filter(f{`Type` == "⁵⁵Fe"})
    echo "============================== ", dset, " =============================="
    echo "Percentiles:"
    echo "\t 1-th: ", dfCalib[dset, float].percentile(1)
    echo "\t 5-th: ", dfCalib[dset, float].percentile(5)
    echo "\t50-th: ", dfCalib[dset, float].percentile(50)
    echo "\t mean: ", dfCalib[dset, float].mean
    echo "\t95-th: ", dfCalib[dset, float].percentile(95)
    echo "\t99-th: ", dfCalib[dset, float].percentile(99)
    ggplot(df, aes(dset, fill = "Type")) +
      geom_histogram(position = "identity", bins = 100, hdKind = hdOutline, alpha = 0.7) +
      ggtitle(&"FADC signal {dset} in ⁵⁵Fe vs background data in $#" % suffix) +
      xlab(dset & " [ns]") +
      ggsave(&"Figs/statusAndProgress/FADC/fadc_{dset}_signal_vs_background_$#.pdf" % suffix)
    ggplot(df, aes(dset, fill = "Type")) +
      geom_density(normalize = true, alpha = 0.7, adjust = 2.0) +
      ggtitle(&"FADC signal {dset} in ⁵⁵Fe vs background data in $#" % suffix) +
      xlab(dset & " [ns]") +
      ggsave(&"Figs/statusAndProgress/FADC/fadc_{dset}_kde_signal_vs_background_$#.pdf" % suffix)
  plotDset("fallTime")
  plotDset("riseTime")

  when false:
    let dfG = df.group_by("runNumber")
      .summarize(f{float: "riseTime" << truncMean(col("riseTime").toSeq1D, 0.05)},
                 f{float: "fallTime" << truncMean(col("fallTime").toSeq1D, 0.05)})
    ggplot(dfG, aes(runNumber, riseTime, color = fallTime)) +
      geom_point() +
      ggtitle("Comparison of FADC signal rise times in ⁵⁵Fe data for all runs in $#" % suffix) +
      ggsave("Figs/statusAndProgress/FADC/fadc_mean_riseTime_$#.pdf" % suffix)
    ggplot(dfG, aes(runNumber, fallTime, color = riseTime)) +
      geom_point() +
      ggtitle("Comparison of FADC signal fall times in ⁵⁵Fe data for all runs in $#" % suffix) +
      ggsave("Figs/statusAndProgress/FADC/fadc_mean_fallTime_$#.pdf" % suffix)

template toEDF*(data: seq[float], isCumSum = false): untyped =
  ## Computes the EDF of binned data
  var dataCdf = data
  if not isCumSum:
    seqmath.cumsum(dataCdf)
  let integral = dataCdf[^1]
  let baseline = min(data) # 0.0
  dataCdf.mapIt((it - baseline) / (integral - baseline))

import numericalnim / interpolate
import arraymancer

proc plotROC(dfB, dfC: DataFrame, suffix: string) =
  # 1. compute cumulative sum from each type of data that is binned in the same way
  # 2. plot cumsum, (1 - cumsum)
  when false:
    proc toInterp(df: DataFrame): InterpolatorType[float] =
      let data = df["riseTime", float].toSeq1D.sorted
      let edf = toEdf(data)
      ggplot(toDf(data, edf), aes("data", "edf")) +
        geom_line() +
        ggsave("/tmp/test_edf.pdf")
      result = newLinear1D(data, edf)
    let interpS = toInterp(dfC)
    let interpB = toInterp(dfB)
    proc doit(df: DataFrame) =
      let data = df["riseTime", float]
      let xs = linspace(data.min, data.max, 1000)
      let kde = kde(data)
  proc eff(data: seq[float], val: float, isBackground: bool): float =
    let cutIdx = data.lowerBound(val)
    result = cutIdx.float / data.len.float
    if isBackground:
      result = 1.0 - result
  let dataB = dfB["riseTime", float].toSeq1D.sorted
  let dataC = dfC["riseTime", float].toSeq1D.sorted
  var xs = newSeq[float]()
  var ysC = newSeq[float]()
  var ysB = newSeq[float]()
  var ts = newSeq[string]()
  for i in 0 ..< 200: # rise time
    xs.add i.float
    ysC.add dataC.eff(i.float, isBackground = false)
    ysB.add dataB.eff(i.float, isBackground = true)
  let df = toDf(xs, ysC, ysB)
  ggplot(df, aes("ysC", "ysB")) +
    geom_line() +
    ggtitle("ROC curve of FADC rise time cut (only upper), ⁵⁵Fe vs. background in $#" % suffix) +
    xlab("Signal efficiency [%]") + ylab("Background suppression [%]") +
    ggsave("Figs/statusAndProgress/FADC/fadc_rise_time_roc_curve.pdf", width = 800, height = 480)
  let dfG = df.gather(["ysC", "ysB"], "ts", "ys")
  ggplot(dfG, aes("xs", "ys", color = "ts")) +
    geom_line() +
    xlab("Rise time [clock cycles]") +
    ylab("Signal efficiency / background suppression [%]") +
    ggsave("Figs/statusAndProgress/FADC/fadc_rise_time_efficiencies.pdf", width = 800, height = 480)

proc read(fname, typ: string, eLow, eHigh: float): DataFrame =
  var h5f = H5open(fname, "r")
  let fileInfo = h5f.getFileInfo()
  var peakPos = newSeq[float]()
  result = newDataFrame()
  for run in fileInfo.runs:
    if recoBase() & $run / "fadc" notin h5f: continue # skip runs that were without FADC
    var df = h5f.readRunDsets(
      run,
      #chipDsets = some((chip: 3, dsets: @["eventNumber"])), # XXX: causes problems?? Removes some FADC data
      # but not due to events!
      fadcDsets = @["eventNumber", "baseline", "riseStart", "riseTime",
                    "fallStop", "fallTime", "minvals", "argMinval"]
    )
    # in calibration case filter to
    if typ == "⁵⁵Fe":
      let xrayRefCuts = getXrayCleaningCuts()
      let cut = xrayRefCuts["Mn-Cr-12kV"]
      let grp = h5f[(recoBase() & $run / "chip_3").grp_str]
      let passIdx = cutOnProperties(
        h5f,
        grp,
        crSilver, # try cutting to silver
        (toDset(igRmsTransverse), cut.minRms, cut.maxRms),
        (toDset(igEccentricity), 0.0, cut.maxEccentricity),
        (toDset(igLength), 0.0, cut.maxLength),
        (toDset(igHits), cut.minPix, Inf),
        (toDset(igEnergyFromCharge), eLow, eHigh)
      )
      let dfChip = h5f.readRunDsets(run, chipDsets = some((chip: 3, dsets: @["eventNumber"])))
      let allEvNums = dfChip["eventNumber", int]
      let evNums = passIdx.mapIt(allEvNums[it]).toSet
      df = df.filter(f{int: `eventNumber` in evNums})
    df["runNumber"] = run
    result.add df
  result["Type"] = typ
  echo result

proc main(back, calib: string, year: int,
          energyLow = 0.0, energyHigh = Inf,
          riseTimeHigh = Inf) =
  let is2017 = year == 2017
  let is2018 = year == 2018
  if not is2017 and not is2018:
    raise newException(IOError, "The input file is neither clearly a 2017 nor 2018 calibration file!")
  let yearToRun = if is2017: 2 else: 3
  let suffix = "Run-$#" % $yearToRun
  var df = newDataFrame()
  let dfC = read(calib, "⁵⁵Fe", energyLow, energyHigh)
  let dfB = read(back, "Background", energyLow, energyHigh)
  plotROC(dfB, dfC, suffix)
  df.add dfC
  df.add dfB
  plotFallTimeRiseTime(df, suffix, riseTimeHigh)

when isMainModule:
  import cligen
  dispatch main
UPDATE: See the subsection below for updated plots.
When looking at these fall and rise time plots:
we can clearly see there is something like a "background" or an offset that is very flat under both the signal and background data (in Run-2 and Run-3).
Let's see what this might be using plotData, looking at event displays of clusters that pass the following requirements:
- X-ray cleaning cuts
- fall time < 400 (from there we clearly don't see anything that should be real in calibration data)
- energies around the escape peak (not strictly needed)
NOTE: This should not have been run with --chips 3!
plotData --h5file ~/CastData/data/CalibrationRuns2017_Reco.h5 \
  --runType rtCalibration \
  --chips 3 \
  --config ~/CastData/ExternCode/TimepixAnalysis/Plotting/karaPlot/config.toml \
  --eventDisplay -1 \
  --cuts '("rmsTransverse", 0.1, 1.1)' \
  --cuts '("eccentricity", 0.0, 1.3)' \
  --cuts '("energyFromCharge", 2.5, 3.5)' \
  --cuts '("fadc/fallTime", 0.0, 400.0)' \
  --region crSilver \
  --applyAllCuts \
  --septemboard
The --septemboard flag activates plotting of the full septemboard layout with FADC data on the side.
See these events here:
By looking at them intently, we can easily recognize what the issue is: see for example fig. , looking at the fallStop (not the fall time). It is at 820. What else is at 820? The 0 register we set from the first few entries of the raw data files!
Comparing this with other events in the file proves that this is indeed the reason. So, time to fix the calculation of the rise and fall time by making it a bit more robust:
import nimhdf5, ggplotnim
import std / [strutils, os, sequtils]
import ingrid / [tos_helpers, fadc_helpers, ingrid_types, fadc_analysis]

proc stripPrefix(s, p: string): string =
  result = s
  result.removePrefix(p)

proc plotIdx(df: DataFrame, fadcData: Tensor[float], idx: int) =
  let xmin = df["argMinval", int][idx]
  let xminY = df["minvals", float][idx]
  let xminlineX = @[xmin, xmin] # one point for x of min, max
  let fData = fadcData[idx, _].squeeze
  let xminlineY = linspace(fData.min, fData.max, 2)
  let riseStart = df["riseStart", int][idx]
  let fallStop = df["fallStop", int][idx]
  let riseStartX = @[riseStart, riseStart]
  let fallStopX = @[fallStop, fallStop]
  let baseline = df["baseline", float][idx]
  let baselineY = @[baseline, baseline]
  let df = toDf({ "x" : toSeq(0 ..< 2560),
                  "baseline" : baseline,
                  "data" : fData,
                  "xminX" : xminlineX,
                  "xminY" : xminlineY,
                  "riseStart" : riseStartX,
                  "fallStop" : fallStopX })
  # Comparison has to be done by hand unfortunately
  let path = "/t/fadc_spectrum_baseline.pdf"
  ggplot(df, aes("x", "data")) +
    geom_line() +
    geom_point(color = color(0.1, 0.1, 0.1, 0.1)) +
    geom_line(aes = aes("x", "baseline"), color = "blue") +
    geom_line(data = df.head(2), aes = aes("xminX", "xminY"), color = "red") +
    geom_line(data = df.head(2), aes = aes("riseStart", "xminY"), color = "green") +
    geom_line(data = df.head(2), aes = aes("fallStop", "xminY"), color = "pink") +
    ggtitle("riseStart: " & $riseStart & ", fallStop: " & $fallStop) +
    ggsave(path)

proc getFadcData(fadcRun: ProcessedFadcRun) =
  let ch0 = getCh0Indices()
  let fadc_ch0_indices = getCh0Indices()
  let
    # we demand at least 4 dips, before we can consider an event as noisy
    n_dips = 4
    # the percentile considered for the calculation of the minimum
    min_percentile = 0.95
    numFiles = fadcRun.eventNumber.len
  var fData = ReconstructedFadcRun(
    fadc_data: newTensorUninit[float]([numFiles, 2560]),
    eventNumber: fadcRun.eventNumber,
    noisy: newSeq[int](numFiles),
    minVals: newSeq[float](numFiles)
  )
  let pedestal = getPedestalRun(fadcRun)
  for i in 0 ..< fadcRun.eventNumber.len:
    let slice = fadcRun.rawFadcData[i, _].squeeze
    let data = slice.fadcFileToFadcData(
      pedestal,
      fadcRun.trigRecs[i],
      fadcRun.settings.postTrig,
      fadcRun.settings.bitMode14,
      fadc_ch0_indices
    ).data
    fData.fadc_data[i, _] = data.unsqueeze(axis = 0)
    fData.noisy[i] = data.isFadcFileNoisy(n_dips)
    fData.minVals[i] = data.calcMinOfPulse(min_percentile)
  let recoFadc = calcRiseAndFallTime(
    fData.fadcData,
    false
  )
  let df = toDf({ "baseline" : recoFadc.baseline,
                  "argMinval" : recoFadc.xMin.mapIt(it.float),
                  "riseStart" : recoFadc.riseStart.mapIt(it.float),
                  "fallStop" : recoFadc.fallStop.mapIt(it.float),
                  "riseTime" : recoFadc.riseTime.mapIt(it.float),
                  "fallTime" : recoFadc.fallTime.mapIt(it.float),
                  "minvals" : fData.minvals })
  for idx in 0 ..< df.len:
    plotIdx(df, fData.fadc_data, idx)
    sleep(1000)

proc main(fname: string, runNumber: int) =
  var h5f = H5open(fname, "r")
  let fileInfo = h5f.getFileInfo()
  for run in fileInfo.runs:
    if run == runNumber:
      let fadcRun = h5f.readFadcFromH5(run)
      fadcRun.getFadcData()

when isMainModule:
  import cligen
  dispatch main
Based on this we've now implemented the following changes:
- instead of median + 0.1 · max: truncated mean of 30-th to 95-th percentile
- instead of times to exact baseline, go to baseline - 2.5%
- do not compute threshold based on individual value, but on a moving average of window size 5
- Also: use all registers and do not set first two registers to 0!
These should fix the "offsets" seen in the rise/fall time histograms/kdes.
The actual spectra that come out of the code haven't really changed in the cases where it already worked (slightly more accurate baseline, and rise/fall times measured not to the baseline itself but slightly below it; those are details), but the broken cases are now fixed.
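As a sketch of the first bullet above, the truncated-mean baseline idea (hypothetical helper; the actual code lives in the FADC reconstruction of TPA):

import std / [algorithm, stats]

proc truncatedBaseline(data: seq[float]): float =
  ## mean of the register values between the 30-th and 95-th percentile,
  ## robust against both the signal dip and single broken registers
  let sorted = data.sorted
  let lo = int(0.30 * sorted.len.float)
  let hi = int(0.95 * sorted.len.float)
  result = sorted[lo ..< hi].mean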
An example event after the fixes is:
- EXPLANATION FOR FLAT BACKGROUND IN RISE / FALL TIME: The "dead" register causes our fall / rise time calculation to break! This leads to a 'background' of homogeneous rise / fall times -> THIS NEEDS TO BE FIXED FIRST!!
8.2.1. Updated look at rise/fall time data (signal vs background) after FADC fixes [/]
NOTE: The plots shown here are still not the final ones. More FADC algorithm changes were done afterwards; refer to the improved_rise_fall_algorithm plots with a 10percent_top_offset suffix and the sections below, in particular sec. 8.3.
The 10 percent top offset was deduced from this section: 8.2.2.1.6.
Let's recompile and rerun the /tmp/fadc_rise_fall_signal_vs_background.nim code.
We reran the whole analysis chain by doing:
cd $TPA/Analysis/ingrid
./runAnalysisChain -i ~/CastData/data \
  --outpath ~/CastData/data \
  --years 2017 --years 2018 \
  --calib --back \
  --reco
which regenerated all the files:
- ./../../CastData/data/CalibrationRuns2017_Reco.h5
- ./../../CastData/data/CalibrationRuns2018_Reco.h5
- ./../../CastData/data/DataRuns2017_Reco.h5
- ./../../CastData/data/DataRuns2018_Reco.h5
(the old ones have a suffix *_old_fadc_rise_fall_times)
For completeness sake, let's reproduce the old and the new plots together, starting with the old:
cd /tmp/
mkdir OldPlots
cd OldPlots
/tmp/fadc_rise_fall_signal_vs_background -b ~/CastData/data/DataRuns2017_Reco_old_fadc_rise_fall_time.h5 \
  -c ~/CastData/data/CalibrationRuns2017_Reco_old_fadc_rise_fall_time.h5 \
  --year 2017
/tmp/fadc_rise_fall_signal_vs_background -b ~/CastData/data/DataRuns2018_Reco_old_fadc_rise_fall_time.h5 \
  -c ~/CastData/data/CalibrationRuns2018_Reco_old_fadc_rise_fall_time.h5 \
  --year 2018
pdfunite /tmp/OldPlots/Figs/statusAndProgress/FADC/*.pdf /tmp/old_fadc_plots_rise_fall_time_signal_background.pdf
And now the new ones:
cd /tmp/
mkdir NewPlots
cd NewPlots
/tmp/fadc_rise_fall_signal_vs_background -b ~/CastData/data/DataRuns2017_Reco.h5 \
  -c ~/CastData/data/CalibrationRuns2017_Reco.h5 \
  --year 2017
/tmp/fadc_rise_fall_signal_vs_background -b ~/CastData/data/DataRuns2018_Reco.h5 \
  -c ~/CastData/data/CalibrationRuns2018_Reco.h5 \
  --year 2018
pdfunite /tmp/NewPlots/Figs/statusAndProgress/FADC/*.pdf /tmp/new_fadc_plots_rise_fall_time_signal_background.pdf
Holy fuck are the differences big!
Copied over to: (and the individual plots as well; the old ones have the *_with_offset suffix and the other ones no suffix).
Most impressive is the difference in the rise time.
Rise time: vs. and vs.
Fall time: vs. and vs.
Two questions that come up immediately:
- [X] How does the Run-2 data split up by the different FADC settings? -> See sec. [BROKEN LINK: sec:fadc:rise_time_different_fadc_amp_settings] for more.
- [ ] What are the peaks in the background data where we have super short rise times? I assume those are just our noise events? Verify!
The code above also produces data for the percentiles of the rise / fall time for the calibration data, which is useful to decide on the cut values.
For 2017:
============================== fallTime ==============================
Percentiles:
	 1-th: 448.0
	 5-th: 491.0
	95-th: 603.0
	99-th: 623.0
============================== riseTime ==============================
Percentiles:
	 1-th: 82.0
	 5-th: 87.0
	95-th: 134.0
	99-th: 223.0
For 2018:
============================== fallTime ==============================
Percentiles:
	 1-th: 503.0
	 5-th: 541.0
	95-th: 630.0
	99-th: 651.0
============================== riseTime ==============================
Percentiles:
	 1-th: 63.0
	 5-th: 67.0
	95-th: 125.0
	99-th: 213.0
Comparing these with the plots shows that the calculation didn't do anything too dumb.
So from these let's eyeball values of:
- rise time: 65 - 200
- fall time: 470 - 640
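A veto window based on these eyeballed values might then look like this minimal sketch (name and signature are assumptions; values in the units of the plots above):

proc passesFadcVeto(riseTime, fallTime: float): bool =
  riseTime in 65.0 .. 200.0 and fallTime in 470.0 .. 640.0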
- Investigate peaks in FADC fall time < 200
The plots:
show small peaks in the background data at values below 200, more pronounced in the Run-2 data. My theory would be that these are noise events, but let's find out:
NOTE: This should not have been run with --chips 3!
plotData --h5file ~/CastData/data/DataRuns2017_Reco.h5 \
  --runType rtBackground \
  --chips 3 \
  --config ~/CastData/ExternCode/TimepixAnalysis/Plotting/karaPlot/config.toml \
  --eventDisplay -1 \
  --cuts '("fadc/fallTime", 0.0, 200.0)' \
  --region crSilver \
  --applyAllCuts \
  --septemboard
some of these generated events are found here:
There is a mix of the following event types present in this data:
- Events that saturate the FADC completely, resulting in a very sharp return to the baseline. This is somewhat expected and fine. (e.g. page 2)
- pure real noise events based on extremely noisy activity on the septemboard (e.g. page 1). This is pretty irrelevant, as these Septemboard events will never be interesting for anything.
- regular Septemboard events with low frequency noise on the FADC (e.g. page 3). These are problematic and we must make sure not to apply the FADC veto for these. Fortunately they usually seem to be detected correctly by the noisy flag. Sometimes they are a bit higher frequency too (e.g. page 7, 12, …).
- regular Septemboard events with very low frequency noise on the FADC, which does not trigger our noisy detection. (e.g. page 19, 39, …). These are very problematic and we need to fix the noise detection for these.
Takeaways from this: The noisy event detection actually works really well already! There are very few events in there that should be considered noisy, but are not!
- DONE Bug in properties on plots [0/1]
Crap (fixed, see below):
page 23 (and page 42) is an interesting event of a very high energy detection on the septemboard with a mostly noise like signal in the FADC. HOWEVER the energy from charge, number of hits etc. properties DO NOT match what we see on the center chip! Not sure what's going on, but I assume we're dealing with a different cluster from the same event number? (Note: page e.g. 37 looks similar but has a reasonable energy! so not all events are problematic).
[X]
Investigate raw data by hand first. Event number 23943, index 29975. -> Takeaway 1: event indices in plotData titles don't make sense. They are larger than the event numbers?! Mixing of indices over all runs? Or what. -> Takeaway 2: The entries in the rows of the raw data that match the event number printed on the side does match the numbers printed on the plot! So it seems like the data seen does not match the numbers. -> Takeaway 3:
- Chips 0, 1, 4, 5, 6 have no data for event number 23943
Chips 2 (idx 6481)), 3 (idx 6719) have data for event number
However, Chip 2 also only has 150 hits at index 6481 (2.41 keV)
This means there is no data at this event number on the whole chip that can explain the data. Is
inner_join
at fault here? :/ Orgroup_by
? Uhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhh I think I just figured it out. Ouch. It's just that the data does not match the event. The event "index" in the title is the event number nowadays! For some reason it gets screwed up for the annotations! The issue is likely that we simply walk through our cluster prop data index by index instead of making sure we get the index for the correct event number.
-> FIXED: Fixed by filtering to the event number manually (this makes sure we get the correct event number instead of aligning indices, even if the latter is more efficient). If there is more than one cluster in the event, the properties of the cluster with the lowest lnL value are printed and a numCluster field is added that tells how many clusters were found in the event (a minimal sketch follows below).

- [ ] VERIFY SEPTEMBOARD EVENTS USED ELSEWHERE ABOVE HAVE CORRECT MATCHES!
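A minimal sketch of the fix (the helper name is hypothetical; the real change lives in the plotData annotation code):

import datamancer

# Look up cluster properties by event number instead of walking the
# cluster data index by index. All clusters sharing the event number are
# returned, so `result.len` directly gives the `numCluster` value above.
proc propsForEvent(df: DataFrame, evNum: int): DataFrame =
  result = df.filter(f{int: `eventNumber` == evNum})
  echo "numCluster = ", result.len, " for event number ", evNum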
8.2.2. Behavior of rise and fall time against energy
import nimhdf5, ggplotnim
import std / [strutils, os, sequtils, sets, strformat]
import ingrid / [tos_helpers, ingrid_types]
import ingrid / calibration / [calib_fitting, calib_plotting]
import ingrid / calibration

proc plotFallTimeRiseTime(df: DataFrame, suffix: string, isCdl: bool) =
  ## Given a full run of FADC data, create the
  ## Note: it may be sensible to compute a truncated mean instead
  proc plotDset(dset: string) =
    for (tup, subDf) in groups(group_by(df, "Type")):
      echo "============================== ", dset, " =============================="
      echo "Type: ", tup
      echo "Percentiles:"
      echo "\t 1-th: ", subDf[dset, float].percentile(1)
      echo "\t 5-th: ", subDf[dset, float].percentile(5)
      echo "\t50-th: ", subDf[dset, float].percentile(50)
      echo "\t mean: ", subDf[dset, float].mean
      echo "\t80-th: ", subDf[dset, float].percentile(80)
      echo "\t95-th: ", subDf[dset, float].percentile(95)
      echo "\t99-th: ", subDf[dset, float].percentile(99)
    df.writeCsv("/tmp/fadc_data_$#.csv" % suffix)
    #let df = df.filter(f{`Type` == "Cu-Ni-15kV"})
    ggplot(df, aes(dset, fill = "Type")) +
      geom_histogram(position = "identity", bins = 100, hdKind = hdOutline, alpha = 0.7) +
      ggtitle(&"Comparison of FADC signal {dset} in ⁵⁵Fe vs background data in $#" % suffix) +
      ggsave(&"Figs/statusAndProgress/FADC/fadc_{dset}_energy_dep_$#.pdf" % suffix)
    ggplot(df, aes(dset, fill = "Type")) +
      geom_density(normalize = true, alpha = 0.7, adjust = 2.0) +
      ggtitle(&"Comparison of FADC signal {dset} in ⁵⁵Fe vs background data in $#" % suffix) +
      ggsave(&"Figs/statusAndProgress/FADC/fadc_{dset}_kde_energy_dep_$#.pdf" % suffix)
    let df = df.filter(f{`riseTime` < 200})
    ggplot(df, aes(dset, fill = "Type")) +
      geom_histogram(position = "identity", bins = 100, hdKind = hdOutline, alpha = 0.7) +
      ggtitle(&"Comparison of FADC signal {dset} in ⁵⁵Fe vs background data in $#" % suffix) +
      ggsave(&"Figs/statusAndProgress/FADC/fadc_{dset}_energy_dep_less_200_rise_$#.pdf" % suffix)
    ggplot(df, aes(dset, fill = "Type")) +
      geom_density(normalize = true, alpha = 0.7, adjust = 2.0) +
      ggtitle(&"Comparison of FADC signal {dset} in ⁵⁵Fe vs background data in $#" % suffix) +
      ggsave(&"Figs/statusAndProgress/FADC/fadc_{dset}_kde_energy_dep_less_200_rise_$#.pdf" % suffix)
    if isCdl:
      let xrayRef = getXrayRefTable()
      var labelOrder = initTable[Value, int]()
      for idx, el in xrayRef:
        labelOrder[%~ el] = idx
      ggplot(df, aes(dset, fill = "Type")) +
        ggridges("Type", overlap = 1.5, labelOrder = labelOrder) +
        geom_density(normalize = true, alpha = 0.7, adjust = 2.0, color = "black") +
        ggtitle(&"Comparison of FADC signal {dset} in ⁵⁵Fe vs background data in $#" % suffix) +
        ggsave(&"Figs/statusAndProgress/FADC/fadc_{dset}_ridgeline_kde_energy_dep_less_200_rise_$#.pdf" % suffix)
    ggplot(df, aes(dset, fill = "Settings")) +
      geom_density(normalize = true, alpha = 0.7, adjust = 2.0, color = "black") +
      ggtitle(dset & " of different FADC settings used") +
      ggsave(&"Figs/statusAndProgress/FADC/fadc_{dset}_kde_different_fadc_ampb_settings_$#.pdf" % suffix)
    ggplot(df, aes(dset, fill = factor("runNumber"))) +
      geom_density(normalize = true, alpha = 0.7, adjust = 2.0, color = "black") +
      ggtitle(dset & " of different runs") +
      ggsave(&"Figs/statusAndProgress/FADC/fadc_{dset}_kde_different_runs_$#.pdf" % suffix)
  plotDset("fallTime")
  plotDset("riseTime")

proc read(fname, typ: string, eLow, eHigh: float, isCdl: bool): DataFrame =
  var h5f = H5open(fname, "r")
  let fileInfo = h5f.getFileInfo()
  var peakPos = newSeq[float]()
  result = newDataFrame()
  for run in fileInfo.runs:
    if recoBase() & $run / "fadc" notin h5f:
      continue # skip runs that were without FADC
    var df = h5f.readRunDsets(
      run,
      fadcDsets = @["eventNumber", "baseline", "riseStart", "riseTime",
                    "fallStop", "fallTime", "minvals", "noisy", "argMinval"]
    )
    let xrayRefCuts = getXrayCleaningCuts()
    let runGrp = h5f[(recoBase() & $run).grp_str]
    let tfKind = if not isCdl: tfMnCr12
                 else: runGrp.attrs["tfKind", string].parseEnum[:TargetFilterKind]()
    let cut = xrayRefCuts[$tfKind]
    let grp = h5f[(recoBase() & $run / "chip_3").grp_str]
    let passIdx = cutOnProperties(
      h5f,
      grp,
      crSilver, # try cutting to silver
      (toDset(igRmsTransverse), cut.minRms, cut.maxRms),
      (toDset(igEccentricity), 0.0, cut.maxEccentricity),
      (toDset(igLength), 0.0, cut.maxLength),
      (toDset(igHits), cut.minPix, Inf),
      (toDset(igEnergyFromCharge), eLow, eHigh)
    )
    let dfChip = h5f.readRunDsets(run, chipDsets = some((chip: 3, dsets: @["eventNumber"])))
    let allEvNums = dfChip["eventNumber", int]
    let evNums = passIdx.mapIt(allEvNums[it]).toSet
    # filter to allowed events & remove any noisy events
    df = df.filter(f{int: `eventNumber` in evNums and `noisy`.int < 1})
    df["runNumber"] = run
    if isCdl:
      df["Type"] = $tfKind
    df["Settings"] = "Setting " & $(@[80, 101, 121].lowerBound(run))
    result.add df
  if not isCdl:
    result["Type"] = typ
  echo result

proc main(fname: string, year: int, energyLow = 0.0, energyHigh = Inf, isCdl = false) =
  if not isCdl:
    var df = newDataFrame()
    df.add read(fname, "escape", 2.5, 3.5, isCdl = false)
    df.add read(fname, "photo", 5.5, 6.5, isCdl = false)
    let is2017 = year == 2017
    let is2018 = year == 2018
    if not is2017 and not is2018:
      raise newException(IOError, "The input file is neither clearly a 2017 nor 2018 calibration file!")
    let yearToRun = if is2017: 2 else: 3
    let suffix = "run$#" % $yearToRun
    plotFallTimeRiseTime(df, suffix, isCdl)
  else:
    let df = read(fname, "", 0.0, Inf, isCdl = true)
    plotFallTimeRiseTime(df, "CDL", isCdl)

when isMainModule:
  import cligen
  dispatch main
ntangle ~/org/Doc/StatusAndProgress.org && nim c -d:danger /t/fadc_rise_fall_energy_dep.nim
./fadc_rise_fall_energy_dep -f ~/CastData/data/CalibrationRuns2017_Reco.h5 --year 2017
Output for 2017:
Percentiles of the fallTime and riseTime distributions (in FADC clock cycles, rounded):

| Dataset  | Type   | 1-th  | 5-th  | 50-th | mean  | 95-th | 99-th |
|---|---|---|---|---|---|---|---|
| fallTime | escape | 406.5 | 476.0 | 563.0 | 559.6 | 624.0 | 660.0 |
| fallTime | photo  | 462.0 | 498.0 | 567.0 | 561.2 | 601.0 | 616.0 |
| riseTime | escape | 78.0  | 84.0  | 103.0 | 114.0 | 177.0 | 340.5 |
| riseTime | photo  | 83.0  | 89.0  | 104.0 | 107.5 | 130.0 | 196.0 |
Output for 2018:
./fadc_rise_fall_energy_dep -f ~/CastData/data/CalibrationRuns2018_Reco.h5 --year 2018
| Dataset  | Type   | 1-th  | 5-th  | 50-th | mean  | 95-th | 99-th |
|---|---|---|---|---|---|---|---|
| fallTime | escape | 456.0 | 512.0 | 585.0 | 582.0 | 640.0 | 677.6 |
| fallTime | photo  | 515.0 | 548.0 | 594.0 | 592.7 | 629.0 | 647.0 |
| riseTime | escape | 60.0  | 66.0  | 86.0  | 96.7  | 160.0 | 309.2 |
| riseTime | photo  | 63.0  | 68.0  | 84.0  | 88.3  | 118.0 | 182.0 |
These values provide the reference for the estimate we will perform next.
- Looking at the CDL data rise / fall times
Time to look at the rise and fall times of the CDL data. We've added a filter to exclude noisy events (sec. 8.2.2.1.1).
./fadc_rise_fall_energy_dep -f ~/CastData/data/CDL_2019/CDL_2019_Reco.h5 \ --year 2019 \ --isCdl
(note that the year is irrelevant here)
Percentiles of the fallTime and riseTime distributions per target/filter kind (in FADC clock cycles, rounded):

| Dataset  | Type          | 1-th  | 5-th  | 50-th | mean  | 80-th | 95-th | 99-th |
|---|---|---|---|---|---|---|---|---|
| fallTime | Ag-Ag-6kV     | 548.0 | 571.0 | 612.0 | 610.8 | 631.0 | 647.0 | 660.0 |
| fallTime | Al-Al-4kV     | 506.5 | 538.0 | 602.0 | 598.8 | 629.0 | 654.0 | 672.0 |
| fallTime | C-EPIC-0.6kV  | 304.0 | 357.0 | 519.0 | 510.6 | 582.0 | 630.0 | 663.0 |
| fallTime | Cu-EPIC-0.9kV | 365.4 | 445.7 | 556.0 | 549.2 | 601.0 | 637.0 | 670.1 |
| fallTime | Cu-EPIC-2kV   | 433.6 | 487.0 | 581.0 | 575.5 | 614.0 | 651.0 | 671.4 |
| fallTime | Cu-Ni-15kV    | 539.0 | 575.0 | 606.0 | 604.7 | 618.0 | 629.0 | 640.0 |
| fallTime | Mn-Cr-12kV    | 540.0 | 568.0 | 604.0 | 603.0 | 620.0 | 634.0 | 646.0 |
| fallTime | Ti-Ti-9kV     | 551.0 | 575.0 | 611.0 | 610.1 | 627.0 | 640.0 | 655.0 |
| riseTime | Ag-Ag-6kV     | 61.0  | 66.0  | 84.0  | 84.5  | 93.0  | 105.0 | 119.0 |
| riseTime | Al-Al-4kV     | 63.5  | 70.0  | 87.0  | 91.6  | 103.0 | 123.0 | 146.0 |
| riseTime | C-EPIC-0.6kV  | 57.0  | 63.0  | 89.0  | 97.0  | 113.0 | 149.0 | 184.6 |
| riseTime | Cu-EPIC-0.9kV | 59.0  | 67.0  | 89.0  | 96.9  | 110.0 | 138.0 | 182.5 |
| riseTime | Cu-EPIC-2kV   | 63.0  | 71.0  | 90.0  | 95.5  | 109.0 | 132.0 | 166.4 |
| riseTime | Cu-Ni-15kV    | 61.0  | 65.0  | 82.0  | 84.0  | 90.0  | 99.0  | 206.2 |
| riseTime | Mn-Cr-12kV    | 61.0  | 65.0  | 81.0  | 83.3  | 89.0  | 98.0  | 185.4 |
| riseTime | Ti-Ti-9kV     | 63.0  | 69.0  | 85.0  | 87.1  | 93.0  | 105.0 | 153.7 |

We copy the CSV file generated by the above command from /tmp/fadc_data_CDL.csv to ./../resources/FADC_rise_fall_times_CDL_data.csv so that we can plot the percentile positions separately in sec. 8.2.2.1.4, which produces the following plots:
- Look at Cu-EPIC-0.9kV events between rise 40-60
The runs for this target/filter kind are: 339, 340
Let's plot those events. NOTE: This should not have been run with --chips 3!

plotData --h5file ~/CastData/data/CDL_2019/CDL_2019_Reco.h5 \
    --runType rtCalibration \
    --chips 3 \
    --config ~/CastData/ExternCode/TimepixAnalysis/Plotting/karaPlot/config.toml \
    --cuts '("fadc/riseTime", 40, 60)' \
    --applyAllCuts \
    --runs 339 --runs 340 \
    --eventDisplay \
    --septemboard
So these events are essentially all just noise events! Which is a good reason to add a noisy filter to the rise time plot!

Considering how the rise times change with energy, it might after all be a good idea to have an energy dependent cut. This is surprising, because in principle we don't expect an energy dependence, but rather a dependence on the absorption length! So AgAg should be less wide than TiTi (a short check of the absorption lengths follows below)!
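A quick sanity check of that expectation, reusing the xrayAttenuation approach from the snippets further below (the line energies, Ag Lα ≈ 2.98 keV and Ti Kα ≈ 4.51 keV, are assumptions of this sketch):

import xrayAttenuation, unchained

# Compare the absorption lengths of the Ag and Ti fluorescence lines in
# argon at our conditions. A longer absorption length means deeper
# conversion points and hence, on average, less drift and less diffusion.
let ar = Argon.init()
let ρ_Ar = density(1050.mbar.to(Pascal), 293.K, ar.molarMass)
for E in [2.98.keV, 4.51.keV]: # ≈ Ag Lα and Ti Kα
  let dist = absorptionLength(E, numberDensity(ρ_Ar, ar.molarMass), ar.f2eval(E))
  echo "λ(", E, ") = ", dist.to(cm)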
- Look at C-EPIC-0.6kV rise time contribution in range: 110 - 130
Similar to the above case, where we discovered the contribution of the noisy events in the data, let's now look at the contributions visible in the range 110 to 130 in the rise time plot:
The runs for the C-EPIC 0.6kV dataset are: 342, 343
Generate the plots. NOTE: This should not have been run with --chips 3!

plotData --h5file ~/CastData/data/CDL_2019/CDL_2019_Reco.h5 \
    --runType rtCalibration \
    --chips 3 \
    --config ~/CastData/ExternCode/TimepixAnalysis/Plotting/karaPlot/config.toml \
    --cuts '("fadc/riseTime", 110, 130)' \
    --applyAllCuts \
    --runs 342 --runs 343 \
    --eventDisplay \
    --septemboard
which are found at:
Looking at them reveals two important aspects:
- There are quite a lot of double events where the signal is made significantly longer by a second X-ray, explaining the longer rise time in cases where the minimum shifts towards the right.
- The data was taken with an extremely high amplification and thus there is significantly more noise on the baseline. In many cases what happens is that the signal is randomly a bit below the baseline, so the riseStart appears a bit earlier, extending the distance to the minimum.
Combined, this explains that the events visible there are mainly a kind of artifact, though not necessarily one we would be able to "deal with". Double hits in real data can of course be neglected, but not the variations causing randomly longer rise times.
However, it is important to realize that this case is not in any way relevant for the CAST data, because we do not have an FADC trigger at those energies! Our trigger was at ~1.5 keV at its lowest and later even closer to 2.2 keV. And we didn't change the gain (outside the specific cases where we adjusted it due to noise).
As such we can ignore the contribution of that second "bump" and essentially only look at the "main peak"!
- Initial look at rise / fall times with weird artifacts
Next up we modified the code above to also work with the CDL data & split each run according to its target/filter kind. In cutOnProperties we currently only use the X-ray cleaning cuts (which may not be ideal, as we will see):

./fadc_rise_fall_energy_dep -f ~/CastData/data/CDL_2019/CDL_2019_Reco.h5 --year 2019 --isCdl
which generated:
with the following percentile outputs:
| Dataset  | Type          | 1-th  | 5-th  | 50-th | mean  | 80-th | 95-th | 99-th |
|---|---|---|---|---|---|---|---|---|
| fallTime | Ag-Ag-6kV     | 378.1 | 610.0 | 656.0 | 649.1 | 674.0 | 693.0 | 707.0 |
| fallTime | Al-Al-4kV     | 141.1 | 510.9 | 632.0 | 614.9 | 663.0 | 690.0 | 714.0 |
| fallTime | C-EPIC-0.6kV  | 23.0  | 26.3  | 515.0 | 459.5 | 595.0 | 653.7 | 687.3 |
| fallTime | Cu-EPIC-0.9kV | 22.0  | 23.0  | 541.0 | 432.0 | 608.0 | 658.0 | 692.7 |
| fallTime | Cu-EPIC-2kV   | 23.8  | 361.0 | 608.0 | 583.2 | 650.0 | 684.0 | 711.2 |
| fallTime | Cu-Ni-15kV    | 367.3 | 626.0 | 656.0 | 649.8 | 667.0 | 679.0 | 691.0 |
| fallTime | Mn-Cr-12kV    | 520.6 | 614.0 | 652.0 | 646.9 | 667.0 | 682.0 | 694.0 |
| fallTime | Ti-Ti-9kV     | 438.6 | 615.0 | 654.0 | 649.1 | 669.0 | 685.0 | 700.0 |
| riseTime | Ag-Ag-6kV     | 67.0  | 77.0  | 110.0 | 126.1 | 151.0 | 234.0 | 326.0 |
| riseTime | Al-Al-4kV     | 65.5  | 77.0  | 102.5 | 111.3 | 130.0 | 179.0 | 244.2 |
| riseTime | C-EPIC-0.6kV  | 12.7  | 62.3  | 92.0  | 100.3 | 121.0 | 157.7 | 204.7 |
| riseTime | Cu-EPIC-0.9kV | 43.3  | 69.7  | 92.0  | 102.6 | 115.0 | 159.0 | 234.6 |
| riseTime | Cu-EPIC-2kV   | 52.8  | 74.0  | 104.0 | 109.3 | 131.0 | 175.0 | 224.4 |
| riseTime | Cu-Ni-15kV    | 68.0  | 79.0  | 146.0 | 216.1 | 374.0 | 516.1 | 600.0 |
| riseTime | Mn-Cr-12kV    | 67.0  | 77.0  | 109.0 | 125.1 | 147.0 | 230.0 | 337.9 |
| riseTime | Ti-Ti-9kV     | 70.0  | 81.0  | 114.0 | 143.5 | 167.0 | 324.4 | 549.4 |

Here we can mainly see that the 95-th percentile of the data is actually quite high in many cases (e.g. MnCr12kV is still "somewhat fine" at 230 for the 95-th, but CuNi15kV is at 516 and TiTi9kV at 324!). Looking at the distributions of the rise times we see an obvious problem, namely rise times on the order of 350-500 in the CuNi15kV dataset! The question is: what is that? Others also have quite a long tail. -> These were just an artifact of our old crappy way to compute rise times, fall times and baselines!
Let's plot the event displays of those CuNi events in that latter blob. I couldn't run plotData, as the CDL data hadn't been run through the modern reconstruction for the FADC yet. After rerunning that, these disappeared! The runs for CuNi15 are: 319, 320, 345. (The rmsTransverse of the CuNi dataset is interesting: essentially just a linear increase up to 1 mm! [[file:~/org/Figs/statusAndProgress/FADC/old_rise_fall_algorithm/CDL_riseTime_fallTime/onlyCleaningCuts/rmsTransverse_run319 320 345_chip3_0.03_binSize_binRange-0.0_6.0_region_crSilver_rmsTransverse_0.1_1.0_eccentricity_0.0_1.3_toaLength_-0.0_20.0_applyAll_true.pdf]] Run the command below with --ingrid and only the rmsTransverse + eccentricity cuts.)

Important note: as of right now the CDL data still suffers from the FADC 0, 1 register = 0 bug! This will partially explain some "background" in the rise/fall times. UPDATE: I reran the --only_fadc option of reconstruction on the CDL H5 file and having done that, the weird behavior of the additional peak at > 350 is completely gone. What did we fix in there again?
- rise / fall time computed not to the baseline, but to an offset below it
- based on a moving average instead of a single value
- a different way to calculate the baseline, based on a truncated mean (see the sketch below)
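To make the last point concrete, here is a minimal toy sketch of a truncated mean baseline (the trim fractions are illustrative assumptions, not the values used in reconstruction):

import std / [algorithm, stats]

# Toy truncated mean baseline: sort the FADC register values, drop the
# lowest fraction (which contains the negative pulse itself) and a small
# highest fraction (outliers), then average the rest. The 30% / 5% trims
# are illustrative placeholders.
proc truncMeanBaseline(data: seq[float], lowTrim = 0.3, highTrim = 0.05): float =
  let s = data.sorted()
  let lo = int(s.len.float * lowTrim)
  let hi = s.len - int(s.len.float * highTrim)
  result = s[lo ..< hi].mean()

The advantage over a plain mean is that the pulse region cannot drag the baseline down, which is exactly the failure mode described above.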
- Rise time and fall time plots of percentile values
With the file ./../resources/FADC_rise_fall_times_CDL_data.csv we can generate plots of the percentiles of each target/filter kind to have an idea where a cutoff for that kind of energy and absorption length might be:
import ggplotnim, xrayAttenuation
import arraymancer except readCsv
import std / strutils
import ingrid / tos_helpers

proc absLength(E: keV): float =
  let ar = Argon.init()
  let ρ_Ar = density(1050.mbar.to(Pascal), 293.K, ar.molarMass)
  result = absorptionLength(E, numberDensity(ρ_Ar, ar.molarMass),
                            ar.f2eval(E).float).float

let df = readCsv("/home/basti/org/resources/FADC_rise_fall_times_CDL_data.csv")
var dfP = newDataFrame()
let dset = "riseTime"
let lineEnergies = getXrayFluorescenceLines()
let invTab = getInverseXrayRefTable()
for (tup, subDf) in groups(group_by(df, "Type")):
  let data = subDf[dset, float]
  var percs = newSeq[float]()
  var percName = newSeq[string]()
  proc percentiles(percs: var seq[float], percName: var seq[string],
                   name: string, val: int) =
    percName.add name
    percs.add data.percentile(val)
  percs.percentiles(percName, "1-th", 1)
  percs.percentiles(percName, "5-th", 5)
  percs.percentiles(percName, "50-th", 50)
  percName.add "mean"
  percs.add data.mean
  percName.add "MPV"
  let kdeData = kde(data)
  let xs = linspace(min(data), max(data), 1000)
  percs.add(xs[kdeData.argmax(0)[0]])
  percs.percentiles(percName, "80-th", 80)
  percs.percentiles(percName, "95-th", 95)
  percs.percentiles(percName, "99-th", 99)
  let typ = tup[0][1].toStr
  let E = lineEnergies[invTab[typ]].keV
  let absLength = absLength(E)
  dfP.add toDf({ "Value" : percs, "Percentile" : percName, "Type" : typ,
                 "Energy" : E.float, "λ" : absLength })

ggplot(dfP, aes("Type", "Value", color = "Percentile")) +
  geom_point() +
  ggsave("/tmp/fadc_percentiles_by_tfkind.pdf")

proc filterPlot(to: string) =
  let dfF = dfP.filter(f{`Percentile` == to})
  let title = if to == "mean": to else: to & " percentile"
  ggplot(dfF, aes("λ", "Value", color = "Type")) +
    geom_point() +
    ggtitle("$# of FADC rise time vs absorption length λ" % title) +
    ggsave("/tmp/fadc_$#_vs_absLength_by_tfkind.pdf" % to)
filterPlot("95-th")
filterPlot("80-th")
filterPlot("mean")
filterPlot("MPV")
- CDL rise / fall times after FADC algorithm updates
Let's apply that to the CDL data, plot some events with baseline, rise / fall lines and then look at distributions.
reconstruction -i ~/CastData/data/DataRuns2018_Reco.h5 --only_fadc
and plot some events:
plotData --h5file ~/CastData/data/CDL_2019/CDL_2019_Reco.h5 \
    --runType rtCalibration \
    --config ~/CastData/ExternCode/TimepixAnalysis/Plotting/karaPlot/config.toml \
    --eventDisplay --septemboard
WAIT. The below only implies something about our calculation of the minimum value of the FADC data (i.e. the minvals dataset), as we use that to draw the lines! -> Fixed this in the plotting. However, another issue appeared: the lines for start and stop were exactly the same! -> findThresholdValue now returns the start and stop parameters. -> Looks much more reasonable now.

New ridge line plots, here we come:
./fadc_rise_fall_energy_dep -f ~/CastData/data/CDL_2019/CDL_2019_Reco.h5 \ --year 2019 \ --isCdl
| Dataset  | Type          | 1-th  | 5-th  | 50-th | mean  | 80-th | 95-th | 99-th |
|---|---|---|---|---|---|---|---|---|
| fallTime | Ag-Ag-6kV     | 529.0 | 553.0 | 595.0 | 592.8 | 612.0 | 629.0 | 643.0 |
| fallTime | Al-Al-4kV     | 493.5 | 522.0 | 585.0 | 583.2 | 612.0 | 638.4 | 658.5 |
| fallTime | C-EPIC-0.6kV  | 296.4 | 349.8 | 509.0 | 500.7 | 573.0 | 620.0 | 653.6 |
| fallTime | Cu-EPIC-0.9kV | 354.9 | 433.0 | 544.0 | 537.1 | 587.0 | 627.0 | 658.5 |
| fallTime | Cu-EPIC-2kV   | 424.0 | 476.1 | 568.0 | 561.8 | 601.0 | 633.0 | 656.4 |
| fallTime | Cu-Ni-15kV    | 524.0 | 555.0 | 588.0 | 586.5 | 600.0 | 611.0 | 622.0 |
| fallTime | Mn-Cr-12kV    | 524.0 | 551.0 | 586.0 | 585.2 | 602.0 | 617.0 | 628.0 |
| fallTime | Ti-Ti-9kV     | 532.0 | 556.0 | 594.0 | 592.4 | 608.0 | 623.0 | 639.0 |
| riseTime | Ag-Ag-6kV     | 50.0  | 54.0  | 70.0  | 70.4  | 78.0  | 88.0  | 103.0 |
| riseTime | Al-Al-4kV     | 53.0  | 59.0  | 73.0  | 77.5  | 87.0  | 105.0 | 128.5 |
| riseTime | C-EPIC-0.6kV  | 48.0  | 54.0  | 78.0  | 86.3  | 102.8 | 134.4 | 175.6 |
| riseTime | Cu-EPIC-0.9kV | 51.0  | 57.0  | 78.0  | 84.9  | 99.0  | 127.0 | 170.1 |
| riseTime | Cu-EPIC-2kV   | 52.0  | 58.0  | 77.0  | 82.1  | 96.0  | 120.0 | 152.4 |
| riseTime | Cu-Ni-15kV    | 50.0  | 53.0  | 68.0  | 70.4  | 75.0  | 83.0  | 186.2 |
| riseTime | Mn-Cr-12kV    | 50.0  | 53.0  | 67.0  | 69.5  | 74.0  | 81.0  | 171.4 |
| riseTime | Ti-Ti-9kV     | 52.0  | 57.0  | 71.0  | 73.0  | 77.0  | 89.0  | 139.1 |

and for Run-2:
./fadc_rise_fall_energy_dep -f ~/CastData/data/Calibration2017_Runs.h5 \ --year 2017
| Dataset  | Type   | 1-th  | 5-th  | 50-th | mean  | 80-th | 95-th | 99-th |
|---|---|---|---|---|---|---|---|---|
| fallTime | escape | 390.0 | 461.0 | 548.0 | 543.8 | 577.0 | 607.0 | 644.0 |
| fallTime | photo  | 449.0 | 483.0 | 550.0 | 544.9 | 568.0 | 584.0 | 599.0 |
| riseTime | escape | 66.0  | 71.0  | 88.0  | 99.7  | 105.0 | 161.0 | 328.5 |
| riseTime | photo  | 71.0  | 75.0  | 89.0  | 92.9  | 98.0  | 114.0 | 181.0 |

and Run-3:
./fadc_rise_fall_energy_dep -f ~/CastData/data/Calibration2018_Runs.h5 \ --year 2018
| Dataset  | Type   | 1-th  | 5-th  | 50-th | mean  | 80-th | 95-th | 99-th |
|---|---|---|---|---|---|---|---|---|
| fallTime | escape | 443.0 | 498.0 | 571.0 | 567.8 | 599.0 | 625.0 | 664.0 |
| fallTime | photo  | 501.0 | 533.0 | 580.0 | 578.2 | 597.0 | 615.0 | 632.0 |
| riseTime | escape | 50.0  | 55.0  | 73.0  | 84.1  | 91.0  | 145.0 | 298.0 |
| riseTime | photo  | 53.0  | 57.0  | 72.0  | 75.9  | 81.0  | 105.0 | 168.9 |

which yield the following plots of interest (all others are found in the path of these):
Comparing them directly with the equivalent plots in ./../Figs/statusAndProgress/FADC/old_rise_fall_algorithm/ shows that the biggest change is simply that the rise times have become a bit smaller, as one might expect.
Upon closer inspection in particular in the CDL data however, it seems like some of the spectra become a tad narrower, losing a part of the additional hump.
In the signal / background case it's hard to say. There is certainly a change, but it is unclear whether it improves the separation.
- Investigation of riseTime tails in calibration data
Let's look at what events look like in the tail of this plot:
What kind of events are, say, above 140?
plotData --h5file ~/CastData/data/CalibrationRuns2018_Reco.h5 \ --runType rtCalibration \ --config ~/CastData/ExternCode/TimepixAnalysis/Plotting/karaPlot/config.toml \ --cuts '("fadc/riseTime", 140.0, Inf)' \ --region crSilver \ --cuts '("rmsTransverse", 0.0, 1.4)' \ --applyAllCuts \ --eventDisplay --septemboard
Looking at these events: it is very easily visible that the root cause of the increased rise time is simply slightly larger than normal noise on the baseline, resulting in a drop 'before' the real rise and extending the signal. This is precisely what the "offset" is intended to combat, but in these cases it doesn't work correctly!
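To make the mechanism concrete, here is a toy sketch of the offset idea (an illustration under assumptions, not the actual reconstruction code; the 10% fraction mirrors the value we try below):

import std / sequtils

# Measure the rise from the first sample crossing
# `baseline - offsetFrac * amplitude` instead of the baseline itself, so
# that baseline noise cannot start the rise "early".
proc riseTimeSamples(data: seq[float], baseline: float, offsetFrac = 0.1): int =
  let minIdx = data.minIndex()             # pulse minimum (negative pulse)
  let thr = baseline - offsetFrac * (baseline - data[minIdx])
  var start = minIdx
  while start > 0 and data[start] < thr:   # walk left until above threshold
    dec start
  result = minIdx - start                  # rise time in FADC samples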
Let's tweak it a bit and see again. We'll rerun the reconstruction with an offset of 10% down, just to see what happens.

After reconstructing the FADC data, we plot the same event number of the first event (maybe more?) of the first plot in the above PDF: run 239, event 1007:
plotData --h5file ~/CastData/data/CalibrationRuns2018_Reco.h5 \ --runType rtCalibration \ --config ~/CastData/ExternCode/TimepixAnalysis/Plotting/karaPlot/config.toml \ --runs 239 \ --events 1007 \ --eventDisplay --septemboard
- run 239, event 1007: -> fixed, now a rise time of 60!
- run 239, event 1181: -> also fixed.
- run 239, event 1068: -> same.
Let's look at the distribution now:
./fadc_rise_fall_energy_dep -f ~/CastData/data/Calibration2018_Runs.h5 \ --year 2018
| Dataset  | Type   | 1-th  | 5-th  | 50-th | mean  | 80-th | 95-th | 99-th |
|---|---|---|---|---|---|---|---|---|
| fallTime | escape | 333.4 | 386.0 | 468.0 | 463.4 | 492.0 | 517.0 | 547.0 |
| fallTime | photo  | 372.0 | 420.0 | 469.0 | 466.0 | 487.0 | 503.0 | 519.0 |
| riseTime | escape | 42.0  | 45.0  | 56.0  | 61.3  | 62.0  | 76.0  | 240.0 |
| riseTime | photo  | 44.0  | 46.0  | 55.0  | 56.7  | 59.0  | 64.0  | 114.9 |

This yields:
- We've essentially removed any tail still present in the data! But does that mean we removed information, i.e. does the background case now also look more similar?
./fadc_rise_fall_signal_vs_background \ -c ~/CastData/data/CalibrationRuns2018_Reco.h5 \ -b ~/CastData/data/DataRuns2018_Reco.h5 \ --year 2018
which yields:
-> Holy crap! I didn't think we could leave the background data this "untouched", but narrow the calibration data as much! It is also very nice that the escape and photo peak data have become even more similar! So one cut might after all be almost enough (barring different FADC settings etc.).
Let's also look at the CDL data again:
- reconstruct it again with the new settings
- plot it:
./fadc_rise_fall_energy_dep -f ~/CastData/data/CDL_2019/CDL_2019_Reco.h5 \ --year 2019 \ --isCdl
| Dataset  | Type          | 1-th  | 5-th  | 50-th | mean  | 80-th | 95-th | 99-th |
|---|---|---|---|---|---|---|---|---|
| fallTime | Ag-Ag-6kV     | 359.1 | 410.0 | 471.0 | 466.8 | 493.0 | 510.0 | 524.0 |
| fallTime | Al-Al-4kV     | 352.0 | 392.0 | 470.0 | 466.0 | 500.0 | 527.0 | 548.5 |
| fallTime | C-EPIC-0.6kV  | 204.4 | 254.8 | 375.0 | 376.7 | 445.0 | 501.0 | 540.0 |
| fallTime | Cu-EPIC-0.9kV | 262.5 | 304.0 | 419.0 | 413.1 | 472.6 | 508.0 | 548.5 |
| fallTime | Cu-EPIC-2kV   | 300.6 | 334.1 | 448.0 | 440.7 | 489.0 | 524.0 | 548.0 |
| fallTime | Cu-Ni-15kV    | 344.0 | 395.0 | 462.0 | 456.9 | 481.0 | 496.0 | 509.0 |
| fallTime | Mn-Cr-12kV    | 354.0 | 407.0 | 464.0 | 459.8 | 483.0 | 499.0 | 513.0 |
| fallTime | Ti-Ti-9kV     | 367.3 | 417.0 | 472.0 | 468.1 | 492.0 | 508.0 | 524.7 |
| riseTime | Ag-Ag-6kV     | 41.0  | 44.0  | 54.0  | 53.9  | 59.0  | 63.0  | 67.0  |
| riseTime | Al-Al-4kV     | 43.0  | 48.0  | 57.0  | 58.2  | 63.0  | 71.0  | 80.0  |
| riseTime | C-EPIC-0.6kV  | 38.0  | 47.0  | 67.0  | 68.4  | 79.0  | 98.0  | 118.0 |
| riseTime | Cu-EPIC-0.9kV | 41.5  | 48.0  | 64.0  | 65.9  | 74.0  | 92.7  | 116.1 |
| riseTime | Cu-EPIC-2kV   | 44.0  | 49.0  | 61.0  | 62.7  | 71.0  | 82.0  | 96.0  |
| riseTime | Cu-Ni-15kV    | 41.0  | 43.0  | 53.0  | 54.8  | 57.0  | 62.0  | 144.6 |
| riseTime | Mn-Cr-12kV    | 41.0  | 43.0  | 53.0  | 54.9  | 57.0  | 62.0  | 152.7 |
| riseTime | Ti-Ti-9kV     | 42.0  | 45.0  | 55.0  | 56.2  | 59.0  | 63.0  | 105.8 |

yielding plots which also give a lot more 'definition'. Keep in mind that the most important lines are those from aluminum. These are essentially all more or less the same width, with the aluminum one in particular maybe a bit wider.
This is pretty good news generally. What I think is going on in detail here is that we see there is an additional "bump" in AgAg6kV, MnCr12kV and a bigger one in CuNi15kV. What do these have in common? They have a longer absorption length and therefore shorter average diffusion! This might actually be the thing we were trying to identify! As there is a larger and larger fraction of these it becomes a significant contribution and not just a 'tail' to lower rise times!
Question: What events are still in the tail of the calibration rise time data, i.e. above a rise time of 100 ns? Let's check:
plotData --h5file ~/CastData/data/CalibrationRuns2018_Reco.h5 \ --runType rtCalibration \ --config ~/CastData/ExternCode/TimepixAnalysis/Plotting/karaPlot/config.toml \ --cuts '("fadc/riseTime", 100.0, Inf)' \ --region crSilver \ --cuts '("rmsTransverse", 0.0, 1.4)' \ --applyAllCuts \ --eventDisplay --septemboard
yielding events like this: we can see that it is almost entirely double hit events, plus a further small fraction of events with a crazy amount of noise. But the double hits make up the biggest fraction.
Does that mean we can filter the data better for our calculation of the percentiles? Ideally we only use single X-rays. Outside of counting the number of clusters on an event, what can we do?
Ah, many of these events are not actually split up and remain a single cluster, which means their eccentricity is very large. But in the plots that produce the rise time KDE we already have a cut on the eccentricity. So I suppose we first need to look at the events that are eccentricity filtered that way as well.
UPDATE: OUCH! The event filter in the FADC scripts that read the data does not apply the eccentricity cut at all, but by accident only the cut on the transverse RMS dataset!!! -> Note: the immediate impact seems to be essentially nil. There is a small change, but it's really very small.
plotData --h5file ~/CastData/data/CalibrationRuns2018_Reco.h5 \ --runType rtCalibration \ --config ~/CastData/ExternCode/TimepixAnalysis/Plotting/karaPlot/config.toml \ --cuts '("fadc/riseTime", 100.0, Inf)' \ --region crSilver \ --cuts '("rmsTransverse", 0.0, 1.2)' \ --cuts '("eccentricity", 0.0, 1.4)' \ --applyAllCuts \ --eventDisplay --septemboard
where we can see that what is left are events of one of the two cases:
- clearly separated clusters that are reconstructed as separate clusters
- clusters that are clearly double hits based on the FADC signal, but look like a perfect single cluster in the InGrid data
The latter is an interesting "problem". Theoretically, a peak finding algorithm for the FADC data (similar to what is used for the noise detection) could identify those. But at the same time I feel that we have justification enough to simply cut away any events with a rise time larger than some value X and compute the cut value only based on the rest. From the gaseous detector physics we know how this behaves, and our data describes our expectation well enough now. So a part of me says we should just take the maximum expected value and apply some multiplier to its rise time to get a hard cut for the data. Only the data below that will then be used to determine the desired percentile efficiency cut (a sketch of this two-step logic follows below).
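A sketch of that two-step logic (names and numbers hypothetical):

import arraymancer
import std / sequtils

# First remove the double hit / noise tail with a hard upper cut on the
# rise time, then choose the veto cut as a percentile of the remaining,
# clean distribution. `hardCut` and the 95% efficiency are placeholders.
proc percentileCut(riseTimes: seq[float], hardCut: float, perc = 95): float =
  let clean = riseTimes.filterIt(it < hardCut).toTensor
  result = clean.percentile(perc)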
8.2.3. Difference in FADC rise times for different FADC amplifier settings
One of the big questions looking at the rise time as a means to improve the FADC veto is the effect of the different FADC amplifier settings used in 2017.
The code above (/tmp/fadc_rise_fall_energy_dep.nim) produces a plot splitting up the different FADC settings if fed with the Run-2 data. The result is fig. 65. The difference is very stark, implying we definitely need to pick the cut values at least on a per-setting level.
However, I would have assumed that the distribution of setting 3 (the last one) would match the distribution of Run-3, fig. 66. But the peak is at even lower values than setting 1 (namely below 60!). What. Maybe we didn't rerun the 10 percent offset on the calibration data yet? Nope, I checked, all up to date. Maybe the Run-3 data is not? Also up to date.
This brings up the question whether the effect is not actually a "per setting", but a "per run" effect?
No, that is also not the case. Compare:
The Run-3 data clearly has all runs more or less sharing the same rise times (still though, different cuts may be useful?). And in the Run-2 data we see again more or less 3 distinct distributions.
This begs the question whether we actually ran with yet another setting in Run-3 than at the end of Run-2. This is certainly possible. In the end it is not worth trying to understand in detail; the likely reason is precisely such a settings change. All we care about then is to define cuts that are distinct for each run period & settings combination. So 4 different cuts in total, 3 for Run-2 and 1 for Run-3 (a sketch of such a lookup follows below).
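As a sketch of what such a lookup could look like (the setting boundaries, runs 80, 101, 121, come from the analysis script earlier in this section; all cut values are hypothetical placeholders):

import std / algorithm

# Map a run to one of the 4 rise time cuts: 3 for the Run-2 FADC amplifier
# settings, 1 for all of Run-3. The values below are placeholders to be
# replaced by the chosen percentile values.
proc riseTimeCut(run: int, isRun3: bool): float =
  if isRun3:
    result = 100.0                         # single Run-3 cut (placeholder)
  else:
    case @[80, 101, 121].lowerBound(run)
    of 0, 1: result = 130.0                # first setting (placeholder)
    of 2:    result = 120.0                # second setting (placeholder)
    else:    result = 110.0                # third setting (placeholder)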
One weird aspect is the fall time in Run-2, namely the case of setting 2. That setting really seems to have shortened the fall time significantly.
8.2.4. Estimating expected rise times
Generally speaking we should be able to estimate the rise time of the FADC signals from the gaseous detector physics.
The maximum diffusion possible for an X-ray photon should lead to a maximum duration of the FADC signal. This time then needs to be folded with the integration time. The result should be the expected FADC signal shape.
(Note that different energies have different penetration depths on average. The lower the energy, the shorter the absorption length in the gas, resulting on average in more diffusion.)
Going by the plots generated from
plotData --h5file ~/CastData/data/CalibrationRuns2017_Reco.h5 \ --runType rtCalibration \ --chips 3 \ --config ~/CastData/ExternCode/TimepixAnalysis/Plotting/karaPlot/config.toml \ --ingrid \ --cuts '("rmsTransverse", 0.1, 1.1)' \ --applyAllCuts \ --region crSilver
we can conclude that typically the length is a bit less than 6 mm and the transverse RMS about 1 mm (which should be what we get from the transverse diffusion coefficient!). So let's go with those numbers.
A drift velocity of 2 cm·μs⁻¹ implies a drift time for the full X-ray cluster of:

import unchained
let v = 2.cm•μs⁻¹
let s = 6.mm
echo s / v
which is 300 ns. Naively that would equate to 300 clock cycles of the FADC. But our rise times are typically less than 150, certainly less than 300 clock cycles. How come?

Inversely, a time of 150 clock cycles corresponds to about 150 ns and thus about half the length, namely 3 mm.
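A quick unchained cross check of this arithmetic (assuming the 1 GHz FADC clock used elsewhere in this document):

import unchained
defUnit(cm•μs⁻¹)

# 150 clock cycles at 1 GHz are 150 ns; at a drift velocity of 2 cm·μs⁻¹
# this corresponds to a length of 3 mm.
echo 150.ns * 2.cm•μs⁻¹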
The length is related to the transverse diffusion. So what does the longitudinal diffusion look like in comparison? Surely not a factor of 2 off?
Refs:
- Talk that mentions relation of transverse & longitudinal diffusion: Diffusion coefficient: D = 1/3 v λ with longitudinal: 1/3 D and transverse: 2/3 D https://www.physi.uni-heidelberg.de/~sma/teaching/ParticleDetectors2/sma_GasDetectors_1.pdf
- Sauli, ./../Papers/Gaseous Radiation Detectors Fundamentals and Applications (Sauli F.) (z-lib.org).pdf, on page 92 mentions the relation of σL to σT: σT = σL / √(1 + ω²τ²), where ω = eB/m (but our B = 0!?) and τ is the mean collision time. Naively I would interpret this formula to say σT = σL without a B field though.
- Paper about gas properties for LHC related detectors. Contains (not directly comparable) plots of longitudinal and transverse data for Argon mixtures: page 18 (fig. 9): Ar longitudinal diffusion: the top right plot contains Ar/Isobutane, but at most 90/10. Best we have though: at 500 V/cm (our drift field) all mixtures are about 200 μm/cm. Page 22 (fig. 13): Ar transverse diffusion: top right plot, the closest listed mixture is again Ar/Iso 90/10. That one at 500 V/cm is 350 μm/cm. https://arxiv.org/pdf/1110.6761.pdf
  However, the scaling between different mixtures is very large in transverse, but not in longitudinal. Assuming the longitudinal value is the same in 97.7/2.3 at 200 μm/cm, but the transverse keeps jumping, it'll easily be more than twice as high.
- Our old paper (Krieger 2017) https://arxiv.org/pdf/1709.07631.pdf He reports a number of 474 and 500 μm/√cm (square root centimeter??)
- [X] I think just compute with PyBoltz. -> Done.
UNRELATED HERE, BUT GOOD INFO: https://arxiv.org/pdf/1110.6761.pdf contains good info on the Townsend coefficient and how it relates to the gas gain! Page 11:

Townsend coefficient: The average distance an electron travels between ionizing collisions is called the mean free path, and its inverse is the number of ionizing collisions per cm, α (the first Townsend coefficient). This parameter determines the gas gain. If n₀ is the number of primary electrons without amplification in a uniform electric field, and n is the number of electrons after a distance x under avalanche conditions, then n is given by n = n₀·e^(αx) and the gas gain G by G = n/n₀ = e^(αx). The first Townsend coefficient depends on the nature of the gas, the electric field and the pressure.
(also search for first Townsend coefficient in Sauli, as well as for "T/P") -> Also look at what Townsend coefficient we get for different temperatures using PyBoltz!
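As a worked example of the gain formula with made-up numbers (both α and the gap size are purely illustrative, not our detector values):

import std / math

# G = e^(αx): a first Townsend coefficient of α = 1400 /cm over an
# amplification gap of x = 50 μm gives a gain of e^7 ≈ 1100.
let α = 1400.0          # ionizing collisions per cm (illustrative)
let x = 50e-4           # 50 μm expressed in cm
echo "G = ", exp(α * x)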
Where are we at now?
- [ ] Use our computed values for the longitudinal / transverse diffusion to make a decision about the FADC rise time cut.
- [ ] Determine the Townsend coefficient from PyBoltz so that we can compute equivalent numbers to the temperature variation in the amplification region. Can we understand why the gain changes the way it does?
Let's start with the first item by computing the values to the best of our knowledge.
- Pressure in the detector: \(\SI{1050}{mbar} = \SI{787.6}{torr}\)
- Gas: Argon/Isobutane: 97.7 / 2.3 %
- Electric field in drift region: \(\SI{500}{V.cm⁻¹}\)
- Temperature: \(\sim\SI{30}{\celsius}\); the temperature is by far the biggest issue to estimate properly, of course. This value is on the higher end for sure, but takes into account that the detector itself performs some kind of heating that also affects the gas in the drift region. Because of that we will simply simulate a range of temperatures.
- PyBoltz setup
To run the above on this machine we need to do:
cd ~/src/python/PyBoltz/
source ~/opt/python3/bin/activate
source setup.sh
python examples/argon_isobutane_cast.py
where the setup.sh file was modified from the shipped version to:

#!/usr/bin/env zsh
# setup the environment
export PYTHONPATH=$PYTHONPATH:$PWD
export PATH=$PATH:$PWD
echo $PYTHONPATH
# build the code
python3 setup_build.py clean
python3 setup_build.py build_ext --inplace
- Diffusion coefficient and drift velocity results for CAST conditions
UPDATE: I wrote a version using NimBoltz, ./../../CastData/ExternCode/TimepixAnalysis/Tools/septemboardCastGasNimboltz/septemboardGasCastNimBoltz.nim, to not depend on brittle Python anymore.

The code we use is ./../../CastData/ExternCode/TimepixAnalysis/Tools/septemboardCastGasNimboltz/septemboardGasCastPyboltz.nim, which calls PyBoltz from Nim and uses cligen's procpool to multiprocess it. It calculates the gas properties at the above parameters for a range of different temperatures, as the temperature is the main difference we have experienced over the full data taking period.

Running the code:
cd $TPA/Tools/septemboardCastGasPyboltz
./septemboardCastGasPyboltz --runPyBoltz

(to re-generate the output data by actually calling PyBoltz; note that this requires PyBoltz to be available to the Python installation at highest priority in your PATH), or:

cd $TPA/Tools/septemboardCastGasPyboltz
./septemboardCastGasPyboltz --csvInput $TPA/resources/ar_iso_97_7_2_3_septemboard_cast_different_temps.csv
and it yields the following Org table as an output:
| E [V•cm⁻¹] | T [K] | v [mm•μs⁻¹] | Δv [mm•μs⁻¹] | σT_σL [UnitLess] | ΔσT_σL [UnitLess] | σT [μm•√cm] | σL [μm•√cm] | ΔσT [μm•√cm] | ΔσL [μm•√cm] |
|---|---|---|---|---|---|---|---|---|---|
| 500 | 289.2 | 23.12 | 0.005422 | 2.405 | 0.04274 | 630.8 | 262.3 | 9.013 | 2.772 |
| 500 | 291.2 | 23.08 | 0.004498 | 2.44  | 0.05723 | 633.5 | 259.7 | 6.898 | 5.395 |
| 500 | 293.2 | 23.02 | 0.003118 | 2.599 | 0.06341 | 644.4 | 247.9 | 9.784 | 4.734 |
| 500 | 295.2 | 22.97 | 0.006927 | 2.43  | 0.06669 | 645.9 | 265.8 | 11.54 | 5.541 |
| 500 | 297.2 | 22.91 | 0.004938 | 2.541 | 0.05592 | 651.2 | 256.3 | 9.719 | 4.147 |
| 500 | 299.2 | 22.87 | 0.006585 | 2.422 | 0.05647 | 644.2 | 266   | 8.712 | 5.05  |
| 500 | 301.2 | 22.83 | 0.005237 | 2.362 | 0.06177 | 634.9 | 268.8 | 8.775 | 5.966 |
| 500 | 303.2 | 22.77 | 0.004026 | 2.539 | 0.07082 | 666.6 | 262.5 | 11.95 | 5.611 |
| 500 | 305.2 | 22.72 | 0.006522 | 2.492 | 0.07468 | 657.6 | 263.9 | 11.2  | 6.507 |
| 500 | 307.2 | 22.68 | 0.006308 | 2.492 | 0.05062 | 636.6 | 255.4 | 7.968 | 4.085 |
| 500 | 309.2 | 22.64 | 0.006007 | 2.472 | 0.06764 | 664.6 | 268.8 | 12.21 | 5.45  |
| 500 | 311.2 | 22.6  | 0.00569  | 2.463 | 0.05762 | 657.9 | 267.1 | 9.425 | 4.94  |
| 500 | 313.2 | 22.55 | 0.006531 | 2.397 | 0.0419  | 662.1 | 276.2 | 9.911 | 2.492 |
| 500 | 315.2 | 22.51 | 0.003245 | 2.404 | 0.04582 | 654.7 | 272.4 | 6.913 | 4.323 |
| 500 | 317.2 | 22.46 | 0.005834 | 2.593 | 0.07637 | 682   | 263   | 12.92 | 5.929 |
| 500 | 319.2 | 22.42 | 0.006516 | 2.594 | 0.06435 | 681.8 | 262.8 | 9.411 | 5.417 |
| 500 | 321.2 | 22.38 | 0.003359 | 2.448 | 0.05538 | 670.2 | 273.7 | 8.075 | 5.239 |
| 500 | 323.2 | 22.34 | 0.004044 | 2.525 | 0.08031 | 677.5 | 268.3 | 11.4  | 7.244 |
| 500 | 325.2 | 22.3  | 0.005307 | 2.543 | 0.06627 | 677.6 | 266.5 | 12.87 | 4.755 |
| 500 | 327.2 | 22.26 | 0.007001 | 2.465 | 0.05675 | 682.3 | 276.8 | 8.391 | 5.387 |
| 500 | 329.2 | 22.22 | 0.002777 | 2.485 | 0.07594 | 679.1 | 273.3 | 12.39 | 6.701 |
| 500 | 331.2 | 22.19 | 0.004252 | 2.456 | 0.06553 | 667.3 | 271.7 | 10    | 5.995 |
| 500 | 333.2 | 22.15 | 0.004976 | 2.563 | 0.06788 | 687.5 | 268.2 | 12.78 | 5.059 |
| 500 | 335.2 | 22.11 | 0.004721 | 2.522 | 0.06608 | 702   | 278.4 | 12.24 | 5.446 |
| 500 | 337.2 | 22.07 | 0.00542  | 2.467 | 0.09028 | 676.4 | 274.1 | 11.17 | 8.952 |
| 500 | 339.2 | 22.03 | 0.003971 | 2.527 | 0.04836 | 678.8 | 268.6 | 12.36 | 1.577 |
| 500 | 341.2 | 21.99 | 0.005645 | 2.575 | 0.07031 | 697   | 270.7 | 9.502 | 6.403 |
| 500 | 343.2 | 21.96 | 0.005118 | 2.535 | 0.06872 | 696.6 | 274.8 | 10.09 | 6.297 |

and these plots:
showing the drift velocity, transverse & longitudinal diffusion coefficients and the ratio of the two coefficients against the temperature.
The data file generated (essentially the above table) is available here:
- ./../../CastData/ExternCode/TimepixAnalysis/resources/ar_iso_97_7_2_3_septemboard_cast_different_temps.csv
- ./../resources/ar_iso_97_7_2_3_septemboard_cast_different_temps.csv
- ./../../phd/resources/ar_iso_97_7_2_3_septemboard_cast_different_temps.csv
(and by extension on Github).
- Computing an expected rise time from gas properties
Now that we know the properties of our gas, we can compute the expected rise times.
What we need are the following things:
- drift velocity
- transverse diffusion
- detector height
- (optional as check) length of the X-ray clusters
- longitudinal diffusion
The basic idea is just:
- based on the height of the detector compute:
  - the maximum transverse diffusion (which we can cross check!)
  - the maximum longitudinal diffusion: a) based on the gas property numbers, b) based on the known length data, scaled by σT/σL
- the maximum longitudinal length corresponds to a maximum time possibly seen in the drift through the grid
- this max time corresponds to an upper limit on rise times for real X-rays!
Let's do this by reading the CSV file of the gas and see where we're headed. When required we will pick a temperature of 26°C to be on the warmer side, somewhat taking into account the fact that the septemboard should itself heat the gas somewhat (it might actually be more in reality!).
- [ ] MOVE THIS TO THESIS WHEN DONE!

NOTE: The code below is a bit tricky, as the handling of units in Measuremancer is still problematic, and unchained supports neither centigrade nor square root units!
import datamancer, unchained, measuremancer
# first some known constants
const
  FadcClock = 1.GHz
  DetectorHeight = 3.cm
let MaxEmpiricalLength = 6.mm ± 0.5.mm # more or less!
let df = readCsv("/home/basti/phd/resources/ar_iso_97_7_2_3_septemboard_cast_different_temps.csv")
# compute the mean value (by accumulating & dividing to propagate errors correctly)
var σT_σL: Measurement[float]
for i in 0 ..< df.len:
  σT_σL += df["σT_σL [UnitLess]", i, float] ± df["ΔσT_σL [UnitLess]", i, float]
σT_σL = σT_σL / (df.len.float)
# Note: the temperature is centigrade and not kelvin as the header implies, oops.
let df26 = df.filter(f{float -> bool: abs(idx("T [K]") - (26.0 + 273.15)) < 1e-4})
let v = df26["v [mm•μs⁻¹]", float][0].mm•μs⁻¹ ± df26["Δv [mm•μs⁻¹]", float][0].mm•μs⁻¹
let σT = df26["σT [μm•√cm]", float][0] ± df26["ΔσT [μm•√cm]", float][0]
let σL = df26["σL [μm•√cm]", float][0] ± df26["ΔσL [μm•√cm]", float][0]
# 1. compute the maximum transverse and longitudinal diffusion we expect,
#    dealing with the ugly sqrt units of the regular coefficient
let maxDiffTransverse = (σT * sqrt(DetectorHeight.float) / 1000.0 * 1.0.mm)   # manual conversion from μm to mm
let maxDiffLongitudinal = (σL * sqrt(DetectorHeight.float) / 1000.0 * 1.0.mm) # manual conversion from μm to mm
echo "Maximum transverse diffusion = ", maxDiffTransverse
echo "Maximum longitudinal diffusion = ", maxDiffLongitudinal
# however, the diffusion gives us only the `1 σ` of the diffusion. For that
# reason it matches pretty much exactly with the transverse RMS data we have
# from our detector!
# First of all the length of the cluster will be twice the sigma (sigma is
# one sided!) and then not only a single sigma, but more like ~3.
let maxClusterSize = 3 * (2 * maxDiffTransverse)
let maxClusterHeight = 3 * (2 * maxDiffLongitudinal)
echo "Expected maximum (transverse) length of a cluster = ", maxClusterSize
echo "Expected maximum longitudinal length of a cluster = ", maxClusterHeight
# this does actually match our data of peaking at ~6 mm reasonably well.
# From this now let's compute the expected longitudinal length using
# the known length data and the known fraction:
let maxEmpiricalHeight = MaxEmpiricalLength / σT_σL
echo "Empirical limit on the cluster height = ", maxEmpiricalHeight
# and finally convert these into times from a clock frequency
## XXX: Conversion from micro second to nano second is broken in Measuremancer!
## -> it's not broken, but `to` is simply not meant for Unchained conversions yet.
## I also think something related to unit conversions in the errors is broken!
## -> Math is problematic with different units as of now.. Our forced type
##    conversions in measuremancer remove information!
## -> maybe use `to` correctly everywhere? Well, for now does not matter.
let maxTime = (maxClusterHeight / v) # * (1.0 / FadcClock).to(Meter)
echo "Max rise time = ", maxTime
# and from the empirical conversion:
let maxEmpiricalTime = (maxEmpiricalHeight / v) # * (1.0 / FadcClock).to(Meter)
echo "Max empirical rise time = ", maxEmpiricalTime
- [X] Investigate the errors on the maximum rise time! -> Done: the issue is that Measuremancer screws up the errors because of a forced type conversion of the errors.
From the above we can see that we expect a maximum rise time of something like 121 ns (clock cycles) from theory and if we use our empirical results about 105 ns.
These numbers match quite well with our median / mean and percentile values in the above section!
- [ ] COMPUTE OUR GAS TEMPERATURE FROM RISE TIME & rmsTransverse PROPERTY -> What temperature matches our measured transverse RMS best? And our rise time? Is the gas property impact of a variation in temperature even big enough, or are other impacts (e.g. field distortions etc.) more likely?
- Estimating typical diffusion distances
The typical distance that an X-ray of a known energy drifts depends in the first place on the typical absorption length in the material: the photon first has to convert, and only the remaining distance to the readout contributes to diffusion.
If we look at the transverse RMS of our data, we see a peak at about 1 mm. But what does this RMS correspond to? It corresponds to X-rays of the typical drift distance, and that is set by the typical conversion point of a 5.9 keV photon. So let's compute the typical absorption length of such a photon to get a correction:
import xrayAttenuation, unchained, datamancer
let ar = Argon.init()
let ρ_Ar = density(1050.mbar.to(Pascal), 293.K, ar.molarMass)
let E = 5.9.keV
let dist = absorptionLength(E, numberDensity(ρ_Ar, ar.molarMass), ar.f2eval(E))
echo "Dist = ", dist, " in cm ", dist.to(cm)
2 cm??? -> Yes, this is correct and led to the discussion in sec. 3.3.2! It all makes sense once one properly simulates it (the absorption follows an 'exponential decay' after all!).
The big takeaway from looking at the correct distribution of cluster sizes given the absorption length is that a significant fraction of X-rays still diffuses to essentially the "cutoff" value. At very large absorption lengths λ > 2 cm this does lead to a general trend towards smaller clusters than calculated based on the full 3 cm drift, but only by 10-20% at most.
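To illustrate the mechanism, here is a toy Monte Carlo (a sketch, not the proper simulation referenced above; it only compares the mean √(drift distance) to the full-drift value, so its numbers will not match the full treatment exactly):

import std / [random, math, strformat]
# Toy model: X-rays convert at depth z following exponential absorption with
# absorption length λ; only the remaining distance (height - z) contributes
# to diffusion, which scales with the square root of the drift distance.
proc meanRelDiffusion(λ: float, height = 3.0, n = 100_000): float =
  var rnd = initRand(42)
  var count = 0
  while count < n:
    let z = -λ * ln(1.0 - rnd.rand(0.999999)) # exponential conversion depth
    if z < height:                            # X-ray must convert inside the volume
      result += sqrt((height - z) / height)
      inc count
  result /= n.float

for λ in [0.5, 1.0, 2.0, 3.0]:
  echo &"λ = {λ:.1f} cm -> mean diffusion relative to full 3 cm drift: {meanRelDiffusion(λ):.2f}"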
- Preliminary results after first run of simple code snippet
The text below was written based on the initial results from the very first snippet we ran for a single temperature in Python. The Nim code snippet printed here is already a second version, close to what we ended up running in the end. The results below the snippet are from that very first Python run for a single data point!
The setup to have the PyBoltz library available (e.g. via a virtualenv) is of course also needed here.
The following is the script to run this code. It needs the PyBoltz library installed of course (available in our virtualenv).

- [X] Better rewrite the below as a Nim script, then increase the number of collisions (does that increase accuracy?) and use procpool to run 32 of these simulations (for different temps for example) at the same time. Also makes it much easier to deal with the data… -> Done.
import ggplotnim, unchained, measuremancer, nimpy
import std / [strformat, json, times]
defUnit(V•cm⁻¹)
defUnit(cm•μs⁻¹)
type
  MagRes = object
    E: V•cm⁻¹
    T: K
    v: Measurement[cm•μs⁻¹]
    σT: Measurement[float] # μm²•cm⁻¹; we currently do not support √unit :(
    σL: Measurement[float] # μm²•cm⁻¹

proc toMagRes(res: PyObject, temp: Kelvin): MagRes =
  result = MagRes(T: temp)
  let v = res["Drift_vel"].val[2].to(float)
  let Δv = res["Drift_vel"].err[2].to(float)
  result.v = v.cm•μs⁻¹ ± Δv.cm•μs⁻¹
  # now get diffusion coefficients for a single centimeter (well, √cm)
  let σ_T1 = res["DT1"].val.to(float)
  let Δσ_T1 = res["DT1"].err.to(float)
  result.σT = (σ_T1 ± Δσ_T1)
  let σ_L1 = res["DL1"].val.to(float)
  let Δσ_L1 = res["DL1"].err.to(float)
  result.σL = (σ_L1 ± Δσ_L1)

proc `$`(m: MagRes): string =
  result.add &"T = {m.T}"
  result.add &"σ_T1 = {m.σT} μm·cm⁻⁰·⁵"
  result.add &"σ_L1 = {m.σL} μm·cm⁻⁰·⁵"

proc toDf(ms: seq[MagRes]): DataFrame =
  result = newDataFrame()
  for m in ms:
    var df = newDataFrame()
    for field, data in fieldPairs(m):
      when typeof(data) is Measurement:
        let uof = unitOf(data.value)
        let unit = &" [{uof}]"
        df[field & unit] = data.value.float
        df["Δ" & field & unit] = data.error.float
      else:
        let uof = unitOf(data)
        let unit = &" [{uof}]"
        df[field & unit] = data.float
    result.add df

let pb = pyImport("PyBoltz.PyBoltzRun")
# Set up helper object
let PBRun = pb.PyBoltzRun()
# Configure settings for our simulation
var Settings = %* { "Gases" : ["ARGON", "ISOBUTANE"],
                    "Fractions" : [97.7, 2.3],
                    "Max_collisions" : 4e7,
                    "EField_Vcm" : 500,
                    "Max_electron_energy" : 0,
                    "Temperature_C" : 30,
                    "Pressure_Torr" : 787.6,
                    "BField_Tesla" : 0,
                    "BField_angle" : 0,
                    "Angular_dist_model" : 1,
                    "Enable_penning" : 0,
                    "Enable_thermal_motion" : 1,
                    "ConsoleOutputFlag" : 1 }
let t0 = epochTime()
var res = newSeq[MagRes]()
let temps = arange(14.0, 36.0, 2.0)
for temp in temps:
  Settings["Temperature_C"] = % temp
  # commence the run!
  res.add(PBRun.Run(Settings).toMagRes((temp + 273.15).K))
let t1 = epochTime()
echo "time taken = ", t1 - t0
echo res[^1]
let df = res.toDf()
echo df.toOrgTable()
The output of the above is
Input Decor_Colls not set, using default 0
Input Decor_LookBacks not set, using default 0
Input Decor_Step not set, using default 0
Input NumSamples not set, using default 10
Trying 5.6569 Ev for final electron energy - Num analyzed collisions: 3900000
Calculated the final energy = 5.6568542494923815
Velocity Position Time        Energy DIFXX  DIFYY   DIFZZ
22.7     0.3      11464558.7  1.1    3854.8 20838.4 0.0
22.7     0.5      22961894.0  1.1    8647.8 12018.0 0.0
22.7     0.8      34532576.4  1.1    7714.7 12014.1 202.4
22.7     1.0      46113478.7  1.1    6105.2 11956.1 641.4
22.7     1.3      57442308.9  1.1    5840.9 9703.4  739.7
22.8     1.6      68857082.2  1.1    7759.2 8817.9  608.6
22.8     1.8      80311917.6  1.1    7648.9 8248.2  574.8
22.8     2.1      91754361.3  1.1    7184.4 7322.1  611.2
22.8     2.3      103265642.2 1.1    7569.3 6787.9  656.5
22.8     2.6      114853263.8 1.1    7298.9 6968.7  764.8
time taken 103.45310592651367
σ_T = 7133.782820393255 ± 1641.0801103506417
σ_L = 764.8143711898326 ± 160.83754918535615
σ_T1 = 791.8143345965309 ± 91.07674206533429
σ_L1 = 259.263534784396 ± 27.261384253733745
What we glean from this is that the diffusion coefficients we care about (namely the *1 versions) are:
\[ σ_T = 791.8 ± 91.1 μm·√cm \]
and
\[ σ_L = 259.26 ± 27.3 μm·√cm \]
which turns out to be a ratio of:
\[ \frac{σ_T}{σ_L} = 3.05 \]
So, surprisingly, the transverse diffusion is a full factor of 3 larger than the longitudinal diffusion!
In addition we can read off the drift velocity of \(\SI{2.28}{cm·μs⁻¹}\).
The main output being:
| T [°C] | v [mm·μs⁻¹] | σT [μm·√cm] | σL [μm·√cm] |
|---|---|---|---|
| 14.0 | 23.122 ± 0.045 | 720.5 ± 92.3 | 255.3 ± 25.2 |
| 16.0 | 23.104 ± 0.036 | 616.6 ± 53.9 | 222.1 ± 17.8 |
| 18.0 | 23.076 ± 0.037 | 645.5 ± 64.6 | 275.7 ± 37.9 |
| 20.0 | 22.998 ± 0.026 | 640.3 ± 68.9 | 236.4 ± 36.0 |
| 22.0 | 22.932 ± 0.033 | 615.9 ± 74.2 | 242.3 ± 28.1 |
| 24.0 | 22.871 ± 0.043 | 742.2 ± 72.9 | 263.1 ± 34.6 |
| 26.0 | 22.834 ± 0.031 | 626.8 ± 67.2 | 260.0 ± 32.5 |
| 28.0 | 22.800 ± 0.044 | 614.8 ± 51.8 | 246.1 ± 29.6 |
| 30.0 | 22.723 ± 0.037 | 698.6 ± 79.6 | 260.9 ± 28.0 |
| 32.0 | 22.727 ± 0.032 | 681.7 ± 77.0 | 260.4 ± 31.6 |
| 34.0 | 22.610 ± 0.036 | 621.0 ± 73.8 | 279.7 ± 30.0 |

In any case, the preliminary result is that this does indeed give a reasonably good explanation for why the rise time for X-rays is only of the order of 100 ns instead of 300 ns (as one might naively expect from the drift velocity).
8.2.5. About the fall time
The fall time is dominated by the RC characteristics of the FADC readout chain.
The problem here is that we lack information: the FADC readout happens via a \(\SI{10}{nF}\) capacitor, but we don't really know anything about the relevant resistance, or what the real numbers are.
From the schematic in Deisting's MSc thesis we can glean a resistor of \(\SI{12}{\mega\ohm}\) and a capacitor of \(\SI{470}{pF}\). Together these give an RC time of:
import unchained
let R = 12.MΩ
let C = 470.pF # 10.nF
echo "τ = ", R*C
about 5.6 ms! Waaayyy too long. So these are clearly not the relevant pieces of information. We'd likely need the actual component values of the full readout chain to compute a meaningful RC constant.
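To at least get a feeling for the scale: if we had a measured fall time and assumed it equals the RC constant, we could invert the relation for the effective resistance. A minimal sketch with a placeholder fall time (τ_obs is not a measured value):

import unchained
# Hypothetical inversion of τ = R·C for the two capacitor values mentioned above.
let τ_obs = 1.μs # placeholder fall time, NOT a measurement
let C1 = 470.pF
let C2 = 10.nF
echo "R for 470 pF: ", τ_obs.to(Second).float / C1.to(Farad).float, " Ω"
echo "R for 10 nF:  ", τ_obs.to(Second).float / C2.to(Farad).float, " Ω"

For a fall time of 1 μs this gives roughly 2.1 kΩ and 100 Ω respectively, which at least bounds the plausible scale of the resistance involved.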
8.3. Updating the FADC algorithms for rise & fall time as well as data structure
We've ended up performing the following changes in the commits from around this time:
- updated the algorithm that computes the rise and fall time of the FADC data such that we don't start from the minimum register, but an offset away from the mean minimum value:
const PercentileMean = 0.995   # 0.5% = 2560 * 0.005 = 12.8 registers around the minimum for the minimum val
const OffsetToBaseline = 0.025 # 2.5 % below baseline seems reasonable
let meanMinVal = calcMinOfPulse(fadc, PercentileMean)
let offset = abs(OffsetToBaseline * (meanMinVal - baseline)) # relative to the 'amplitude'
# ...
(riseStart, riseStop) = findThresholdValue(fadc, xMin, meanMinVal + offset, baseline - offset)
where `meanMinVal + offset` is the lower threshold we need to cross before we start counting the rise or fall times. The `xMin` in that sense is only a reference point. The real calculation of the threshold is based on the minimum of the pulse, using a 0.5% signal width around the minimum.
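For illustration, a simplified version of such a threshold search might look like the following (a sketch under assumptions, not the actual TimepixAnalysis implementation; the real findThresholdValue operates on the FADC data type and handles more edge cases):

# Sketch: walk left from the minimum `xMin` of a (negative) pulse. The rise
# window ends where the signal crosses the lower threshold near the minimum
# and starts where it comes back up to the baseline-side threshold.
proc findThresholdValueSketch(data: seq[float], xMin: int,
                              threshLow, threshBase: float): (int, int) =
  var riseStop = xMin
  while riseStop > 0 and data[riseStop] < threshLow:
    dec riseStop          # cross `meanMinVal + offset`
  var riseStart = riseStop
  while riseStart > 0 and data[riseStart] < threshBase:
    dec riseStart         # cross `baseline - offset`
  result = (riseStart, riseStop)
  # the rise time is then `riseStop - riseStart` in FADC registers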
Further, we changed the data types that store the FADC data in order to change the workflow in which the FADC data is reconstructed.