1. Journal (day to day) extended
1.1. [1/1]
[X] arraymancer NN DSL PR
1.2. [3/5]
[X] write mail to CAST PC about talk at Patras.
[ ] implement MCMC multiple chains starting & mean value of them for limit calculation
[ ] fix segfault when multithreading
[X] compute correct "depth" for raytracing focal spot
[X] split window strongback from signal & position uncertainty. Implemented the axion image w/o window & strongback separately. Still has to be integrated into the limit calculation.
1.3. [2/4]
[X] implement MCMC multiple chains starting & mean value of them for limit calculation
[ ] fix segfault when multithreading
[X] implement strongback / signal split into limit calc
[ ] Timepix3 background rate!
1.4.
Questions for meeting with Klaus today:
- Did you hear something from Igor? -> Nope, he hasn't heard anything either. Apparently Igor is very busy currently. But Klaus doesn't think there will be any showstoppers regarding making the data available.
- For reference distributions and logL morphing: We morph bin-wise on pre-binned data. This leads to jumps in the logL cut value. Maybe it would be a good idea after all not to use a histogram, but a smooth KDE? Unbinned is not directly possible, because we don't have data to compute an unbinned distribution for everything outside the main fluorescence lines! -> Klaus had a good idea here: We can estimate the systematic effect of our binning by moving the bin edges by half a bin width to the left / right and computing the expected limit based on these. If the expected limit changes, we know there is some systematic effect going on. More likely though, the expected limit remains unchanged (within variance) and therefore the systematic impact is smaller than the variance of the limit.
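Klaus' bin-shift check is easy to prototype. A minimal Python sketch (toy data and a hypothetical helper name, not the actual TimepixAnalysis code): shift all bin edges by half a bin width and rebin; in the real check one would then recompute the logL cut value and the expected limit for each of the three binnings.

```python
import numpy as np

def binned_counts(data, edges):
    """Histogram the data with the given bin edges."""
    counts, _ = np.histogram(data, bins=edges)
    return counts

rng = np.random.default_rng(42)
data = rng.normal(10.0, 2.0, size=10_000)  # stand-in for a logL distribution

edges = np.linspace(0.0, 20.0, 41)   # 40 bins of width 0.5
half = 0.5 * (edges[1] - edges[0])

# Shift all edges by half a bin width to the left / right and rebin.
for shift in (-half, 0.0, +half):
    counts = binned_counts(data, edges + shift)
    # In the real check: recompute the logL cut value and the expected
    # limit from each binning, then compare the three expected limits.
    print(f"shift {shift:+.2f}: first bins {counts[:3]}")
```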
About septem veto and line veto: What to do with random coincidences? Is it honest to use those clusters? -> Klaus had an even better idea here: we can estimate the dead time by doing the following:
- read full septemboard data
- shuffle center + outer chip event data around such that we know the two are not correlated
- compute the efficiency of the septem veto.
In theory 0% of all events should trigger either the septem or the line veto. The percentage that does anyway is our random coincidence!
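The shuffling estimate can be sketched as follows (Python toy sketch; the event representation and the veto predicate are made up for illustration, the real vetoes operate on full septemboard events):

```python
import random

def random_coincidence_fraction(center_events, outer_events, veto):
    """Pair each center-chip event with an *unrelated* outer-chip event
    (by shuffling) and count how often the veto still fires. For truly
    uncorrelated events the veto should ideally never fire; the fraction
    that fires anyway is the random-coincidence estimate."""
    shuffled = outer_events[:]
    random.shuffle(shuffled)
    fired = sum(1 for c, o in zip(center_events, shuffled) if veto(c, o))
    return fired / len(center_events)

# Toy example: a 'veto' that fires 5% of the time at random.
random.seed(1)
center = list(range(1000))
outer = list(range(1000))
frac = random_coincidence_fraction(center, outer,
                                   lambda c, o: random.random() < 0.05)
print(frac)
```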
1.5.
All files that were in /tmp/playground (/t/playground) referenced here and in the meeting notes are backed up in ~/development_files/07_03_2023/playground (to make sure we don't lose something / for reference to recreate some in-development behavior etc.).
Just because I'll likely shut down the computer for the first time in 26 days soon and I'm not sure if everything was backed up from there. I believe so, but who knows.
1.6.
Let's rerun the likelihood
after adding the tracking information
back to the H5 files and fixing how the total duration is calculated
from the data files.
Previously we used the total duration in every case, even when excluding tracking information and thus having less time in actuality. 'Fortunately', all background rate plots in the thesis as of today ran without any tracking info in the H5 files, meaning they include the solar tracking itself. Therefore the total duration is correct in those cases.
Run-2 testing (all vetoes):
likelihood -f ~/CastData/data/DataRuns2017_Reco.h5 \
    --h5out /tmp/playground/test_run2.h5 \
    --region crGold \
    --cdlYear 2018 \
    --cdlFile ~/CastData/data/CDL_2019/calibration-cdl-2018.h5 \
    --lineveto --scintiveto --fadcveto --septemveto \
    --calibFile ~/CastData/data/CalibrationRuns2017_Reco.h5
Run-3 testing (all vetoes):
likelihood -f ~/CastData/data/DataRuns2018_Reco.h5 \
    --h5out /tmp/playground/test_run3.h5 \
    --region crGold \
    --cdlYear 2018 \
    --cdlFile ~/CastData/data/CDL_2019/calibration-cdl-2018.h5 \
    --lineveto --scintiveto --fadcveto --septemveto \
    --calibFile ~/CastData/data/CalibrationRuns2018_Reco.h5
The likelihood outputs are here: ./resources/background_rate_test_correct_time_no_tracking/
Background:
plotBackgroundRate \
    /tmp/playground/test_run2.h5 \
    /tmp/playground/test_run3.h5 \
    --combName 2017/18 \
    --combYear 2017 \
    --centerChip 3 \
    --title "Background rate from CAST data, all vetoes" \
    --showNumClusters \
    --showTotalTime \
    --topMargin 1.5 \
    --energyDset energyFromCharge \
    --outfile background_rate_crGold_all_vetoes.pdf \
    --outpath /tmp/playground/ \
    --quiet
The number coming out (see title of the generated plot) is now 3158.57 h, which matches our (new :( ) expectation.
The plot is also in the same directory: ./resources/background_rate_test_correct_time_no_tracking/background_rate_crGold_all_vetoes.pdf
[X] Rerun writeRunList and update the statusAndProgress and thesis tables about times!
[ ] Update data in thesis!
Run-2:
./writeRunList -b ~/CastData/data/DataRuns2017_Reco.h5 -c ~/CastData/data/CalibrationRuns2017_Reco.h5
Type: rtBackground
total duration: 14 weeks, 6 days, 11 hours, 25 minutes, 59 seconds, 97 milliseconds, 615 microseconds, and 921 nanoseconds. In hours: 2507.433082670833
active duration: 2238.783333333333
trackingDuration: 4 days, 10 hours, and 20 seconds. In hours: 106.0055555555556
active tracking duration: 94.12276972527778
nonTrackingDuration: 14 weeks, 2 days, 1 hour, 25 minutes, 39 seconds, 97 milliseconds, 615 microseconds, and 921 nanoseconds. In hours: 2401.427527115278
active background duration: 2144.666241943055

| Solar tracking [h] | Background [h] | Active tracking [h] | Active background [h] | Total time [h] | Active time [h] |
|              106.006 |        2401.43 |             94.1228 |              2144.67 |        2507.43 |        2238.78 |

Type: rtCalibration
total duration: 4 days, 11 hours, 25 minutes, 20 seconds, 453 milliseconds, 596 microseconds, and 104 nanoseconds. In hours: 107.4223482211111
active duration: 2.601388888888889
trackingDuration: 0 nanoseconds. In hours: 0.0
active tracking duration: 0.0
nonTrackingDuration: 4 days, 11 hours, 25 minutes, 20 seconds, 453 milliseconds, 596 microseconds, and 104 nanoseconds. In hours: 107.4223482211111
active background duration: 2.601391883888889

| Solar tracking [h] | Background [h] | Active tracking [h] | Active background [h] | Total time [h] | Active time [h] |
|                    0 |        107.422 |                   0 |              2.60139 |        107.422 |        2.60139 |

Run-3:
./writeRunList -b ~/CastData/data/DataRuns2018_Reco.h5 -c ~/CastData/data/CalibrationRuns2018_Reco.h5
Type: rtBackground
total duration: 7 weeks, 23 hours, 13 minutes, 35 seconds, 698 milliseconds, 399 microseconds, and 775 nanoseconds. In hours: 1199.226582888611
active duration: 1079.598333333333
trackingDuration: 3 days, 2 hours, 17 minutes, and 53 seconds. In hours: 74.29805555555555
active tracking duration: 66.92306679361111
nonTrackingDuration: 6 weeks, 4 days, 20 hours, 55 minutes, 42 seconds, 698 milliseconds, 399 microseconds, and 775 nanoseconds. In hours: 1124.928527333056
active background duration: 1012.677445774444

| Solar tracking [h] | Background [h] | Active tracking [h] | Active background [h] | Total time [h] | Active time [h] |
|              74.2981 |        1124.93 |             66.9231 |              1012.68 |        1199.23 |         1079.6 |

Type: rtCalibration
total duration: 3 days, 15 hours, 3 minutes, 47 seconds, 557 milliseconds, 131 microseconds, and 279 nanoseconds. In hours: 87.06321031416667
active duration: 3.525555555555556
trackingDuration: 0 nanoseconds. In hours: 0.0
active tracking duration: 0.0
nonTrackingDuration: 3 days, 15 hours, 3 minutes, 47 seconds, 557 milliseconds, 131 microseconds, and 279 nanoseconds. In hours: 87.06321031416667
active background duration: 3.525561761944445

| Solar tracking [h] | Background [h] | Active tracking [h] | Active background [h] | Total time [h] | Active time [h] |
|                    0 |        87.0632 |                   0 |              3.52556 |        87.0632 |        3.52556 |
[X] Rerun the createAllLikelihoodCombinations now that tracking information is there. -> Currently running. (Update: We could now combine the below with the one further down that excludes the FADC!)
./createAllLikelihoodCombinations \
    --f2017 ~/CastData/data/DataRuns2017_Reco.h5 \
    --f2018 ~/CastData/data/DataRuns2018_Reco.h5 \
    --c2017 ~/CastData/data/CalibrationRuns2017_Reco.h5 \
    --c2018 ~/CastData/data/CalibrationRuns2018_Reco.h5 \
    --regions crGold --regions crAll \
    --vetoSets "{fkScinti, fkFadc, fkSeptem, fkLineVeto, fkExclusiveLineVeto}" \
    --fadcVetoPercentiles 0.9 --fadcVetoPercentiles 0.95 --fadcVetoPercentiles 0.99 \
    --out /t/lhood_outputs_adaptive_fadc \
    --cdlFile ~/CastData/data/CDL_2019/calibration-cdl-2018.h5 \
    --multiprocessing
Found here: ./resources/lhood_limits_automation_correct_duration/
[ ] Now generate the other likelihood outputs we need for more expected limit cases from sec. [BROKEN LINK: sec:meetings:10_03_23] in StatusAndProgress:
[ ] Calculate expected limits also for the following cases:
  [X] Septem, line combinations without the FADC
  [ ] Best case (lowest row of below) with lnL efficiencies of:
    [ ] 0.7
    [ ] 0.9
The former (septem, line without FADC) will be done using createAllLikelihoodCombinations:
./createAllLikelihoodCombinations \
    --f2017 ~/CastData/data/DataRuns2017_Reco.h5 \
    --f2018 ~/CastData/data/DataRuns2018_Reco.h5 \
    --c2017 ~/CastData/data/CalibrationRuns2017_Reco.h5 \
    --c2018 ~/CastData/data/CalibrationRuns2018_Reco.h5 \
    --regions crGold --regions crAll \
    --vetoSets "{+fkScinti, fkSeptem, fkLineVeto, fkExclusiveLineVeto}" \
    --out /t/lhood_outputs_adaptive_fadc \
    --cdlFile ~/CastData/data/CDL_2019/calibration-cdl-2018.h5 \
    --multiprocessing
For simplicity, this will regenerate some of the files already generated (i.e. the no-vetoes & the scinti cases).
These files are also found here: ./resources/lhood_limits_automation_correct_duration/ together with a rerun of the regular cases above.
Plot the background clusters to see if we indeed have fewer over the whole chip.
plotBackgroundClusters \
    /t/lhood_outputs_adaptive_duplicated_fadc_stuff/likelihood_cdl2018_Run2_crAll_scinti_vetoPercentile_0.99_fadc_vetoPercentile_0.99_septem_vetoPercentile_0.99_line_vetoPercentile_0.99.h5 \
    /t/lhood_outputs_adaptive_duplicated_fadc_stuff/likelihood_cdl2018_Run3_crAll_scinti_vetoPercentile_0.99_fadc_vetoPercentile_0.99_septem_vetoPercentile_0.99_line_vetoPercentile_0.99.h5 \
    --zMax 5 \
    --title "X-ray like clusters of CAST data after all vetoes" \
    --outpath /tmp/playground/ \
    --filterNoisyPixels \
    --filterEnergy 12.0 \
    --suffix "_all_vetoes"
Available here: resources/background_rate_test_correct_time_no_tracking/background_cluster_centers_all_vetoes.pdf
Here we can see that indeed we now have fewer than 10,000 clusters, compared to the ~10,500 we had when using all data (including tracking).
1.7.
Continue from yesterday:
[X] Now generate the other likelihood outputs we need for more expected limit cases from sec. [BROKEN LINK: sec:meetings:10_03_23] in StatusAndProgress: -> All done, path to files below.
[X] Calculate expected limits also for the following cases:
  [X] Septem, line combinations without the FADC (done yesterday)
  [X] Best case (lowest row of below) with lnL efficiencies of:
    [X] 0.7
    [X] 0.9
The latter has now also been implemented as functionality in likelihood and createAllLikelihoodCombinations (adjust signal efficiency from command line and add options to runner). Now run:
./createAllLikelihoodCombinations \
    --f2017 ~/CastData/data/DataRuns2017_Reco.h5 \
    --f2018 ~/CastData/data/DataRuns2018_Reco.h5 \
    --c2017 ~/CastData/data/CalibrationRuns2017_Reco.h5 \
    --c2018 ~/CastData/data/CalibrationRuns2018_Reco.h5 \
    --regions crGold --regions crAll \
    --signalEfficiency 0.7 --signalEfficiency 0.9 \
    --vetoSets "{+fkScinti, +fkFadc, +fkSeptem, fkLineVeto}" \
    --fadcVetoPercentile 0.9 \
    --out /t/lhood_outputs_adaptive_fadc \
    --cdlFile ~/CastData/data/CDL_2019/calibration-cdl-2018.h5 \
    --multiprocessing \
    --dryRun
to reproduce numbers for the best expected limit case together with a lnL signal efficiency of 70 and 90%.
Finally, these files are also in: ./resources/lhood_limits_automation_correct_duration/ which means now all the setups we initially care about are there.
Let's look at the background rate we get from the 70% vs the 90% case:
plotBackgroundRate \
    likelihood_cdl2018_Run2_crGold_signalEff_0.7_scinti_fadc_septem_line_vetoPercentile_0.9.h5 \
    likelihood_cdl2018_Run3_crGold_signalEff_0.7_scinti_fadc_septem_line_vetoPercentile_0.9.h5 \
    likelihood_cdl2018_Run2_crGold_signalEff_0.9_scinti_fadc_septem_line_vetoPercentile_0.9.h5 \
    likelihood_cdl2018_Run3_crGold_signalEff_0.9_scinti_fadc_septem_line_vetoPercentile_0.9.h5 \
    --names "0.7" --names "0.7" \
    --names "0.9" --names "0.9" \
    --centerChip 3 \
    --title "Background rate from CAST data, incl. all vetoes, 70vs90" \
    --showNumClusters --showTotalTime \
    --topMargin 1.5 --energyDset energyFromCharge \
    --outfile background_rate_cast_all_vetoes_70p_90p.pdf \
    --outpath . \
    --quiet
which yielded the following integrated background rates:

| Range [keV] | lnL eff | Rate [cm⁻²·s⁻¹] | Rate/keV [keV⁻¹·cm⁻²·s⁻¹] |
| 0.0 .. 12.0 |     0.7 |      4.1861e-05 |                3.4884e-06 |
| 0.0 .. 12.0 |     0.9 |      9.3221e-05 |                7.7684e-06 |
| 0.5 .. 2.5  |     0.7 |      8.6185e-06 |                4.3093e-06 |
| 0.5 .. 2.5  |     0.9 |      1.4775e-05 |                7.3873e-06 |
| 0.5 .. 5.0  |     0.7 |      1.8116e-05 |                4.0259e-06 |
| 0.5 .. 5.0  |     0.9 |      3.4650e-05 |                7.7000e-06 |
| 0.0 .. 2.5  |     0.7 |      1.4423e-05 |                5.7691e-06 |
| 0.0 .. 2.5  |     0.9 |      2.4273e-05 |                9.7090e-06 |
| 4.0 .. 8.0  |     0.7 |      4.3972e-06 |                1.0993e-06 |
| 4.0 .. 8.0  |     0.9 |      1.1785e-05 |                2.9461e-06 |
| 0.0 .. 8.0  |     0.7 |      2.7790e-05 |                3.4738e-06 |
| 0.0 .. 8.0  |     0.9 |      5.3998e-05 |                6.7497e-06 |
| 2.0 .. 8.0  |     0.7 |      1.3895e-05 |                2.3159e-06 |
| 2.0 .. 8.0  |     0.9 |      3.0956e-05 |                5.1594e-06 |

This shows quite an incredible change, especially in the 8 keV peak!
And in the 4 to 8 keV range we even almost got to the 1e-7 range for the 70% case (however note that in this case the total efficiency is only about 40% or so!).
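The "only about 40% or so" can be reproduced by multiplying the individual signal efficiencies. Only the 0.7 lnL efficiency comes from the text; the veto efficiencies below are rough assumed placeholders, not measured values:

```python
# Rough combined-efficiency estimate for the 70% lnL case.
# Only the lnL efficiency is taken from the text; the veto signal
# efficiencies below are assumed placeholder values for illustration.
lnl_eff    = 0.70   # lnL signal efficiency (from the text)
fadc_eff   = 0.98   # FADC veto signal efficiency (assumed)
septem_eff = 0.75   # septem veto signal efficiency (assumed)
line_eff   = 0.85   # line veto signal efficiency (assumed)

total = lnl_eff * fadc_eff * septem_eff * line_eff
print(f"combined signal efficiency ~ {total:.2f}")
```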
[X] Verify that the signal efficiency used is written to the output logL file
[X] If not, implement it. -> It was not, now implemented.
[X] Read signal efficiency from logL file in mcmclimit and stop using the efficiency including ε in the context. Instead merge with the calculator for veto efficiencies. -> Implemented.
From meeting notes:
[ ] Verify that those elements with lower efficiency indeed have \(R_T = 0\) at higher values! -> Just compute \(R_T\) for all input files and output the result, easiest.
1.8.
Old model from March 2022 ./resources/mlp_trained_march2022.pt :
Test set: Average loss: 0.9876 | Accuracy: 0.988 Cut value: 2.483305978775025
Test set: Average loss: 0.9876 | Accuracy: 0.988 Total efficiency = 0.8999892098945267
Test set: Average loss: 0.9995 | Accuracy: 0.999 Target: Ag-Ag-6kV eff = 0.9778990694345026
Test set: Average loss: 0.9916 | Accuracy: 0.992 Target: Al-Al-4kV eff = 0.9226669690441093
Test set: Average loss: 0.9402 | Accuracy: 0.940 Target: C-EPIC-0.6kV eff = 0.6790938280413843
Test set: Average loss: 0.9941 | Accuracy: 0.994 Target: Cu-EPIC-0.9kV eff = 0.8284986713906112
Test set: Average loss: 0.9871 | Accuracy: 0.987 Target: Cu-EPIC-2kV eff = 0.8687534321801208
Test set: Average loss: 1.0000 | Accuracy: 1.000 Target: Cu-Ni-15kV eff = 0.9939449541284404
Test set: Average loss: 0.9999 | Accuracy: 1.000 Target: Mn-Cr-12kV eff = 0.9938112429087158
Test set: Average loss: 1.0000 | Accuracy: 1.000 Target: Ti-Ti-9kV eff = 0.9947166683932456
New model from yesterday ./resources/mlp_trained_bsz8192_hidden_5000.pt :
Test set: Average loss: 0.9714 | Accuracy: 0.971 Cut value: 1.847297704219818
Test set: Average loss: 0.9714 | Accuracy: 0.971 Total efficiency = 0.8999892098945267
Test set: Average loss: 0.9945 | Accuracy: 0.994 Target: Ag-Ag-6kV eff = 0.9525769506084467
Test set: Average loss: 0.9804 | Accuracy: 0.980 Target: Al-Al-4kV eff = 0.9097403333711305
Test set: Average loss: 0.8990 | Accuracy: 0.899 Target: C-EPIC-0.6kV eff = 0.7640920442383161
Test set: Average loss: 0.9584 | Accuracy: 0.958 Target: Cu-EPIC-0.9kV eff = 0.8211913197519929
Test set: Average loss: 0.9636 | Accuracy: 0.964 Target: Cu-EPIC-2kV eff = 0.8543657331136738
Test set: Average loss: 0.9980 | Accuracy: 0.998 Target: Cu-Ni-15kV eff = 0.981651376146789
Test set: Average loss: 0.9978 | Accuracy: 0.998 Target: Mn-Cr-12kV eff = 0.9807117070654977
Test set: Average loss: 0.9982 | Accuracy: 0.998 Target: Ti-Ti-9kV eff = 0.9776235367243344
Calculated with determineCdlEfficiency.
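The essence of what determineCdlEfficiency reports for a global cut can be sketched like this (Python toy sketch with made-up NN outputs; the real tool works on the actual network outputs per CDL target):

```python
import numpy as np

def per_target_efficiencies(outputs_by_target, global_eff=0.9):
    """Derive the single (global) NN cut keeping `global_eff` of all
    X-rays combined, then report the per-target efficiencies it implies."""
    all_out = np.concatenate(list(outputs_by_target.values()))
    cut = np.quantile(all_out, 1.0 - global_eff)  # keep everything above the cut
    return cut, {t: float((o > cut).mean()) for t, o in outputs_by_target.items()}

rng = np.random.default_rng(0)
# Toy stand-ins: a harder (low-energy) target gets lower mean NN output.
outputs = {
    "Cu-Ni-15kV":   rng.normal(3.0, 1.0, 5000),
    "C-EPIC-0.6kV": rng.normal(1.0, 1.0, 5000),
}
cut, effs = per_target_efficiencies(outputs, global_eff=0.9)
print(cut, effs)
```

This reproduces the qualitative pattern above: a single global cut yields high efficiency for the hard X-ray targets and much lower efficiency for the low-energy ones.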
To get a background rate estimate we use the simple functionality in the NN training tool itself. Note that it needs the total time as an input to scale the data correctly (the hardcoded value is for Run-2; for Run-3 one needs to use the totalTime argument!).
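For reference, this is how the total time enters the normalisation (sketch with made-up numbers, not the real train_ingrid internals):

```python
# How the total (active) time enters the background-rate normalisation.
# All numbers here are made up for illustration; they are not the real
# Run-2 / Run-3 values used by train_ingrid.
n_clusters   = 1200       # clusters passing the cut in the energy range
total_time_h = 2144.67    # active background time in hours (example)
area_cm2     = 0.25       # e.g. a 5 x 5 mm² gold region
de_kev       = 12.0       # width of the energy range in keV

time_s = total_time_h * 3600.0
rate = n_clusters / (time_s * area_cm2 * de_kev)  # keV⁻¹·cm⁻²·s⁻¹
print(f"{rate:.3e} keV⁻¹·cm⁻²·s⁻¹")
```

Using the wrong (too large) total time directly scales the quoted rate down, which is exactly the effect of the hardcoded Run-2 value.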
Background rate at 95% for Run-2 data:
cd ~/CastData/ExternCode/TimepixAnalysis/Tools/NN_playground
./train_ingrid -f ~/CastData/data/DataRuns2017_Reco.h5 --ε 0.95
which yields:
Background rate at 90% for Run-2 data:
cd ~/CastData/ExternCode/TimepixAnalysis/Tools/NN_playground
./train_ingrid -f ~/CastData/data/DataRuns2017_Reco.h5 --ε 0.9
Test set: Average loss: 0.9714 | Accuracy: 0.971 Cut value: 1.847297704219818
Test set: Average loss: 0.9714 | Accuracy: 0.971 Total efficiency = 0.8999892098945267
Test set: Average loss: 0.9945 | Accuracy: 0.994 Target: Ag-Ag-6kV eff = 0.9525769506084467
Test set: Average loss: 0.9804 | Accuracy: 0.980 Target: Al-Al-4kV eff = 0.9097403333711305
Test set: Average loss: 0.8990 | Accuracy: 0.899 Target: C-EPIC-0.6kV eff = 0.7640920442383161
Test set: Average loss: 0.9584 | Accuracy: 0.958 Target: Cu-EPIC-0.9kV eff = 0.8211913197519929
Test set: Average loss: 0.9636 | Accuracy: 0.964 Target: Cu-EPIC-2kV eff = 0.8543657331136738
Test set: Average loss: 0.9980 | Accuracy: 0.998 Target: Cu-Ni-15kV eff = 0.981651376146789
Test set: Average loss: 0.9978 | Accuracy: 0.998 Target: Mn-Cr-12kV eff = 0.9807117070654977
Test set: Average loss: 0.9982 | Accuracy: 0.998 Target: Ti-Ti-9kV eff = 0.9776235367243344
which yields:
Background rate at 80% for Run-2 data:
cd ~/CastData/ExternCode/TimepixAnalysis/Tools/NN_playground
./train_ingrid -f ~/CastData/data/DataRuns2017_Reco.h5 --ε 0.8
Test set: Average loss: 0.9714 | Accuracy: 0.971 Cut value: 3.556154251098633
Test set: Average loss: 0.9714 | Accuracy: 0.971 Total efficiency = 0.799991907420895
Test set: Average loss: 0.9945 | Accuracy: 0.994 Target: Ag-Ag-6kV eff = 0.856209735146743
Test set: Average loss: 0.9804 | Accuracy: 0.980 Target: Al-Al-4kV eff = 0.7955550515931511
Test set: Average loss: 0.8990 | Accuracy: 0.899 Target: C-EPIC-0.6kV eff = 0.6546557260078487
Test set: Average loss: 0.9584 | Accuracy: 0.958 Target: Cu-EPIC-0.9kV eff = 0.7030558015943313
Test set: Average loss: 0.9636 | Accuracy: 0.964 Target: Cu-EPIC-2kV eff = 0.7335529928610653
Test set: Average loss: 0.9980 | Accuracy: 0.998 Target: Cu-Ni-15kV eff = 0.9102752293577981
Test set: Average loss: 0.9978 | Accuracy: 0.998 Target: Mn-Cr-12kV eff = 0.9036616812790098
Test set: Average loss: 0.9982 | Accuracy: 0.998 Target: Ti-Ti-9kV eff = 0.8947477468144618
which yields:
Background rate at 70% for Run-2 data:
cd ~/CastData/ExternCode/TimepixAnalysis/Tools/NN_playground
./train_ingrid -f ~/CastData/data/DataRuns2017_Reco.h5 --ε 0.7
Test set: Average loss: 0.9714 | Accuracy: 0.971 Cut value: 5.098100709915161
Test set: Average loss: 0.9714 | Accuracy: 0.971 Total efficiency = 0.6999946049472634
Test set: Average loss: 0.9945 | Accuracy: 0.994 Target: Ag-Ag-6kV eff = 0.7380100214745884
Test set: Average loss: 0.9804 | Accuracy: 0.980 Target: Al-Al-4kV eff = 0.6931624900782402
Test set: Average loss: 0.8990 | Accuracy: 0.899 Target: C-EPIC-0.6kV eff = 0.5882090617195861
Test set: Average loss: 0.9584 | Accuracy: 0.958 Target: Cu-EPIC-0.9kV eff = 0.6312001771479185
Test set: Average loss: 0.9636 | Accuracy: 0.964 Target: Cu-EPIC-2kV eff = 0.6507413509060955
Test set: Average loss: 0.9980 | Accuracy: 0.998 Target: Cu-Ni-15kV eff = 0.7844036697247706
Test set: Average loss: 0.9978 | Accuracy: 0.998 Target: Mn-Cr-12kV eff = 0.7790613718411552
Test set: Average loss: 0.9982 | Accuracy: 0.998 Target: Ti-Ti-9kV eff = 0.7758209882937946
which yields:
[X] Add background rates for new model using 95%, 90%, 80%
[X] And the outputs as above for the local efficiencies
[ ] Implement selection of global vs. local target efficiency
[ ] Check the efficiency we get when applying the model to the 55Fe calibration data (need same efficiency!). Cross check our helper program that does this for the lnL method
[ ] Check the background rate from the Run-3 data. Is it compatible? Or does it break down?
Practical:
[ ] Move the model logic over to likelihood.nim as a replacement for the fkAggressive veto
  [ ] including the selection of the target efficiency
[X] Clean up veto system in likelihood for better insertion of NN
  [X] add lnL as a form of veto
  [X] add vetoes for MLP and ConvNet (in principle)
  [ ] move NN code to main ingrid module
  [ ] make MLP / ConvNet types accessible in likelihood if compiled on cpp backend (and with CUDA?)
  [ ] add path to model file
  [ ] adjust CutValueInterpolator to make it work for both lnL as well as NN. Idea is the same!
Questions:
[ ] Is there still a place for something like an equivalent of the likelihood morphing? In this case likely based on just interpolating the cut values?
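One possible "morphing equivalent" for the NN cuts would interpolate the per-target cut values between the CDL fluorescence-line energies (Python sketch; the cut values are invented, only the line energies correspond to the usual CDL targets):

```python
import numpy as np

# Interpolate NN cut values between the CDL fluorescence-line energies,
# as a stand-in for the bin-wise likelihood morphing. The energies are the
# usual CDL target lines; the cut values themselves are invented.
target_energies = np.array([0.27, 0.93, 1.49, 2.98, 4.51, 5.89, 8.04])  # keV
target_cuts     = np.array([0.45, 0.61, 0.70, 0.82, 0.88, 0.91, 0.94])

def cut_at(energy_kev):
    """Cut value at an arbitrary cluster energy (clamped at the ends)."""
    return float(np.interp(energy_kev, target_energies, target_cuts))

print(cut_at(3.7))  # lies between the 2.98 keV and 4.51 keV cut values
```

This avoids the jumps between neighboring targets in the same way the logL morphing does, without touching the reference distributions themselves.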
1.9.
TODOs from yesterday:
[X] Add background rates for new model using 95%, 90%, 80%
[X] And the outputs as above for the local efficiencies
[X] Implement selection of global vs. local target efficiency
[X] Check the efficiency we get when applying the model to the 55Fe calibration data (need same efficiency!). Cross check our helper program that does this for the lnL method -> Wrote effective_eff_55fe.nim in NN_playground -> Efficiency in 55Fe data is abysmal! ~40-55% at 95%!
[ ] Check the background rate from the Run-3 data. Is it compatible? Or does it break down?
Practical:
[ ] Move the model logic over to likelihood.nim as a replacement for the fkAggressive veto
  [ ] including the selection of the target efficiency
[X] Clean up veto system in likelihood for better insertion of NN
  [X] add lnL as a form of veto
  [X] add vetoes for MLP and ConvNet (in principle)
  [ ] move NN code to main ingrid module
  [X] make MLP / ConvNet types accessible in likelihood if compiled on cpp backend (and with CUDA?)
  [X] add path to model file
  [X] adjust CutValueInterpolator to make it work for both lnL as well as NN. Idea is the same!
Questions:
[ ] Is there still a place for something like an equivalent of the likelihood morphing? In this case likely based on just interpolating the cut values?
With the refactor of likelihood we can now do things like disable the lnL cut itself and only use the vetoes.
NOTE: All outputs below that were placed in /t/testing can be found in ./resources/nn_testing_outputs/.
For example look at only using the FADC (with a much harsher cut than usual):
likelihood -f ~/CastData/data/DataRuns2017_Reco.h5 \
    --h5out /tmp/testing/test_run2_only_fadc.h5 \
    --region crGold \
    --cdlYear 2018 \
    --cdlFile ~/CastData/data/CDL_2019/calibration-cdl-2018.h5 \
    --fadcveto \
    --vetoPercentile 0.75 \
    --calibFile ~/CastData/data/CalibrationRuns2017_Reco.h5
plotBackgroundRate /t/testing/test_run2_only_fadc.h5 \
    --combName "onlyFadc" \
    --combYear 2017 \
    --centerChip 3 \
    --title "Background rate from CAST data, only FADC veto" \
    --showNumClusters \
    --showTotalTime \
    --topMargin 1.5 \
    --energyDset energyFromCharge \
    --outfile background_rate_only_fadc_veto.pdf \
    --outpath /t/testing/ --quiet
The plot is here: The issue is that there are a couple of runs that have no FADC / in which the FADC was extremely noisy, hence all we really see is the background distribution of those.
But for the more interesting stuff, let's try to create the background rate using the NN veto at 95% efficiency!:
likelihood -f ~/CastData/data/DataRuns2017_Reco.h5 \
    --h5out /tmp/testing/test_run2_only_mlp_0.95.h5 \
    --region crGold \
    --cdlYear 2018 \
    --cdlFile ~/CastData/data/CDL_2019/calibration-cdl-2018.h5 \
    --mlp ~/org/resources/mlp_trained_bsz8192_hidden_5000.pt \
    --nnSignalEff 0.95 \
    --nnCutKind global \
    --calibFile ~/CastData/data/CalibrationRuns2017_Reco.h5
plotBackgroundRate /t/testing/test_run2_only_mlp_0.95.h5 \
    --combName "onlyMLP" \
    --combYear 2017 \
    --centerChip 3 \
    --title "Background rate from CAST data, only MLP @ 95%" \
    --showNumClusters \
    --showTotalTime \
    --topMargin 1.5 \
    --energyDset energyFromCharge \
    --outfile background_rate_only_mlp_0.95.pdf \
    --outpath /t/testing/ --quiet
At 90% global efficiency:
likelihood -f ~/CastData/data/DataRuns2017_Reco.h5 \
    --h5out /tmp/testing/test_run2_only_mlp_0.9.h5 \
    --region crGold \
    --cdlYear 2018 \
    --cdlFile ~/CastData/data/CDL_2019/calibration-cdl-2018.h5 \
    --mlp ~/org/resources/mlp_trained_bsz8192_hidden_5000.pt \
    --nnSignalEff 0.9 \
    --nnCutKind global \
    --calibFile ~/CastData/data/CalibrationRuns2017_Reco.h5
plotBackgroundRate /t/testing/test_run2_only_mlp_0.9.h5 \
    --combName "onlyMLP" \
    --combYear 2017 \
    --centerChip 3 \
    --title "Background rate from CAST data, only MLP @ 90%" \
    --showNumClusters \
    --showTotalTime \
    --topMargin 1.5 \
    --energyDset energyFromCharge \
    --outfile background_rate_only_mlp_0.9.pdf \
    --outpath /t/testing/ --quiet
At 80% global efficiency:
likelihood -f ~/CastData/data/DataRuns2017_Reco.h5 \
    --h5out /tmp/testing/test_run2_only_mlp_0.8.h5 \
    --region crGold \
    --cdlYear 2018 \
    --cdlFile ~/CastData/data/CDL_2019/calibration-cdl-2018.h5 \
    --mlp ~/org/resources/mlp_trained_bsz8192_hidden_5000.pt \
    --nnSignalEff 0.8 \
    --nnCutKind global \
    --calibFile ~/CastData/data/CalibrationRuns2017_Reco.h5
plotBackgroundRate /t/testing/test_run2_only_mlp_0.8.h5 \
    --combName "onlyMLP" \
    --combYear 2017 \
    --centerChip 3 \
    --title "Background rate from CAST data, only MLP @ 80%" \
    --showNumClusters \
    --showTotalTime \
    --topMargin 1.5 \
    --energyDset energyFromCharge \
    --outfile background_rate_only_mlp_0.8.pdf \
    --outpath /t/testing/ --quiet
At 70% global efficiency:
likelihood -f ~/CastData/data/DataRuns2017_Reco.h5 \
    --h5out /tmp/testing/test_run2_only_mlp_0.7.h5 \
    --region crGold \
    --cdlYear 2018 \
    --cdlFile ~/CastData/data/CDL_2019/calibration-cdl-2018.h5 \
    --mlp ~/org/resources/mlp_trained_bsz8192_hidden_5000.pt \
    --nnSignalEff 0.7 \
    --nnCutKind global \
    --calibFile ~/CastData/data/CalibrationRuns2017_Reco.h5
plotBackgroundRate /t/testing/test_run2_only_mlp_0.7.h5 \
    --combName "onlyMLP" \
    --combYear 2017 \
    --centerChip 3 \
    --title "Background rate from CAST data, only MLP @ 70%" \
    --showNumClusters \
    --showTotalTime \
    --topMargin 1.5 \
    --energyDset energyFromCharge \
    --outfile background_rate_only_mlp_0.7.pdf \
    --outpath /t/testing/ --quiet
NOTE: Make sure to set neuralNetCutKind to local in the config file!
And local 95%:
likelihood -f ~/CastData/data/DataRuns2017_Reco.h5 \
    --h5out /tmp/testing/test_run2_only_mlp_local_0.95.h5 \
    --region crGold \
    --cdlYear 2018 \
    --cdlFile ~/CastData/data/CDL_2019/calibration-cdl-2018.h5 \
    --mlp ~/org/resources/mlp_trained_bsz8192_hidden_5000.pt \
    --nnSignalEff 0.95 \
    --nnCutKind local \
    --calibFile ~/CastData/data/CalibrationRuns2017_Reco.h5
plotBackgroundRate /t/testing/test_run2_only_mlp_local_0.95.h5 \
    --combName "onlyMLP" \
    --combYear 2017 \
    --centerChip 3 \
    --title "Background rate from CAST data, only MLP @ local 95%" \
    --showNumClusters \
    --showTotalTime \
    --topMargin 1.5 \
    --energyDset energyFromCharge \
    --outfile background_rate_only_mlp_local_0.95.pdf \
    --outpath /t/testing/ --quiet
And local 80%:
likelihood -f ~/CastData/data/DataRuns2017_Reco.h5 \
    --h5out /tmp/testing/test_run2_only_mlp_local_0.8.h5 \
    --region crGold \
    --cdlYear 2018 \
    --cdlFile ~/CastData/data/CDL_2019/calibration-cdl-2018.h5 \
    --mlp ~/org/resources/mlp_trained_bsz8192_hidden_5000.pt \
    --nnSignalEff 0.8 \
    --nnCutKind local \
    --calibFile ~/CastData/data/CalibrationRuns2017_Reco.h5
plotBackgroundRate /t/testing/test_run2_only_mlp_local_0.8.h5 \
    --combName "onlyMLP" \
    --combYear 2017 \
    --centerChip 3 \
    --title "Background rate from CAST data, only MLP @ local 80%" \
    --showNumClusters \
    --showTotalTime \
    --topMargin 1.5 \
    --energyDset energyFromCharge \
    --outfile background_rate_only_mlp_local_0.8.pdf \
    --outpath /t/testing/ --quiet
1.10.
Continue on from yesterday:
[ ] Implement 55Fe calibration data into the training process. E.g. add about 1000 events per calibration run to the training data as signal target to have a wider distribution of what real X-rays should look like. Hopefully that increases our efficiency!
[ ] It seems like only very few events pass the cuts in readCalibData (e.g. what we use for the effective efficiency check and in mixed-data training). Why is that? Especially for the escape peak, often fewer than 300 events are valid! So little statistics, really? Looking at spectra, e.g. in ~/CastData/ExternCode/TimepixAnalysis/Analysis/ingrid/out/CalibrationRuns2018_Raw_2020-04-28_15-06-54, there really is this little statistics in the escape peak (peaks at less than 50 per bin!). How do these spectra look without any cuts? Are our cuts rubbish? Quick look:
plotData --h5file ~/CastData/data/CalibrationRuns2017_Reco.h5 \
    --runType rtCalibration \
    --config ~/CastData/ExternCode/TimepixAnalysis/Plotting/karaPlot/config.toml \
    --region crSilver \
    --ingrid --separateRuns
seems to support that there simply isn't much more statistics available!
[X] First training with a mixed set of data, using, per run:
- min(500, total) escape peak (after cuts)
- min(500, total) photo peak (after cuts)
- 6000 background
- all CDL
-> all of these are of course shuffled and then split into training and test datasets. The resulting model is in: ./resources/nn_devel_mixing/trained_mlp_mixed_data.pt
The generated plots are in: ./Figs/statusAndProgress/neuralNetworks/development/mixing_data/
Looking at these figures we can see mainly that the ROC curve is extremely 'clean', which is fitting given the separation seen in the training and validation output distributions.
[ ] effective efficiencies for 55Fe
[ ] efficiencies of CDL data!
[X] make loss / accuracy curves log10
[ ] Implement snapshots of the model during training whenever the training and test (or only test) accuracy improves
As discussed in the meeting today (sec. [BROKEN LINK: sec:meetings:17_03_23] in notes), let's rerun all expected limits and add the two new cases, namely:
[ ] redo all expected limit calculations with the following new cases:
  - 0.9 lnL + scinti + FADC@0.98 + line
  - 0.8 lnL + scinti + FADC@0.98 + line
  - εcut: 1.0, 1.2, 1.4, 1.6
The standard cases (lnL 80 + all veto combinations with different FADC settings):
./createAllLikelihoodCombinations \
    --f2017 ~/CastData/data/DataRuns2017_Reco.h5 \
    --f2018 ~/CastData/data/DataRuns2018_Reco.h5 \
    --c2017 ~/CastData/data/CalibrationRuns2017_Reco.h5 \
    --c2018 ~/CastData/data/CalibrationRuns2018_Reco.h5 \
    --regions crGold --regions crAll \
    --vetoSets "{fkLogL, fkScinti, fkFadc, fkSeptem, fkLineVeto, fkExclusiveLineVeto}" \
    --fadcVetoPercentiles 0.9 --fadcVetoPercentiles 0.95 --fadcVetoPercentiles 0.99 \
    --out /t/lhood_outputs_adaptive_fadc \
    --cdlFile ~/CastData/data/CDL_2019/calibration-cdl-2018.h5 \
    --multiprocessing
The no septem veto + different lnL efficiencies:
[X]
0.9 lnL + scinti + FADC@0.98 + line
./createAllLikelihoodCombinations \
    --f2017 ~/CastData/data/DataRuns2017_Reco.h5 \
    --f2018 ~/CastData/data/DataRuns2018_Reco.h5 \
    --c2017 ~/CastData/data/CalibrationRuns2017_Reco.h5 \
    --c2018 ~/CastData/data/CalibrationRuns2018_Reco.h5 \
    --regions crGold --regions crAll \
    --vetoSets "{+fkLogL, +fkScinti, fkSeptem, fkLineVeto, fkExclusiveLineVeto}" \
    --signalEfficiency 0.7 --signalEfficiency 0.9 \
    --out /t/lhood_outputs_adaptive_fadc \
    --cdlFile ~/CastData/data/CDL_2019/calibration-cdl-2018.h5 \
    --multiprocessing
The older case of changing the lnL efficiency, here with different FADC veto percentiles:
[X] add a case with a less extreme FADC veto
./createAllLikelihoodCombinations \
    --f2017 ~/CastData/data/DataRuns2017_Reco.h5 \
    --f2018 ~/CastData/data/DataRuns2018_Reco.h5 \
    --c2017 ~/CastData/data/CalibrationRuns2017_Reco.h5 \
    --c2018 ~/CastData/data/CalibrationRuns2018_Reco.h5 \
    --regions crGold --regions crAll \
    --signalEfficiency 0.7 --signalEfficiency 0.9 \
    --vetoSets "{+fkLogL, +fkScinti, +fkFadc, +fkSeptem, fkLineVeto}" \
    --fadcVetoPercentile 0.95 --fadcVetoPercentile 0.99 \
    --out /t/lhood_outputs_adaptive_fadc \
    --cdlFile ~/CastData/data/CDL_2019/calibration-cdl-2018.h5 \
    --multiprocessing
And finally different eccentricity cutoffs for the line veto:
./createAllLikelihoodCombinations \
    --f2017 ~/CastData/data/DataRuns2017_Reco.h5 \
    --f2018 ~/CastData/data/DataRuns2018_Reco.h5 \
    --c2017 ~/CastData/data/CalibrationRuns2017_Reco.h5 \
    --c2018 ~/CastData/data/CalibrationRuns2018_Reco.h5 \
    --regions crGold --regions crAll \
    --signalEfficiency 0.7 --signalEfficiency 0.8 --signalEfficiency 0.9 \
    --vetoSets "{+fkLogL, +fkScinti, +fkFadc, fkLineVeto}" \
    --eccentricityCutoff 1.0 --eccentricityCutoff 1.2 --eccentricityCutoff 1.4 --eccentricityCutoff 1.6 \
    --out /t/lhood_outputs_adaptive_fadc \
    --cdlFile ~/CastData/data/CDL_2019/calibration-cdl-2018.h5 \
    --multiprocessing
The output H5 files will be placed in: ./resources/lhood_limits_automation_with_nn_support
1.11.
With the likelihood output files generated overnight in resources/lhood_limits_automation_with_nn_support it's now time to let the limits run.
I noticed something else was missing from these files: I forgot to re-add the actual vetoes in use to the output (because those were written manually).
[X] add flags to toLikelihoodContext to auto-serialize them
[X] update the mcmc limit code to use the new serialized data for veto efficiency and veto usage
[X] rerun all limits with all the different setups.
[X] update runLimits to be smarter about what has already been done. In principle we can now quit the limit calculation and it should continue automatically on a restart (with the last file worked on!)
The script we actually ran today. This will be part of the thesis (or a variation thereof).
#!/usr/bin/zsh
cd ~/CastData/ExternCode/TimepixAnalysis/Analysis/
./createAllLikelihoodCombinations \
    --f2017 ~/CastData/data/DataRuns2017_Reco.h5 \
    --f2018 ~/CastData/data/DataRuns2018_Reco.h5 \
    --c2017 ~/CastData/data/CalibrationRuns2017_Reco.h5 \
    --c2018 ~/CastData/data/CalibrationRuns2018_Reco.h5 \
    --regions crGold --regions crAll \
    --vetoSets "{fkLogL, fkScinti, fkFadc, fkSeptem, fkLineVeto, fkExclusiveLineVeto}" \
    --fadcVetoPercentiles 0.9 --fadcVetoPercentiles 0.95 --fadcVetoPercentiles 0.99 \
    --out /t/lhood_outputs_adaptive_fadc \
    --cdlFile ~/CastData/data/CDL_2019/calibration-cdl-2018.h5 \
    --multiprocessing \
    --jobs 12
./createAllLikelihoodCombinations \
    --f2017 ~/CastData/data/DataRuns2017_Reco.h5 \
    --f2018 ~/CastData/data/DataRuns2018_Reco.h5 \
    --c2017 ~/CastData/data/CalibrationRuns2017_Reco.h5 \
    --c2018 ~/CastData/data/CalibrationRuns2018_Reco.h5 \
    --regions crGold --regions crAll \
    --vetoSets "{+fkLogL, +fkScinti, fkSeptem, fkLineVeto, fkExclusiveLineVeto}" \
    --signalEfficiency 0.7 --signalEfficiency 0.9 \
    --out /t/lhood_outputs_adaptive_fadc \
    --cdlFile ~/CastData/data/CDL_2019/calibration-cdl-2018.h5 \
    --multiprocessing \
    --jobs 12
./createAllLikelihoodCombinations \
    --f2017 ~/CastData/data/DataRuns2017_Reco.h5 \
    --f2018 ~/CastData/data/DataRuns2018_Reco.h5 \
    --c2017 ~/CastData/data/CalibrationRuns2017_Reco.h5 \
    --c2018 ~/CastData/data/CalibrationRuns2018_Reco.h5 \
    --regions crGold --regions crAll \
    --signalEfficiency 0.7 --signalEfficiency 0.9 \
    --vetoSets "{+fkLogL, +fkScinti, +fkFadc, +fkSeptem, fkLineVeto}" \
    --fadcVetoPercentile 0.95 --fadcVetoPercentile 0.99 \
    --out /t/lhood_outputs_adaptive_fadc \
    --cdlFile ~/CastData/data/CDL_2019/calibration-cdl-2018.h5 \
    --multiprocessing \
    --jobs 12
./createAllLikelihoodCombinations \
    --f2017 ~/CastData/data/DataRuns2017_Reco.h5 \
    --f2018 ~/CastData/data/DataRuns2018_Reco.h5 \
    --c2017 ~/CastData/data/CalibrationRuns2017_Reco.h5 \
    --c2018 ~/CastData/data/CalibrationRuns2018_Reco.h5 \
    --regions crGold --regions crAll \
    --signalEfficiency 0.7 --signalEfficiency 0.8 --signalEfficiency 0.9 \
    --vetoSets "{+fkLogL, +fkScinti, +fkFadc, fkLineVeto}" \
    --eccentricityCutoff 1.0 --eccentricityCutoff 1.2 --eccentricityCutoff 1.4 --eccentricityCutoff 1.6 \
    --out /t/lhood_outputs_adaptive_fadc \
    --cdlFile ~/CastData/data/CDL_2019/calibration-cdl-2018.h5 \
    --multiprocessing \
    --jobs 12
(all in ./resources/lhood_limits_automation_with_nn_support/)
And currently running:
./runLimits --path ~/org/resources/lhood_limits_automation_with_nn_support --nmc 1000
Train NN:
./train_ingrid ~/CastData/data/CalibrationRuns2017_Reco.h5 \
    ~/CastData/data/CalibrationRuns2018_Reco.h5 \
    --back ~/CastData/data/DataRuns2017_Reco.h5 \
    --back ~/CastData/data/DataRuns2018_Reco.h5 \
    --ε 0.95 \
    --modelOutpath /tmp/trained_mlp_mixed_data.pt
NOTE: One thing I just realized: the accuracy we print is of course based on the actual prediction of the network, i.e. which of the two output neurons has the maximum value. So maybe our approach of only looking at one neuron and adjusting a cut based on it is just dumb, and the network is actually much better than we think?
The numbers we see as accuracy actually make sense. Consider the predictBackground output:
Pred set: Average loss: 0.0169 | Accuracy: 0.9956 p inds len 1137 compared to all 260431
The 1137 clusters left after the cuts correspond exactly to 99.56% (this is based on the network's real prediction, not the output plus a cut value). The question here is: at what signal efficiency is this? From the CDL data it would seem to be at ~99%.
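The arithmetic behind that statement, as a quick sanity check (numbers taken from the output above):

```python
# 1137 of 260431 background clusters survive the cut; the printed accuracy
# is then the fraction of background correctly rejected.
kept = 1137
total = 260431
background_rejection = 1 - kept / total  # ~ 0.9956, matching the printed accuracy
```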
The network we trained today, including checkpoints is here: ./resources/nn_devel_mixing/18_03_23/
[X]
Check what efficiency we get for calibration data instead of background -> Yes, it is also above 99% efficiency. So we get a 1e-5 background rate at 99% signal efficiency. At least that's not too bad.
The limit results end up in ./resources/lhood_limits_automation_with_nn_support/limits/ with the logL output files in the lhood folder. We'll continue later with the processed.txt file as a guide there.
1.12.
The expected limits for resources/lhood_limits_automation_with_nn_support/limits/ are still running, because our processed continuation check was incorrect (it looked at the full path and did not actually skip files!).
Back to the NN: Let's look at the output of the network for both output neurons. Are they really essentially a mirror of one another?
predictAll in train_ingrid creates a plot of the different data kinds (55Fe, CDL, background) and each neuron's output prediction.
This yields the following plot by running:
./train_ingrid ~/CastData/data/CalibrationRuns2017_Reco.h5 \
    ~/CastData/data/CalibrationRuns2018_Reco.h5 \
    --back ~/CastData/data/DataRuns2017_Reco.h5 \
    --back ~/CastData/data/DataRuns2018_Reco.h5 \
    --ε 0.95 \
    --modelOutpath ~/org/resources/nn_devel_mixing/18_03_23/trained_mlp_mixed_datacheckpoint_epoch_95000_loss_0.0117_acc_0.9974.pt \
    --predict
where we can see the following points:
- the two neurons are almost perfect mirrors, but not exactly
- selecting the argmax of the two neurons almost certainly gives us the neuron with a positive value, due to the mirror nature around 0. It could be different (e.g. both neurons giving a positive or a negative value), but looking at the data this does not seem to happen (or if so, only very rarely).
- a cut value of 0 should reproduce pretty much exactly the standard neural network prediction of picking the argmax
Question: Can the usage of both neurons be beneficial given the small but existing differences in the distributions? Not sure how, if so.
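The argmax vs. cut-at-0 equivalence (and the rare case where it breaks) can be sketched with a few hypothetical output pairs:

```python
# With two (roughly) mirrored output neurons, picking the argmax is almost
# the same as cutting the signal neuron's output at 0.
# Hypothetical (signal_neuron, background_neuron) output pairs:
outputs = [(2.3, -2.1), (-1.7, 1.9), (0.4, -0.3), (-0.2, 0.1)]

argmax_pred = [0 if s > b else 1 for s, b in outputs]  # 0 = signal, 1 = background
cut_pred = [0 if s > 0.0 else 1 for s, _ in outputs]   # cut the signal neuron at 0

# The two predictions only differ when both neurons share a sign, e.g.:
both_pos = (0.5, 0.7)  # argmax -> background, but the 0-cut -> signal
```

On mirrored outputs the two prediction rules agree on every pair; only same-sign pairs like `both_pos` break the equivalence, which matches the observation that such cases are very rare in the data.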
An earlier checkpoint (the one before the extreme jump in the loss value based on the loss figure; need to regenerate it, but it is similar, except as log10) yields the following neuron output:
./train_ingrid ~/CastData/data/CalibrationRuns2017_Reco.h5 \
    ~/CastData/data/CalibrationRuns2018_Reco.h5 \
    --back ~/CastData/data/DataRuns2017_Reco.h5 \
    --back ~/CastData/data/DataRuns2018_Reco.h5 \
    --ε 0.95 \
    --modelOutpath ~/org/resources/nn_devel_mixing/18_03_23/trained_mlp_mixed_datacheckpoint_epoch_65000_loss_0.0103_acc_0.9977.pt \
    --predict
We can clearly see that at this stage in training the two types of signal data are predicted quite differently! In that sense the latest model is actually much more like what we want, i.e. same prediction for all different kinds of X-rays!
What does the case with the worst loss look like?
./train_ingrid ~/CastData/data/CalibrationRuns2017_Reco.h5 \
    ~/CastData/data/CalibrationRuns2018_Reco.h5 \
    --back ~/CastData/data/DataRuns2017_Reco.h5 \
    --back ~/CastData/data/DataRuns2018_Reco.h5 \
    --ε 0.95 \
    --modelOutpath ~/org/resources/nn_devel_mixing/18_03_23/trained_mlp_mixed_datacheckpoint_epoch_70000_loss_0.9683_acc_0.9977.pt \
    --predict
Interestingly, essentially the same. But the accuracy is the same as before; only the loss is different. Not sure why that might be.
Training the network again after the charge bug was fixed:
./train_ingrid ~/CastData/data/CalibrationRuns2017_Reco.h5 \
    ~/CastData/data/CalibrationRuns2018_Reco.h5 \
    --back ~/CastData/data/DataRuns2017_Reco.h5 \
    --back ~/CastData/data/DataRuns2018_Reco.h5 \
    --ε 0.95 \
    --modelOutpath /t/nn_training/trained_model_charge_cut_bug_fixed.pt
which are stored here:
./resources/nn_devel_mixing/19_03_23_charge_bug_fixed/
with the generated plots:
./Figs/statusAndProgress/neuralNetworks/development/charge_cut_bug_fixed
Looking at the loss plot, at around epoch 83000 the training data
started to outpace the test data (test didn't get any worse though and
test accuracy improved slightly).
Also the all_prediction.pdf plot showing how the CDL and 55Fe data are predicted is interesting. The CDL data is skewed significantly more to the right than the 55Fe data, explaining the again prevalent difference in 55Fe efficiency for a given CDL efficiency:
./effective_eff_55fe \
    -f ~/CastData/data/CalibrationRuns2017_Reco.h5 \
    --model ~/org/resources/nn_devel_mixing/19_03_23_charge_bug_fixed/trained_model_charge_cut_bug_fixedcheckpoint_epoch_100000_loss_0.0102_acc_0.9978.pt \
    --ε 0.95
Run: 83 for target: signal Keeping : 759 of 916 = 0.8286026200873362
Run: 88 for target: signal Keeping : 763 of 911 = 0.8375411635565313
Run: 93 for target: signal Keeping : 640 of 787 = 0.8132147395171537
Run: 96 for target: signal Keeping : 4591 of 5635 = 0.8147293700088731
Run: 102 for target: signal Keeping : 1269 of 1588 = 0.7991183879093199
Run: 108 for target: signal Keeping : 2450 of 3055 = 0.8019639934533551
Run: 110 for target: signal Keeping : 1244 of 1554 = 0.8005148005148005
Run: 116 for target: signal Keeping : 1404 of 1717 = 0.8177052999417589
Run: 118 for target: signal Keeping : 1351 of 1651 = 0.8182919442761962
Run: 120 for target: signal Keeping : 2784 of 3413 = 0.8157046586580721
Run: 122 for target: signal Keeping : 4670 of 5640 = 0.8280141843971631
Run: 126 for target: signal Keeping : 2079 of 2596 = 0.8008474576271186
Run: 128 for target: signal Keeping : 6379 of 7899 = 0.8075705785542474
Run: 145 for target: signal Keeping : 2950 of 3646 = 0.8091058694459682
Run: 147 for target: signal Keeping : 1670 of 2107 = 0.7925961082107261
Run: 149 for target: signal Keeping : 1536 of 1936 = 0.7933884297520661
Run: 151 for target: signal Keeping : 1454 of 1839 = 0.790647090810223
Run: 153 for target: signal Keeping : 1515 of 1908 = 0.7940251572327044
Run: 155 for target: signal Keeping : 1386 of 1777 = 0.7799662352279122
Run: 157 for target: signal Keeping : 1395 of 1817 = 0.7677490368739681
Run: 159 for target: signal Keeping : 2805 of 3634 = 0.7718767198679142
Run: 161 for target: signal Keeping : 2825 of 3632 = 0.7778083700440529
Run: 163 for target: signal Keeping : 1437 of 1841 = 0.7805540467137425
Run: 165 for target: signal Keeping : 3071 of 3881 = 0.7912909044060809
Run: 167 for target: signal Keeping : 1557 of 2008 = 0.775398406374502
Run: 169 for target: signal Keeping : 4644 of 5828 = 0.7968428277282087
Run: 171 for target: signal Keeping : 1561 of 1956 = 0.7980572597137015
Run: 173 for target: signal Keeping : 1468 of 1820 = 0.8065934065934066
Run: 175 for target: signal Keeping : 1602 of 2015 = 0.7950372208436725
Run: 177 for target: signal Keeping : 1557 of 1955 = 0.7964194373401534
Run: 179 for target: signal Keeping : 1301 of 1671 = 0.7785757031717534
Run: 181 for target: signal Keeping : 2685 of 3426 = 0.7837127845884413
Run: 183 for target: signal Keeping : 2821 of 3550 = 0.7946478873239436
Run: 185 for target: signal Keeping : 3063 of 3856 = 0.7943464730290456
Run: 187 for target: signal Keeping : 2891 of 3616 = 0.7995022123893806
This is for a local efficiency. So once again, 95% in CDL corresponds to about 80% in 55Fe. Not ideal.
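The logic behind these numbers can be illustrated with a small sketch. Integer toy scores stand in for network outputs, and `cut_for_efficiency` is a hypothetical helper, not code from effective_eff_55fe:

```python
# Sketch of the effective-efficiency logic: fix the cut on one dataset (CDL)
# at a target efficiency, then apply the *same* cut to another (55Fe).
def cut_for_efficiency(scores, eff):
    # Choose a cut such that a fraction `eff` of scores lies at or above it.
    s = sorted(scores, reverse=True)
    keep = round(eff * len(s))
    return s[keep - 1]

def efficiency(scores, cut):
    return sum(x >= cut for x in scores) / len(scores)

cdl = list(range(1, 21))      # toy "CDL" scores
fe55 = [x - 3 for x in cdl]   # same shape, shifted to lower scores
cut = cut_for_efficiency(cdl, 0.95)
eff_fe = efficiency(fe55, cut)  # below 0.95, mirroring the runs above
```

Because the 55Fe score distribution sits slightly to the left of the CDL one, a cut tuned to 95% on CDL keeps a smaller fraction of the 55Fe clusters, which is exactly the 95% → ~80% pattern in the table.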
Let's try to train a network that also includes the total charge, so it has some idea of the gas gain in the events.
Otherwise we leave the settings as is:
./train_ingrid \
    ~/CastData/data/CalibrationRuns2017_Reco.h5 \
    ~/CastData/data/CalibrationRuns2018_Reco.h5 \
    --back ~/CastData/data/DataRuns2017_Reco.h5 \
    --back ~/CastData/data/DataRuns2018_Reco.h5 \
    --ε 0.95 \
    --modelOutpath /t/nn_training/trained_model_incl_totalCharge.pt
Interestingly, when including the total charge, the loss on the test data remains lower than on the training set! Models: ./resources/nn_devel_mixing/19_03_23_with_total_charge/ and plots: ./Figs/statusAndProgress/neuralNetworks/development/with_total_charge/
Looking at the total charge, we see essentially the same behavior of the CDL and 55Fe data. The background distribution has changed a bit.
We could attempt to change the definition of our loss function. Currently we in no way enforce that our result should be close to our targets [1, 0] and [0, 1]. Using an MSE loss, for example, would make sure of that.
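A toy comparison (plain Python, hypothetical logits) of why the two losses behave differently with respect to the targets: cross entropy only cares about the relative size of the two outputs (via softmax), while MSE also pulls the raw outputs toward the literal target values.

```python
import math

def softmax(z):
    m = max(z)
    e = [math.exp(x - m) for x in z]
    s = sum(e)
    return [x / s for x in e]

def cross_entropy(logits, target):
    p = softmax(logits)
    return -sum(t * math.log(q) for t, q in zip(target, p) if t > 0)

def mse(out, target):
    return sum((o - t) ** 2 for o, t in zip(out, target)) / len(out)

target = [1.0, 0.0]
small = [2.0, -2.0]    # correct ordering, outputs not far from [1, 0]
large = [20.0, -20.0]  # same ordering, much larger magnitude

# Cross entropy rewards scaling up the logits further and further...
assert cross_entropy(large, target) < cross_entropy(small, target)
# ...while MSE penalizes outputs that drift away from the literal [1, 0] target.
assert mse(large, target) > mse(small, target)
```

This is consistent with the very wide output distributions seen with the cross entropy training: nothing stops the network from pushing the outputs to large magnitudes.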
Now training with MSE loss. -> Couldn't get anything sensible out of the MSE loss. Chatted with BingChat and it couldn't quite help me (different learning rates etc.), but it did suggest trying L1 loss (mean absolute error), which I am running with now.
L1 loss: ./resources/nn_devel_mixing/19_03_23_l1_loss/ ./Figs/statusAndProgress/neuralNetworks/development/l1_loss/
The all-prediction plot is interesting. We see the same-ish behavior in this case as with the cross entropy loss. In the training dataset we can even more clearly see two distinct peaks. However, the effective efficiencies in the 55Fe data especially are all over the place:
./effective_eff_55fe \
    -f ~/CastData/data/CalibrationRuns2017_Reco.h5 \
    --model ~/org/resources/nn_devel_mixing/19_03_23_l1_loss/trained_model_incl_totalCharge_l1_losscheckpoint_epoch_100000_loss_0.0157_acc_0.9971.pt \
    --ε 0.95
Run: 83 for target: signal Keeping : 781 of 916 = 0.8526200873362445
Run: 88 for target: signal Keeping : 827 of 911 = 0.9077936333699231
Run: 93 for target: signal Keeping : 612 of 787 = 0.7776365946632783
Run: 96 for target: signal Keeping : 4700 of 5635 = 0.8340727595385981
Run: 102 for target: signal Keeping : 1292 of 1588 = 0.8136020151133502
Run: 108 for target: signal Keeping : 2376 of 3055 = 0.7777414075286416
Run: 110 for target: signal Keeping : 1222 of 1554 = 0.7863577863577863
Run: 116 for target: signal Keeping : 1453 of 1717 = 0.8462434478741991
Run: 118 for target: signal Keeping : 1376 of 1651 = 0.8334342822531798
Run: 120 for target: signal Keeping : 2966 of 3413 = 0.8690301787283914
Run: 122 for target: signal Keeping : 5049 of 5640 = 0.8952127659574468
Run: 126 for target: signal Keeping : 2157 of 2596 = 0.8308936825885979
Run: 128 for target: signal Keeping : 6546 of 7899 = 0.8287124952525636
Run: 145 for target: signal Keeping : 2729 of 3646 = 0.7484914975315414
Run: 147 for target: signal Keeping : 1517 of 2107 = 0.7199810156620788
Run: 149 for target: signal Keeping : 1152 of 1936 = 0.5950413223140496
Run: 151 for target: signal Keeping : 1135 of 1839 = 0.6171832517672649
Run: 153 for target: signal Keeping : 1091 of 1908 = 0.5718029350104822
Run: 155 for target: signal Keeping : 974 of 1777 = 0.5481148002250985
Run: 157 for target: signal Keeping : 978 of 1817 = 0.5382498624105668
Run: 159 for target: signal Keeping : 2083 of 3634 = 0.5731975784259769
Run: 161 for target: signal Keeping : 2152 of 3632 = 0.5925110132158591
Run: 163 for target: signal Keeping : 1264 of 1841 = 0.6865833785985878
Run: 165 for target: signal Keeping : 2929 of 3881 = 0.7547023962896161
Run: 167 for target: signal Keeping : 1467 of 2008 = 0.7305776892430279
Run: 169 for target: signal Keeping : 4458 of 5828 = 0.7649279341111874
Run: 171 for target: signal Keeping : 1495 of 1956 = 0.7643149284253579
Run: 173 for target: signal Keeping : 1401 of 1820 = 0.7697802197802198
Run: 175 for target: signal Keeping : 1566 of 2015 = 0.7771712158808933
Run: 177 for target: signal Keeping : 1561 of 1955 = 0.7984654731457801
Run: 179 for target: signal Keeping : 1105 of 1671 = 0.6612806702573309
Run: 181 for target: signal Keeping : 2425 of 3426 = 0.7078225335668418
Run: 183 for target: signal Keeping : 2543 of 3550 = 0.716338028169014
Run: 185 for target: signal Keeping : 3033 of 3856 = 0.7865663900414938
Run: 187 for target: signal Keeping : 2712 of 3616 = 0.75
So definitely worse in that aspect.
Let's try cross entropy again, but with L1 or L2 regularization.
./train_ingrid \
    ~/CastData/data/CalibrationRuns2017_Reco.h5 \
    ~/CastData/data/CalibrationRuns2018_Reco.h5 \
    --back ~/CastData/data/DataRuns2017_Reco.h5 \
    --back ~/CastData/data/DataRuns2018_Reco.h5 \
    --ε 0.95 \
    --modelOutpath ~/org/resources/nn_devel_mixing/19_03_23_l2_regularization/trained_model_incl_totalCharge_l2_regularization.pt
First attempt with:
SGDOptions.init(0.005).momentum(0.2).weight_decay(0.01)
does not really converge. I guess that weight decay is too large.. :) Trying again with 0.001. This seems to work better.
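For reference, the weight_decay parameter amounts to adding an L2 penalty gradient to each update. A hand-rolled sketch of a single step (toy numbers; not the libtorch internals):

```python
def sgd_step(w, grad, lr, weight_decay=0.0, momentum=0.0, velocity=None):
    # Plain SGD with L2 regularization ("weight decay"): the effective
    # gradient is grad + weight_decay * w, optionally with momentum.
    if velocity is None:
        velocity = [0.0] * len(w)
    new_v, new_w = [], []
    for wi, gi, vi in zip(w, grad, velocity):
        g = gi + weight_decay * wi
        v = momentum * vi + g
        new_v.append(v)
        new_w.append(wi - lr * v)
    return new_w, new_v

w = [1.0, -2.0]
g = [0.0, 0.0]  # zero loss gradient: only the decay term acts
w2, _ = sgd_step(w, g, lr=0.005, weight_decay=0.01)
# Even with zero gradient the weights shrink toward 0, which is why a too
# large weight_decay can keep the network from converging at all.
```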
Oh, it broke between epoch 10000 and 15000, but got better again at 20000 (though worse than before). Afterwards it stayed on a plateau above the previous loss until the end. Also the distributions of the outputs are quite different now.
./train_ingrid ~/CastData/data/CalibrationRuns2017_Reco.h5 \
    ~/CastData/data/CalibrationRuns2018_Reco.h5 \
    --back ~/CastData/data/DataRuns2017_Reco.h5 \
    --back ~/CastData/data/DataRuns2018_Reco.h5 \
    --ε 0.95 \
    --modelOutpath ~/org/resources/nn_devel_mixing/19_03_23_l2_regularization/trained_model_incl_totalCharge_l2_regularizationcheckpoint_epoch_100000_loss_0.0261_acc_0.9963.pt \
    --predict \
    --plotPath ~/org/Figs/statusAndProgress/neuralNetworks/development/l2_regularization
./train_ingrid ~/CastData/data/CalibrationRuns2017_Reco.h5 \
    ~/CastData/data/CalibrationRuns2018_Reco.h5 \
    --back ~/CastData/data/DataRuns2017_Reco.h5 \
    --back ~/CastData/data/DataRuns2018_Reco.h5 \
    --ε 0.95 \
    --modelOutpath ~/org/resources/nn_devel_mixing/19_03_23_l2_regularization/trained_model_incl_totalCharge_l2_regularizationcheckpoint_epoch_10000_loss_0.0156_acc_0.9964.pt \
    --predict \
    --plotPath ~/org/Figs/statusAndProgress/neuralNetworks/development/l2_regularization
./resources/nn_devel_mixing/19_03_23_l2_regularization/
./Figs/statusAndProgress/neuralNetworks/development/l2_regularization/
Looking at the prediction of the final checkpoint (*_final_checkpoint.pdf) we see that we still have the same kind of shift in the data. However, after epoch 10000 we see a much clearer overlap between the two (but likely also more background?).
Still interesting; maybe L2 regularization is useful if tuned to a good parameter.
Let's look at the effective efficiencies of this particular checkpoint and compare with the very last one.
First the last:
./effective_eff_55fe \
    -f ~/CastData/data/CalibrationRuns2017_Reco.h5 \
    --model ~/org/resources/nn_devel_mixing/19_03_23_l2_regularization/trained_model_incl_totalCharge_l2_regularizationcheckpoint_epoch_100000_loss_0.0261_acc_0.9963.pt \
    --ε 0.95
Run: 118 for target: signal Keeping : 1357 of 1651 = 0.8219261053906723
Run: 120 for target: signal Keeping : 2828 of 3413 = 0.8285965426311164
Run: 122 for target: signal Keeping : 4668 of 5640 = 0.8276595744680851
Run: 126 for target: signal Keeping : 2097 of 2596 = 0.8077812018489985
Run: 128 for target: signal Keeping : 6418 of 7899 = 0.8125079123939739
Run: 145 for target: signal Keeping : 2960 of 3646 = 0.811848601206802
Run: 147 for target: signal Keeping : 1731 of 2107 = 0.8215472235405791
Run: 149 for target: signal Keeping : 1588 of 1936 = 0.8202479338842975
Run: 151 for target: signal Keeping : 1482 of 1839 = 0.8058727569331158
Run: 153 for target: signal Keeping : 1565 of 1908 = 0.820230607966457
Run: 155 for target: signal Keeping : 1434 of 1777 = 0.806978052898143
Run: 157 for target: signal Keeping : 1457 of 1817 = 0.8018712162905889
Run: 159 for target: signal Keeping : 2914 of 3634 = 0.8018712162905889
Run: 161 for target: signal Keeping : 2929 of 3632 = 0.8064427312775331
Run: 163 for target: signal Keeping : 1474 of 1841 = 0.8006518196632265
Run: 165 for target: signal Keeping : 3134 of 3881 = 0.8075238340633857
Run: 167 for target: signal Keeping : 1609 of 2008 = 0.8012948207171314
Run: 169 for target: signal Keeping : 4738 of 5828 = 0.8129718599862732
Run: 171 for target: signal Keeping : 1591 of 1956 = 0.8133946830265849
Run: 173 for target: signal Keeping : 1465 of 1820 = 0.804945054945055
Run: 175 for target: signal Keeping : 1650 of 2015 = 0.8188585607940446
Run: 177 for target: signal Keeping : 1576 of 1955 = 0.8061381074168797
Run: 179 for target: signal Keeping : 1339 of 1671 = 0.8013165769000599
Run: 181 for target: signal Keeping : 2740 of 3426 = 0.7997664915353182
Run: 183 for target: signal Keeping : 2856 of 3550 = 0.8045070422535211
Run: 185 for target: signal Keeping : 3146 of 3856 = 0.8158713692946058
Run: 187 for target: signal Keeping : 2962 of 3616 = 0.8191371681415929
Once again in the ballpark of 80% while at 95% for CDL. And for epoch 10,000?
./effective_eff_55fe \
    -f ~/CastData/data/CalibrationRuns2017_Reco.h5 \
    --model ~/org/resources/nn_devel_mixing/19_03_23_l2_regularization/trained_model_incl_totalCharge_l2_regularizationcheckpoint_epoch_10000_loss_0.0156_acc_0.9964.pt \
    --ε 0.95
Run: 118 for target: signal Keeping : 1357 of 1651 = 0.8219261053906723
Run: 120 for target: signal Keeping : 2790 of 3413 = 0.8174626428362145
Run: 122 for target: signal Keeping : 4626 of 5640 = 0.8202127659574469
Run: 126 for target: signal Keeping : 2092 of 2596 = 0.8058551617873652
Run: 128 for target: signal Keeping : 6377 of 7899 = 0.8073173819470819
Run: 145 for target: signal Keeping : 2974 of 3646 = 0.8156884256719693
Run: 147 for target: signal Keeping : 1735 of 2107 = 0.8234456573327005
Run: 149 for target: signal Keeping : 1606 of 1936 = 0.8295454545454546
Run: 151 for target: signal Keeping : 1485 of 1839 = 0.8075040783034257
Run: 153 for target: signal Keeping : 1575 of 1908 = 0.8254716981132075
Run: 155 for target: signal Keeping : 1444 of 1777 = 0.8126055149127743
Run: 157 for target: signal Keeping : 1478 of 1817 = 0.8134287286736379
Run: 159 for target: signal Keeping : 2932 of 3634 = 0.8068244358833242
Run: 161 for target: signal Keeping : 2942 of 3632 = 0.8100220264317181
Run: 163 for target: signal Keeping : 1484 of 1841 = 0.8060836501901141
Run: 165 for target: signal Keeping : 3134 of 3881 = 0.8075238340633857
Run: 167 for target: signal Keeping : 1612 of 2008 = 0.8027888446215139
Run: 169 for target: signal Keeping : 4700 of 5828 = 0.8064516129032258
Run: 171 for target: signal Keeping : 1582 of 1956 = 0.8087934560327198
Run: 173 for target: signal Keeping : 1469 of 1820 = 0.8071428571428572
Run: 175 for target: signal Keeping : 1630 of 2015 = 0.8089330024813896
Run: 177 for target: signal Keeping : 1571 of 1955 = 0.8035805626598466
Run: 179 for target: signal Keeping : 1366 of 1671 = 0.817474566128067
Run: 181 for target: signal Keeping : 2734 of 3426 = 0.7980151780502043
Run: 183 for target: signal Keeping : 2858 of 3550 = 0.8050704225352112
Run: 185 for target: signal Keeping : 3122 of 3856 = 0.8096473029045643
Run: 187 for target: signal Keeping : 2937 of 3616 = 0.8122234513274337
Interesting! Despite the much nicer overlap in the prediction at this checkpoint, the end result is not that different. Not quite sure what to make of that.
Next we try Adam, starting with this:
var optimizer = Adam.init(
  model.parameters(),
  AdamOptions.init(0.005)
)
./resources/nn_devel_mixing/19_03_23_adam_optim/
./Figs/statusAndProgress/neuralNetworks/development/adam_optim/
-> The outputs are very funny. Extremely wide; we need a --clampOutput of O(10000) or more. CDL and 55Fe are quite separated though!
Enough for today.
[X] Try L1 and L2 regularization of the network (weight decay parameter)
[ ] Try L1 regularization
[ ] Try Adam optimizer
[ ] Try L2 with a value slightly larger and slightly smaller than 0.001
1.12.1. TODOs [/]
[ ] If we want to include the energy in the NN training at some point, we'd have to make sure to use the correct real energy for the CDL data and not the energyFromCharge case! -> But currently we don't use the energy at all anyway.
[ ] Using the energy could be a useful studying tool, I imagine. It would allow investigating the behavior if e.g. only the energy is changed.
[ ] Understand why the seemingly nice L2 reg example at checkpoint 10,000 still shows such a distinction between CDL and 55Fe despite the distributions 'promising' a difference. Maybe one bin is just too big?
1.12.2. DONE Bug in withLogLFilterCuts? [/]
I just noticed that in withLogLFilterCuts the following line:
chargeCut = data[igTotalCharge][i].float > cuts.minCharge and data[igTotalCharge][i] < cuts.maxCharge
is still present even for the fitByRun case. The body of the template is inserted after the data array is filled. This means that the cuts are applied to the combined data. That combined data is then further filtered by this charge cut. For the fitByRun case however, the minCharge and maxCharge fields of the cuts variable will be set to the values seen in the last run!
Therefore the cut wrongly removes many clusters based on the wrong charge window in this case!
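The failure mode, illustrated with a small standalone sketch (hypothetical charge values; `cuts` stands in for the template's cuts variable):

```python
# Sketch of the bug: per-run cut windows exist, but the combined data is
# filtered with whatever window the *last* run left behind in `cuts`.
runs = {
    1: {"charges": [100.0, 150.0, 200.0], "minCharge": 90.0,  "maxCharge": 210.0},
    2: {"charges": [300.0, 350.0, 400.0], "minCharge": 290.0, "maxCharge": 410.0},
}

combined = []
cuts = {}
for run in runs.values():
    combined.extend(run["charges"])
    cuts = {"min": run["minCharge"], "max": run["maxCharge"]}  # overwritten each run!

# Buggy: the last run's window is applied to *all* runs' clusters.
buggy = [c for c in combined if cuts["min"] < c < cuts["max"]]

# Intended: each run filtered with its own window.
intended = []
for run in runs.values():
    intended.extend(c for c in run["charges"]
                    if run["minCharge"] < c < run["maxCharge"])
```

In the sketch the buggy path silently drops every cluster of the first run, which is exactly the "wrongly removes many clusters" effect described above.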
The effect of this needs to be investigated ASAP. Both what the CDL distributions look like before and after, as well as what this implies for the lnL cut method!
Which tool generated the CDL distributions by run? cdl_spectrum_creation.
But cdl_spectrum_creation uses the readCutCDL procedure in cdl_utils. The heart of it is:
let cutTab = getXrayCleaningCuts()
let grp = h5f[(recoDataChipBase(runNumber) & $chip).grp_str]
let cut = cutTab[$tfKind]
result = cutOnProperties(h5f, grp, cut.cutTo,
                         ("rmsTransverse", cut.minRms, cut.maxRms),
                         ("length", 0.0, cut.maxLength),
                         ("hits", cut.minPix, Inf),
                         ("eccentricity", 0.0, cut.maxEccentricity))
from the h5f.getCdlCutIdxs(runNumber, chip, tfKind) call, i.e. it manually applies only the X-ray cleaning cuts! So it only ever looks at the distributions of those and never at the full LogLFilterCuts equivalent of the above!
So we might never have noticed cutting away too much for each spectrum, ugh.
[X] We'll do the following: add a set of plots that show, for each InGrid property:
- the raw data
- cut using readCutCDL
- cut using withXrayReferenceCut
- cut using withLogLFilterCut
and then compare what we see. -> Instead of trying to implement this into cdl_spectrum_creation we wrote a separate small plotting script here: ./../CastData/ExternCode/TimepixAnalysis/Plotting/plotCdl/plotCdlDifferentCuts.nim
./plotCdlDifferentCuts -f ~/CastData/data/CDL_2019/CDL_2019_Reco.h5 -c ~/CastData/data/CDL_2019/calibration-cdl-2018.h5
generates the files found in ./Figs/statusAndProgress/cdlCuts/with_charge_cut_bug/ today (before fixing the charge cut bug).
NOTE: The plots have been updated and now include the cleaning cut case mentioned a paragraph down! Especially look at the following two plots
- Figs/statusAndProgress/cdlCuts/with_charge_cut_bug/Cu-Ni-15kV_totalCharge_histogram_by_different_cut_approaches.pdf
- Figs/statusAndProgress/cdlCuts/with_charge_cut_bug/Cu-Ni-15kV_rmsTransverse_histogram_by_different_cut_approaches.pdf
The total charge plot indicates how much is thrown away when comparing LogLCuts & XrayCuts with the CDL cuts, and the rmsTransverse plot indicates what percentage of the signal is lost between the two.
The big question looking at this plot right now though is why the
X-ray reference cut behaves exactly the same way as the LogL cut does!
The 'last cuts' should only be applied to all data in the case of the
LogL cut usage!
-> The reason is that the X-ray reference cut case uses what I think is the wrong set of cuts. The idea should have been to reproduce the same cuts as the CDL applies! But it's exactly those cuts that contain the charge cut and are intended to cut to the main peak of the spectrum…
I suppose it makes sense from the name, now that I think about it. We'll add a withXrayCleaningCuts.
So, with the cleaning cut introduced, we get the behavior we would have expected. The LogL filter and XrayRef cuts lose precisely the peaks other than the main peak.
We'll fix it by not applying the charge cut in the case where we use fitByRun.
The new plots are in: Figs/statusAndProgress/cdlCuts/charge_cut_bug_fixed/ and the same plots:
- Figs/statusAndProgress/cdlCuts/charge_cut_bug_fixed/Cu-Ni-15kV_totalCharge_histogram_by_different_cut_approaches.pdf
- Figs/statusAndProgress/cdlCuts/charge_cut_bug_fixed/Cu-Ni-15kV_rmsTransverse_histogram_by_different_cut_approaches.pdf
We can see we now keep the correct information!
This has implications for all the background rates and all the limits to an extent of course.
[X] Generate likelihood output with only the lnL cut for Run-2 and Run-3 at 80% and compare with the background rate from all likelihood combinations generated yesterday. That should give us an idea whether it's necessary to regenerate all outputs and limits again. First we need to regenerate the likelihood values in all the data files though:
likelihood -f ~/CastData/data/DataRuns2017_Reco.h5 --cdlYear 2018 --cdlFile ~/CastData/data/CDL_2019/calibration-cdl-2018.h5 --computeLogL
likelihood -f ~/CastData/data/DataRuns2018_Reco.h5 --cdlYear 2018 --cdlFile ~/CastData/data/CDL_2019/calibration-cdl-2018.h5 --computeLogL
likelihood -f ~/CastData/data/CalibrationRuns2017_Reco.h5 --cdlYear 2018 --cdlFile ~/CastData/data/CDL_2019/calibration-cdl-2018.h5 --computeLogL
likelihood -f ~/CastData/data/CalibrationRuns2018_Reco.h5 --cdlYear 2018 --cdlFile ~/CastData/data/CDL_2019/calibration-cdl-2018.h5 --computeLogL
and now for the likelihood calls:
./createAllLikelihoodCombinations \
    --f2017 ~/CastData/data/DataRuns2017_Reco.h5 \
    --f2018 ~/CastData/data/DataRuns2018_Reco.h5 \
    --c2017 ~/CastData/data/CalibrationRuns2017_Reco.h5 \
    --c2018 ~/CastData/data/CalibrationRuns2018_Reco.h5 \
    --regions crGold \
    --signalEfficiency 0.8 \
    --vetoSets "{fkLogL}" \
    --out /t/playground \
    --cdlFile ~/CastData/data/CDL_2019/calibration-cdl-2018.h5 \
    --dryRun
and finally compare the background rates:
plotBackgroundRate \
    ~/org/resources/lhood_limits_automation_with_nn_support/lhood/likelihood_cdl2018_Run2_crGold_lnL.h5 \
    ~/org/resources/lhood_limits_automation_with_nn_support/lhood/likelihood_cdl2018_Run3_crGold_lnL.h5 \
    /t/playground/likelihood_cdl2018_Run2_crGold_signalEff_0.8_lnL.h5 \
    /t/playground/likelihood_cdl2018_Run3_crGold_signalEff_0.8_lnL.h5 \
    --names "ChargeBug" --names "ChargeBug" \
    --names "Fixed" --names "Fixed" \
    --centerChip 3 \
    --title "Background rate from CAST data, lnL@80, charge cut bug" \
    --showNumClusters --showTotalTime \
    --topMargin 1.5 --energyDset energyFromCharge \
    --outfile background_rate_cast_lnL_80_charge_cut_bug.pdf \
    --outpath /t/playground/ \
    --quiet
The generated plot is:
As we can see we remove a few clusters, but the difference is absolutely minute. That fortunately means we don't need to rerun all the limits again!
Might still be beneficial for the NN training as the impact on other variables might be bigger.
1.13.
Continuing from yesterday, but before we do that, we need to generate the new expected limits table using the script in StatusAndProgress.org sec. [BROKEN LINK: sec:limit:expected_limits_different_setups_test].
- [X] Generate limits table
- [ ] Regenerate all limits once more to have them with the correct eccentricity cut off value in the files -> Should be done, but not priority right now. Our band aid fix relying on the filename is fine for now.
- [ ] continue NN training / investigation
- [ ] Update systematics due to determineEffectiveEfficiency using fixed code (correct energies & data frames) in thesis
- [ ] fix that same code for fitByRun
1.13.1. NN training
Let's try to reduce the number of neurons on the hidden layer of the network and see where that gets us in the output distribution.
(back using SGD without L2 reg):
./train_ingrid \
  ~/CastData/data/CalibrationRuns2017_Reco.h5 \
  ~/CastData/data/CalibrationRuns2018_Reco.h5 \
  --back ~/CastData/data/DataRuns2017_Reco.h5 \
  --back ~/CastData/data/DataRuns2018_Reco.h5 \
  --ε 0.95 \
  --modelOutpath ~/org/resources/nn_devel_mixing/20_03_23_hidden_layer_100neurons/trained_model_hidden_layer_100.pt \
  --plotPath ~/org/Figs/statusAndProgress/neuralNetworks/development/hidden_layer_100neurons/
The CDL vs. 55Fe distribution is again slightly different (tested on checkpoint 35000). Btw: also good to know that we can easily run e.g. a prediction of a checkpoint while the training is ongoing. Not a problem whatsoever.
Next test a network that only uses the three variables used for the lnL cut! Back using 500 hidden neurons. Let's try that training while the other one is still running…
If this one shows the same distinction in 55Fe vs CDL data, that is in some sense more damning for our current approach than anything else. If not however, then we can analyze which variable is the main contributor to that separation in the predictions!
./train_ingrid \
  ~/CastData/data/CalibrationRuns2017_Reco.h5 \
  ~/CastData/data/CalibrationRuns2018_Reco.h5 \
  --back ~/CastData/data/DataRuns2017_Reco.h5 \
  --back ~/CastData/data/DataRuns2018_Reco.h5 \
  --ε 0.95 \
  --modelOutpath ~/org/resources/nn_devel_mixing/20_03_23_only_lnL_vars/trained_model_only_lnL_vars.pt \
  --plotPath ~/org/Figs/statusAndProgress/neuralNetworks/development/only_lnL_vars
It seems like in this case the prediction is actually even in the opposite direction! Now the CDL data is more "background like" than the 55Fe data. ./Figs/statusAndProgress/neuralNetworks/development/only_lnL_vars/all_predictions.pdf What do the effective 55Fe numbers say in this case?
./effective_eff_55fe -f ~/CastData/data/CalibrationRuns2017_Reco.h5 \
  --model ~/org/resources/nn_devel_mixing/20_03_23_only_lnL_vars/trained_model_only_lnL_varscheckpoint_epoch_100000_loss_0.1237_acc_0.9504.pt \
  --ε 0.95
Error: unhandled cpp exception: Could not run 'aten::empty_strided' with arguments from the 'CUDA' backend. This could be because the operator doesn't exist for this backend, or was omitted during the selective/custom build process (if using custom build). If you are a Facebook employee using PyTorch on mobile, please visit https://fburl.com/ptmfixes for possible resolutions. 'aten::empty_strided' is only available for these backends: [CPU, Meta, BackendSelect, Python, Named, Conjugate, Negative, ZeroTensor, ADInplaceOrView, AutogradOther, AutogradCPU, AutogradCUDA, AutogradXLA, AutogradLazy, AutogradXPU, AutogradMLC, AutogradHPU, AutogradNestedTensor, AutogradPrivateUse1, AutogradPrivateUse2, AutogradPrivateUse3, Tracer, AutocastCPU, Autocast, Batched, VmapMode, Functionalize, PythonTLSSnapshot].
CPU: registered at aten/src/ATen/RegisterCPU.cpp:21249 [kernel] Meta: registered at aten/src/ATen/RegisterMeta.cpp:15264 [kernel] BackendSelect: registered at aten/src/ATen/RegisterBackendSelect.cpp:606 [kernel] Python: registered at ../aten/src/ATen/core/PythonFallbackKernel.cpp:77 [backend fallback] Named: registered at ../aten/src/ATen/core/NamedRegistrations.cpp:7 [backend fallback] Conjugate: fallthrough registered at ../aten/src/ATen/ConjugateFallback.cpp:22 [kernel] Negative: fallthrough registered at ../aten/src/ATen/native/NegateFallback.cpp:22 [kernel] ZeroTensor: fallthrough registered at ../aten/src/ATen/ZeroTensorFallback.cpp:90 [kernel] ADInplaceOrView: fallthrough registered at ../aten/src/ATen/core/VariableFallbackKernel.cpp:64 [backend fallback] AutogradOther: registered at ../torch/csrc/autograd/generated/VariableType2.cpp:12095 [autograd kernel] AutogradCPU: registered at ../torch/csrc/autograd/generated/VariableType2.cpp:12095 [autograd kernel] AutogradCUDA: registered at ../torch/csrc/autograd/generated/VariableType2.cpp:12095 [autograd kernel] AutogradXLA: registered at ../torch/csrc/autograd/generated/VariableType2.cpp:12095 [autograd kernel] AutogradLazy: registered at ../torch/csrc/autograd/generated/VariableType2.cpp:12095 [autograd kernel] AutogradXPU: registered at ../torch/csrc/autograd/generated/VariableType2.cpp:12095 [autograd kernel] AutogradMLC: registered at ../torch/csrc/autograd/generated/VariableType2.cpp:12095 [autograd kernel] AutogradHPU: registered at ../torch/csrc/autograd/generated/VariableType2.cpp:12095 [autograd kernel] AutogradNestedTensor: registered at ../torch/csrc/autograd/generated/VariableType2.cpp:12095 [autograd kernel] AutogradPrivateUse1: registered at ../torch/csrc/autograd/generated/VariableType2.cpp:12095 [autograd kernel] AutogradPrivateUse2: registered at ../torch/csrc/autograd/generated/VariableType2.cpp:12095 [autograd kernel] AutogradPrivateUse3: registered at 
../torch/csrc/autograd/generated/VariableType2.cpp:12095 [autograd kernel] Tracer: registered at ../torch/csrc/autograd/generated/TraceType2.cpp:12541 [kernel] AutocastCPU: fallthrough registered at ../aten/src/ATen/autocast_mode.cpp:462 [backend fallback] Autocast: fallthrough registered at ../aten/src/ATen/autocast_mode.cpp:305 [backend fallback] Batched: registered at ../aten/src/ATen/BatchingRegistrations.cpp:1059 [backend fallback] VmapMode: fallthrough registered at ../aten/src/ATen/VmapModeRegistrations.cpp:33 [backend fallback] Functionalize: registered at ../aten/src/ATen/FunctionalizeFallbackKernel.cpp:52 [backend fallback] PythonTLSSnapshot: registered at ../aten/src/ATen/core/PythonFallbackKernel.cpp:81 [backend fallback]
Uhh, this fails with a weird error… I love the "If you are a Facebook employee" line! Oh, never mind, I simply forgot the -d:cuda flag when compiling, oops.
Run: 83 for target: signal Keeping : 823 of 916 = 0.898471615720524
Run: 88 for target: signal Keeping : 820 of 911 = 0.9001097694840834
Run: 93 for target: signal Keeping : 692 of 787 = 0.8792884371029225
Run: 96 for target: signal Keeping : 5079 of 5635 = 0.9013309671694765
Run: 102 for target: signal Keeping : 1409 of 1588 = 0.8872795969773299
Run: 108 for target: signal Keeping : 2714 of 3055 = 0.888379705400982
Run: 110 for target: signal Keeping : 1388 of 1554 = 0.8931788931788932
Run: 116 for target: signal Keeping : 1541 of 1717 = 0.8974956319161328
Run: 118 for target: signal Keeping : 1480 of 1651 = 0.8964264082374318
Run: 120 for target: signal Keeping : 3052 of 3413 = 0.8942279519484324
Run: 122 for target: signal Keeping : 4991 of 5640 = 0.8849290780141844
Run: 126 for target: signal Keeping : 2274 of 2596 = 0.8759630200308166
Run: 128 for target: signal Keeping : 6973 of 7899 = 0.8827699708823902
Run: 145 for target: signal Keeping : 3287 of 3646 = 0.9015359297860669
Run: 147 for target: signal Keeping : 1887 of 2107 = 0.8955861414333175
Run: 149 for target: signal Keeping : 1753 of 1936 = 0.9054752066115702
Run: 151 for target: signal Keeping : 1662 of 1839 = 0.9037520391517129
Run: 153 for target: signal Keeping : 1731 of 1908 = 0.9072327044025157
The numbers hover around 90% instead of the desired 95%. Interesting, and not what we might have expected. I suppose the different distributions in the CDL output are then related to the different CDL targets. Are some shifted much further left than others? What would the prediction look like if we restrict ourselves to the MnCr12kV target?
Modified one line in predictAll, added this:
.filter(f{`Target` == "Mn-Cr-12kV"})
let's run that on the same model (last checkpoint) and see how it compares in 55Fe vs CDL. Indeed, the CDL data now is more compatible with the 55Fe data (and likely slightly more to the right explaining the 90% for the target 95).
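The effective efficiency numbers above come from defining the cut on one dataset (CDL) and applying it to another (CAST 55Fe). A toy sketch of that procedure, using Gaussian stand-ins rather than the real MLP predictions (the shift direction and magnitude are assumptions for illustration):

```python
import numpy as np

rng = np.random.default_rng(42)

# hypothetical stand-ins for the real predictions: CDL X-rays define the cut,
# CAST 55Fe events are evaluated against it; CAST assumed shifted slightly
cdl_pred  = rng.normal(loc=0.0, scale=1.0, size=10_000)
cast_pred = rng.normal(loc=0.3, scale=1.0, size=10_000)

def effective_efficiency(cut_sample, apply_sample, eps=0.95):
    """Place the cut so `eps` of `cut_sample` passes (signal = values below
    the cut here), then report the fraction of `apply_sample` passing."""
    cut = np.quantile(cut_sample, eps)
    return float(np.mean(apply_sample < cut))

eff = effective_efficiency(cdl_pred, cast_pred)
print(f"effective efficiency: {eff:.3f}")  # below the 0.95 target
```

With a shift of 0.3σ the resulting efficiency lands near 0.91, i.e. exactly the kind of ~90% instead of 95% seen in the run-by-run numbers above.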
Be that as it may, the difference in the ROC curves of one of our "good" networks and this one is pretty stunning. Where the good ones are almost a right angled triangle, this one is pretty smooth: Figs/statusAndProgress/neuralNetworks/development/only_lnL_vars/roc_curve.pdf
1.13.2. DONE Expected limits table
cd $TPA/Tools/generateExpectedLimitsTable
./generateExpectedLimitsTable --path ~/org/resources/lhood_limits_automation_with_nn_support/limits
εlnL | Scinti | FADC | εFADC | Septem | Line | eccLineCut | εSeptem | εLine | εSeptemLine | Total eff. | Limit no signal | Expected Limit |
---|---|---|---|---|---|---|---|---|---|---|---|---|
0.9 | true | true | 0.98 | false | true | 0 | 0.7841 | 0.8602 | 0.7325 | 0.7587 | 3.7853e-21 | 7.9443e-23 |
0.9 | true | false | 0.98 | false | true | 0 | 0.7841 | 0.8602 | 0.7325 | 0.7742 | 3.6886e-21 | 8.0335e-23 |
0.9 | true | true | 0.98 | false | true | 1.2 | 0.7841 | 0.8794 | 0.7415 | 0.7757 | 3.6079e-21 | 8.1694e-23 |
0.8 | true | true | 0.98 | false | true | 0 | 0.7841 | 0.8602 | 0.7325 | 0.6744 | 4.0556e-21 | 8.1916e-23 |
0.9 | true | true | 0.98 | false | true | 1.4 | 0.7841 | 0.8946 | 0.7482 | 0.7891 | 3.5829e-21 | 8.3198e-23 |
0.8 | true | true | 0.98 | false | true | 1.2 | 0.7841 | 0.8794 | 0.7415 | 0.6895 | 3.9764e-21 | 8.3545e-23 |
0.8 | true | true | 0.9 | false | true | 0 | 0.7841 | 0.8602 | 0.7325 | 0.6193 | 4.4551e-21 | 8.4936e-23 |
0.9 | true | true | 0.98 | false | true | 1.6 | 0.7841 | 0.9076 | 0.754 | 0.8005 | 3.6208e-21 | 8.5169e-23 |
0.8 | true | true | 0.98 | false | true | 1.4 | 0.7841 | 0.8946 | 0.7482 | 0.7014 | 3.9491e-21 | 8.6022e-23 |
0.8 | true | true | 0.98 | false | true | 1.6 | 0.7841 | 0.9076 | 0.754 | 0.7115 | 3.9686e-21 | 8.6462e-23 |
0.9 | true | false | 0.98 | true | true | 0 | 0.7841 | 0.8602 | 0.7325 | 0.6593 | 4.2012e-21 | 8.6684e-23 |
0.7 | true | true | 0.98 | false | true | 0 | 0.7841 | 0.8602 | 0.7325 | 0.5901 | 4.7365e-21 | 8.67e-23 |
0.9 | true | true | 0.98 | true | true | 0 | 0.7841 | 0.8602 | 0.7325 | 0.6461 | 4.3995e-21 | 8.6766e-23 |
0.7 | true | false | 0.98 | false | true | 0 | 0.7841 | 0.8602 | 0.7325 | 0.6021 | 4.7491e-21 | 8.7482e-23 |
0.8 | true | true | 0.98 | true | true | 0 | 0.7841 | 0.8602 | 0.7325 | 0.5743 | 4.9249e-21 | 8.7699e-23 |
0.8 | true | true | 0.98 | false | false | 0 | 0.7841 | 0.8602 | 0.7325 | 0.784 | 3.6101e-21 | 8.8059e-23 |
0.8 | true | true | 0.8 | false | true | 0 | 0.7841 | 0.8602 | 0.7325 | 0.5505 | 5.1433e-21 | 8.855e-23 |
0.7 | true | true | 0.98 | false | true | 1.2 | 0.7841 | 0.8794 | 0.7415 | 0.6033 | 4.4939e-21 | 8.8649e-23 |
0.8 | true | true | 0.98 | true | false | 0 | 0.7841 | 0.8602 | 0.7325 | 0.6147 | 4.5808e-21 | 8.8894e-23 |
0.9 | true | false | 0.98 | true | false | 0 | 0.7841 | 0.8602 | 0.7325 | 0.7057 | 3.9383e-21 | 8.9504e-23 |
0.7 | true | true | 0.98 | false | true | 1.4 | 0.7841 | 0.8946 | 0.7482 | 0.6137 | 4.5694e-21 | 8.9715e-23 |
0.8 | true | true | 0.9 | true | true | 0 | 0.7841 | 0.8602 | 0.7325 | 0.5274 | 5.3406e-21 | 8.9906e-23 |
0.9 | true | true | 0.9 | true | true | 0 | 0.7841 | 0.8602 | 0.7325 | 0.5933 | 4.854e-21 | 9e-23 |
0.8 | false | false | 0.98 | false | false | 0 | 0.7841 | 0.8602 | 0.7325 | 0.8 | 3.5128e-21 | 9.0456e-23 |
0.8 | true | false | 0.98 | false | false | 0 | 0.7841 | 0.8602 | 0.7325 | 0.8 | 3.5573e-21 | 9.0594e-23 |
0.7 | true | true | 0.98 | false | true | 1.6 | 0.7841 | 0.9076 | 0.754 | 0.6226 | 4.5968e-21 | 9.0843e-23 |
0.7 | true | true | 0.98 | true | true | 0 | 0.7841 | 0.8602 | 0.7325 | 0.5025 | 5.627e-21 | 9.1029e-23 |
0.8 | true | true | 0.9 | false | false | 0 | 0.7841 | 0.8602 | 0.7325 | 0.72 | 3.8694e-21 | 9.1117e-23 |
0.8 | true | true | 0.9 | true | false | 0 | 0.7841 | 0.8602 | 0.7325 | 0.5646 | 4.909e-21 | 9.2119e-23 |
0.7 | true | false | 0.98 | true | true | 0 | 0.7841 | 0.8602 | 0.7325 | 0.5128 | 5.5669e-21 | 9.3016e-23 |
0.7 | true | false | 0.98 | true | false | 0 | 0.7841 | 0.8602 | 0.7325 | 0.5489 | 5.3018e-21 | 9.3255e-23 |
0.7 | true | true | 0.9 | true | true | 0 | 0.7841 | 0.8602 | 0.7325 | 0.4615 | 6.1471e-21 | 9.4509e-23 |
0.8 | true | true | 0.8 | false | false | 0 | 0.7841 | 0.8602 | 0.7325 | 0.64 | 4.5472e-21 | 9.5113e-23 |
0.8 | true | true | 0.8 | true | true | 0 | 0.7841 | 0.8602 | 0.7325 | 0.4688 | 5.8579e-21 | 9.5468e-23 |
0.8 | true | true | 0.8 | true | false | 0 | 0.7841 | 0.8602 | 0.7325 | 0.5018 | 5.6441e-21 | 9.5653e-23 |
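The "Total eff." column follows directly from the per-veto efficiencies: the product of εlnL, εFADC (if the FADC is active) and the relevant veto efficiency (εLine, εSeptem, or εSeptemLine, depending on which vetoes are on; rows with eccLineCut ≠ 0 use the correspondingly larger εLine). A small sketch reproducing a few rows:

```python
# Reproduce the "Total eff." column of the expected limits table.
def total_eff(eps_lnl, fadc, eps_fadc, septem, line,
              eps_septem=0.7841, eps_line=0.8602, eps_septem_line=0.7325):
    eff = eps_lnl
    if fadc:
        eff *= eps_fadc          # FADC veto efficiency
    if septem and line:
        eff *= eps_septem_line   # combined septem + line veto
    elif septem:
        eff *= eps_septem
    elif line:
        eff *= eps_line
    return eff

print(f"{total_eff(0.9, True,  0.98, False, True):.4f}")   # 0.7587 (first row)
print(f"{total_eff(0.9, False, 0.98, False, True):.4f}")   # 0.7742 (no FADC)
print(f"{total_eff(0.8, True,  0.98, False, False):.4f}")  # 0.7840 (no vetoes)
```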
1.14.
From yesterday open TODOs:
- [ ] Regenerate all limits once more to have them with the correct eccentricity cut off value in the files -> Should be done, but not priority right now. Our band aid fix relying on the filename is fine for now.
- [ ] continue NN training / investigation
- [ ] Update systematics due to determineEffectiveEfficiency using fixed code (correct energies & data frames) in thesis
- [ ] fix that same code for fitByRun
Additional:
- [X] look at prediction of our best trained network (and maybe the lnL variable one?) for all the different CDL datasets separately. Maybe a ridgeline plot of the different "sets", i.e. background, 55Fe photo, 55Fe escape, CDL sets
- [ ] Do the same thing with the Run-2 and Run-3 calibration / background data split?
- [ ] Do the same thing, but using the likelihood distributions for each instead of the NN predictions!
- [ ] Investigate whether effective efficiency (from tool) is correlated to mean gas gain of each calibration run. Create a plot of the effective efficiency vs the mean gas gain of each run, per photo & escape type -> If this is strongly correlated it means we understand where the fluctuation comes from! If true, then can look at CDL data as well and check if this explains the variation.
1.14.1. Structured information about MLP layout
Instead of having to recompile the code each time to make changes to the layout, I now made it all run time configurable using the MLPDesc object. It stores the model and plot path, the number of input neurons, hidden neurons and which datasets were used. In order to make the 'old' models work a --writeMLPDesc option was added.
For the with_total_charge:
./train_ingrid \
  --modelOutpath ~/org/resources/nn_devel_mixing/19_03_23_with_total_charge/trained_model_incl_totalChargecheckpoint_epoch_100000_loss_0.0102_acc_0.9976.pt \
  --plotPath ~/org/Figs/statusAndProgress/neuralNetworks/development/with_total_charge \
  --numHidden 500 \
  --writeMLPDesc
For the mixing_data:
./train_ingrid \
  --modelOutpath ~/org/resources/nn_devel_mixing/18_03_23/trained_mlp_mixed_data.pt \
  --plotPath ~/org/Figs/statusAndProgress/neuralNetworks/development/mixing_data/ \
  --numHidden 500 \
  --datasets igEccentricity \
  --datasets igSkewnessLongitudinal \
  --datasets igSkewnessTransverse \
  --datasets igKurtosisLongitudinal \
  --datasets igKurtosisTransverse \
  --datasets igLength \
  --datasets igWidth \
  --datasets igRmsLongitudinal \
  --datasets igRmsTransverse \
  --datasets igLengthDivRmsTrans \
  --datasets igRotationAngle \
  --datasets igFractionInTransverseRms \
  --writeMLPDesc
For the charge_bug_fixed:
./train_ingrid \
  --modelOutpath ~/org/resources/nn_devel_mixing/19_03_23_charge_bug_fixed/trained_model_charge_cut_bug_fixed.pt \
  --plotPath ~/org/Figs/statusAndProgress/neuralNetworks/development/charge_cut_bug_fixed/ \
  --numHidden 500 \
  --writeMLPDesc
For the l1_loss:
./train_ingrid \
  --modelOutpath ~/org/resources/nn_devel_mixing/19_03_23_l1_loss/trained_model_incl_totalCharge_l1_loss.pt \
  --plotPath ~/org/Figs/statusAndProgress/neuralNetworks/development/l1_loss/ \
  --numHidden 500 \
  --writeMLPDesc
For the l2_regularization:
./train_ingrid \
  --modelOutpath ~/org/resources/nn_devel_mixing/19_03_23_l2_regularization/trained_model_incl_totalCharge_l2_regularization.pt \
  --plotPath ~/org/Figs/statusAndProgress/neuralNetworks/development/l2_regularization/ \
  --numHidden 500 \
  --writeMLPDesc
For the hidden_layer_100neurons:
./train_ingrid \
  --modelOutpath ~/org/resources/nn_devel_mixing/20_03_23_hidden_layer_100neurons/trained_model_hidden_layer_100.pt \
  --plotPath ~/org/Figs/statusAndProgress/neuralNetworks/development/hidden_layer_100neurons/ \
  --numHidden 100 \
  --writeMLPDesc
For the only_lnL_vars:
./train_ingrid \
  --modelOutpath ~/org/resources/nn_devel_mixing/20_03_23_only_lnL_vars/trained_model_only_lnL_vars.pt \
  --plotPath ~/org/Figs/statusAndProgress/neuralNetworks/development/only_lnL_vars/ \
  --numHidden 500 \
  --datasets igEccentricity \
  --datasets igLengthDivRmsTrans \
  --datasets igFractionInTransverseRms \
  --writeMLPDesc
In the future this will likely also include the used optimizer, learning rate etc.
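The MLPDesc idea, i.e. storing the layout next to the checkpoint instead of baking it into the binary, can be sketched language-agnostically as a small serializable record. A sketch in Python (the real MLPDesc is a Nim object in TimepixAnalysis; the field names here are assumptions):

```python
import json, os, tempfile
from dataclasses import dataclass, field, asdict

@dataclass
class MLPDesc:
    # fields mirroring what the journal says MLPDesc stores (names are guesses)
    model_path: str
    plot_path: str
    num_input: int
    num_hidden: int
    datasets: list = field(default_factory=list)

    def write(self, path: str) -> None:
        with open(path, "w") as f:
            json.dump(asdict(self), f, indent=2)

    @staticmethod
    def read(path: str) -> "MLPDesc":
        with open(path) as f:
            return MLPDesc(**json.load(f))

path = os.path.join(tempfile.mkdtemp(), "mlp_desc.json")
desc = MLPDesc("trained_model.pt", "plots/", num_input=12, num_hidden=500,
               datasets=["igEccentricity", "igLengthDivRmsTrans"])
desc.write(path)
restored = MLPDesc.read(path)
print(restored == desc)  # True
```

The point of the sidecar file is exactly what --writeMLPDesc provides: old checkpoints gain a description after the fact, and evaluation tools no longer need to be compiled against one fixed layout.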
1.14.2. Prediction by target/filter
To do this I added an additional plot that also generates a ridgeline in the predictAll case.
./train_ingrid ~/CastData/data/CalibrationRuns2017_Reco.h5 \
  ~/CastData/data/CalibrationRuns2018_Reco.h5 \
  --back ~/CastData/data/DataRuns2017_Reco.h5 \
  --back ~/CastData/data/DataRuns2018_Reco.h5 \
  --ε 0.95 \
  --modelOutpath ~/org/resources/nn_devel_mixing/19_03_23_with_total_charge/trained_model_incl_totalChargecheckpoint_epoch_100000_loss_0.0102_acc_0.9976.pt \
  --plotPath ~/org/Figs/statusAndProgress/neuralNetworks/development/with_total_charge \
  --predict
And for the network with 2500 hidden neurons:
./train_ingrid ~/CastData/data/CalibrationRuns2017_Reco.h5 \
  ~/CastData/data/CalibrationRuns2018_Reco.h5 \
  --back ~/CastData/data/DataRuns2017_Reco.h5 \
  --back ~/CastData/data/DataRuns2018_Reco.h5 \
  --modelOutpath ~/org/resources/nn_devel_mixing/21_03_23_hidden_2500neurons/trained_model_hidden_2500neurons.pt \
  --predict
I suppose the best thing to do is to use a scaling transformation similar to what Cristina does. Transform CAST data into CDL data by a scaling factor and then transform other CDL energies back into CAST energies.
1.14.3. Train MLP with 2500 hidden neurons [/]
./train_ingrid \
  ~/CastData/data/CalibrationRuns2017_Reco.h5 \
  ~/CastData/data/CalibrationRuns2018_Reco.h5 \
  --back ~/CastData/data/DataRuns2017_Reco.h5 \
  --back ~/CastData/data/DataRuns2018_Reco.h5 \
  --modelOutpath ~/org/resources/nn_devel_mixing/21_03_23_hidden_2500neurons/trained_model_hidden_2500neurons.pt \
  --plotPath ~/org/Figs/statusAndProgress/neuralNetworks/development/hidden_500neurons/ \
  --numHidden 2500
This one is damn good! Especially considering that test is better than train essentially the entire time up to 100,000 epochs!
- [ ] Maybe try even larger?
1.15.
Continue with the jobs from yesterday:
Additional:
- [X] look at prediction of our best trained network (and maybe the lnL variable one?) for all the different CDL datasets separately. Maybe a ridgeline plot of the different "sets", i.e. background, 55Fe photo, 55Fe escape, CDL sets
- [ ] Do the same thing with the Run-2 and Run-3 calibration / background data split?
- [X] Do the same thing, but using the likelihood distributions for each instead of the NN predictions!
- [X] Investigate whether effective efficiency (from tool) is correlated to mean gas gain of each calibration run. Create a plot of the effective efficiency vs the mean gas gain of each run, per photo & escape type -> If this is strongly correlated it means we understand where the fluctuation comes from! If true, then can look at CDL data as well and check if this explains the variation.
That is: implement the lnL variant into the 'prediction' ridge line plots. And potentially look at the Run-2 vs Run-3 predictions.
- [ ] Look at the background rate of the 90% lnL cut variant. How much background do we have in that case? How does it compare to the 99% accuracy MLP prediction?
- [ ] maybe try even larger MLP?
As a bonus:
- [ ] look at the hidden_2500neuron network for the background rate
- [ ] try to use the neural network for a limit calculation in its "natural" prediction, i.e. close to 99% accuracy! That should give us quite the amazing signal (but of course decent background!). Still, as alternative combined with line veto and/or FADC could be very competitive!
- [X] Make notes about ROC curve plots
- [X] Next up: -> Look at effective efficiency again and how it varies -> Implement CAST ⇔ CDL transformation for cut values
1.15.1. Notes
- old ROC curves often filtered out the lnL = Inf cases for the lnL method! (not everywhere, but in likelihood.nim for example!)
- Apparently there are only 418 events < 0.4 keV in the whole background dataset. ROC curves for lnL at the lowest target are very rough for that reason. Why weren't they rough before though?
1.15.2. Comparison of the MLP predictions & lnL distributions for each 'type'
This now also produces a plot all_predictions_ridgeline_by_type_lnL.pdf which is the equivalent of the MLP prediction ridgeline, but using the likelihood distributions:
./train_ingrid \
  ~/CastData/data/CalibrationRuns2017_Reco.h5 \
  ~/CastData/data/CalibrationRuns2018_Reco.h5 \
  --back ~/CastData/data/DataRuns2017_Reco.h5 \
  --back ~/CastData/data/DataRuns2018_Reco.h5 \
  --modelOutpath ~/org/resources/nn_devel_mixing/21_03_23_hidden_2500neurons/trained_model_hidden_2500neurons.pt \
  --predict
Running the prediction now also produces the distributions for the likelihood data as well as ROC curves for both. Note that the ROC curves contain both CAST and CDL data for each target. As such they are a bit too 'good' for CAST and a bit too 'bad' for the CDL. In case of the LnL data they match better, because the likelihood distribution matches better between CAST and CDL.
See the likelihood distributions: Note: All likelihood data at 50 and above has been cut off, as otherwise the peak at 50 dominates the background data such that we don't see the tail. Keep that in mind, the background contribution that is in the range of the X-ray data is a very small fraction!
Compare that with the MLP output of this network (2500 hidden neurons): Figs/statusAndProgress/neuralNetworks/development/hidden_2500neurons/all_predictions_by_type_ridgeline_mlp.pdf
First of all we see that the MLP distributions are much better defined and not as wide (but keep in mind this plot only covers about half the range of the other). Pay close attention to the CAST photo peak ('photo') and compare it with the Mn-Cr-12kV target. In theory these should be the same distribution, but the CAST distribution is shifted slightly to the left! This is precisely the reason why the effective efficiency for the CAST data is always lower than expected (based on CDL, that is).
Interestingly even in the lnL case these two distributions are not identical! Their mean is very similar, but the shape differs a bit.
Regarding the ROC curves: Old ROC curves often filtered out the lnL = Inf cases for the lnL method! Therefore, they appeared even worse than they actually are. If you include all data and only look at the mean (i.e. all data at the same time) it is not actually that bad! Which makes sense because by itself the lnL veto is already pretty powerful after all.
The ROC curve for all data MLP vs. LnL: Look at the y scale! 0.96 is the minimum! So LnL really does a good job. It's just that the MLP is even significantly better!
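A ROC curve of this kind is built by sweeping a cut over the prediction values and recording signal efficiency against background rejection at each cut. A minimal sketch with toy Gaussian scores (not the real MLP or lnL predictions):

```python
import numpy as np

def roc_curve(signal_scores, back_scores, n_points=201):
    """Signal efficiency vs background rejection, sweeping a cut over the
    prediction values (higher score = more signal-like here)."""
    cuts = np.linspace(min(signal_scores.min(), back_scores.min()),
                       max(signal_scores.max(), back_scores.max()), n_points)
    sig_eff  = np.array([(signal_scores > c).mean() for c in cuts])
    back_rej = np.array([(back_scores  <= c).mean() for c in cuts])
    return sig_eff, back_rej

rng  = np.random.default_rng(1)
# toy stand-ins for a well separated classifier output
sig  = rng.normal(3.0, 1.0, 5000)
back = rng.normal(0.0, 1.0, 5000)
eff, rej = roc_curve(sig, back)

# background rejection near 95% signal efficiency
i = np.argmin(np.abs(eff - 0.95))
print(f"background rejection at ~95% signal eff: {rej[i]:.2f}")
```

The better the separation of the two score distributions, the closer the curve hugs the top-right corner, which is exactly the "almost right-angled triangle" shape of the good networks versus the smoother lnL-variable curve.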
Now for the ROC curve split by different targets:
The first obvious thing is how discrete the low energy cases look! The
C-EPIC-0.6kV case in particular is super rugged. Why is that? And why
was that not the case in the past when we looked at the ROC curves for
different targets?
At the moment I'm not entirely sure, but my assumption is that we
(accidentally?) used all background data when computing the
efficiencies for each target, but only the X-rays corresponding to
each CDL dataset (note that in the past we never had any CAST 55Fe
data in there either).
As it turns out though, at energies below 0.4 keV (the lowest bin) there are only ~400 clusters in the whole background dataset! (Checked using the verbose option in the targetSpecificRoc proc.)
So this is all very interesting. And it reassures us that using such an MLP is definitely a very interesting avenue. But in order to use it we need to understand the differences in the output distributions for the 5.9 keV data in each of the datasets. One obvious difference between CDL and CAST data is, as we very well know, the temperature drifts that cause gas gain drifts. Therefore next we look at the behavior of the effective efficiency for the data in relation to the gas gain in each run.
1.15.3. Effective efficiency of MLP veto and gas gain dependence
We added reading of the gas gain information and plotting it against the effective efficiencies for each run into ./../CastData/ExternCode/TimepixAnalysis/Tools/NN_playground/effective_eff_55fe.nim
In order to look at all data we added the ability to hand multiple input files and also hand the CDL data file so that we can compare that too.
./effective_eff_55fe \
  ~/CastData/data/CalibrationRuns2017_Reco.h5 \
  ~/CastData/data/CalibrationRuns2018_Reco.h5 \
  --model ~/org/resources/nn_devel_mixing/21_03_23_hidden_2500neurons/trained_model_hidden_2500neurons.pt \
  --ε 0.95 \
  --cdlFile ~/CastData/data/CDL_2019/CDL_2019_Reco.h5
which now generates the plots in:
Where the first is just the effective efficiency of the CDL and CAST data (split by the target energy, 3.0 ~ escape peak and 5.9 ~ photo peak). It may look a bit confusing at first, but the green is simply the normalized gain and the purple is the effective efficiency if cut at a 95% value based on the local energy cuts using CDL data.
The green points hide a set of green crosses that are simply not visible because they overlap exactly the green dots (same run numbers for escape and photo peak, same for CDL data!). In the right part at higher run numbers is the CDL dataset (all datasets contain some events around 5.9 keV, most very few, same for 3.0 keV data). Everything is switched around, because the efficiency there is close to the target 95%, but in relative terms the gas gain is much lower.
Staring at this a bit longer indeed seems to indicate that there is a correlation between gas gain and effective efficiency!
This gets more extreme when considering the second plot, which maps the gas gain against the effective efficiency directly in a scatter plot. The left pane shows all data around the 5.9 keV data and the right around the 3.0 keV data. In both panes there is a collection of points in the 'bottom right' and one in the 'top left'. The bottom right contains high gain data at low effective efficiencies, this is the CAST data. The top left is the inverse, high effective efficiencies at low gain. The CDL data.
As we can see especially clearly in the 5.9 keV data, there is a very strong linear correlation between the effective efficiency and the gas gain! The two blobs visible in the CAST data at 5.9 keV correspond to the Run-2 data (the darker points) and Run-3 data (the brighter points). While they differ they seem to follow generally a very similar slope.
This motivates well to use a linear interpolation based on fits found for the CAST data in Run-2 and Run-3, which then is used together with the target efficiency and cut value at that efficiency in the CDL data to correct the cut value for each efficiency!
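The correction idea can be sketched as: fit the effective efficiency linearly against gas gain on the CAST calibration runs, then use the fit to move the CDL-derived cut value for the gain of the run under consideration. A toy sketch (the gain/efficiency numbers are made up to mimic the observed anti-correlation; the real fit would be per Run-2 / Run-3 period):

```python
import numpy as np

# hypothetical per-run data: gas gain vs effective efficiency
gain = np.array([2800., 2900., 3000., 3100., 3200., 3300.])
eff  = np.array([0.93,  0.92,  0.905, 0.90,  0.885, 0.88 ])

# linear fit eff(gain); polyfit returns [slope, intercept] for degree 1
slope, intercept = np.polyfit(gain, eff, 1)

def predicted_eff(g):
    """Predicted effective efficiency for a run with mean gas gain g."""
    return slope * g + intercept

# in the real correction the cut value would then be shifted such that the
# predicted efficiency at this run's gain matches the 95% target
print(f"predicted eff at gain 3250: {predicted_eff(3250.):.3f}")
```

The strong linearity seen in the 5.9 keV scatter plot is what makes such a one-parameter-per-period interpolation plausible in the first place.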
1.16.
From yesterday:
- [ ] Look at the background rate of the 90% lnL cut variant. How much background do we have in that case? How does it compare to the 99% accuracy MLP prediction?
- [ ] maybe try even larger MLP?

As a bonus:
- [ ] look at the hidden_2500neuron network for the background rate
- [ ] try to use the neural network for a limit calculation in its "natural" prediction, i.e. close to 99% accuracy! That should give us quite the amazing signal (but of course decent background!). Still, as alternative combined with line veto and/or FADC could be very competitive!
And in addition:
- [ ] Implement a fit that takes the effective efficiency and gas gain correlation into account and use it to correct for the efficiencies at CAST!
- [ ] Look at how the distributions change between different CDL runs with different gas gains.
1.17.
First look into the energyFromCharge for the CDL data and see if it changes the cut values.
Important thought: ~3 keV escape events are not equivalent to 3 keV X-rays! Escape events are effectively 5.9 keV photons that only deposit 3 keV! Real 3 keV X-rays have a much longer absorption length. That explains why 55Fe 3 keV data is shifted to a lower cut value, but CDL 3 keV data to a higher cut value when compared to the 5.9 keV data in each!
So our prediction of the cut value for the escape events via the slope of the 5.9 keV data is therefore too large, because the real events look "less" like X-rays to the network.
Generate two sets of fake events:
- [ ] Events of the same energy, but at an effectively different diffusion length, by taking transverse diffusion and the distance and 'pulling' all electrons of an event towards the center of the cluster -> e.g. generate 'real' 3 keV events from the escape peak 3 keV events
- [ ] Events of an artificially lower Timepix threshold of same energy. Look at how many electrons calibration set 1 has compared to 2. Then throw away this many electrons, biased by those of the lowest charges (or inversely fix a threshold in electrons and remove all pixels below and see where we end up). Problem is that the total number of recorded electrons (i.e. ToT value) itself also depends on the threshold.
- [ ] (potential) a third set could just be looking at fake lower energy events generated in the way that we already do it in the lnL effective efficiency code!

These can then all be used to evaluate the MLP with.
- [ ] Investigate fake events!
1.18.
Continuing from yesterday… Fake events and other stuff..
./effective_eff_55fe \
  ~/CastData/data/CalibrationRuns2018_Reco.h5 \
  --model ~/org/resources/nn_devel_mixing/21_03_23_hidden_2500neurons/trained_model_hidden_2500neurons.pt \
  --ε 0.95 \
  --cdlFile ~/CastData/data/CDL_2019/CDL_2019_Reco.h5 \
  --evaluateFit --plotDatasets
- [ ] Why is the skewness of all CAST data (incl. fake data!) slightly positive? It should be centered around 0, no?
- [ ] rmsTransverse, length and width are as expected: fake data and escape peak data is larger than real 3 keV data! Due to different absorption lengths.
1.19.
And more continue, continue, continue…!
- [X] First start with reordering the ridgeline plots according to energy
- [ ] Then implement two other kinds of fake data generation

Generation of data at different diffusion: change the effective diffusion of the drawn event.
- get the transverse diffusion coefficient σ_T = getDiffusion()
- using the existing energy and target energies compute the required distance we 'move' the cluster from and to. That is: assuming we have a diffusion equivalent to 3 cm (conversion at the cathode) and want to 'move' that to a diffusion of 2 cm (conversion 1 cm away from the cathode), we can compute the transverse diffusion via σ_T · √x cm (x ∈ [2, 3]). Each of the resulting numbers is the standard deviation of a normal distribution around the center position of the cluster!
- [X] Verify how this relates to (see below): at 3 cm the standard deviation is σ = √(6 D t) (3 dim)
With the distributions we expect we now have a few options to generate new events
- simplest and deterministic: push all electrons to the equivalent value of the PDF (longer distance: shallower PDF. Find P(xi) = P'(xi') and move each xi to xi'.
- draw from the P' distribution for each pixel. The resulting x' is the location in distance from existing cluster center to place the pixel at.
- We could maybe somehow generate a 'local' PDF for each pixel (based on how far away each already is) and draw from that. So a mix of 1 and 2?
For now let's go with 2. Simpler to implement, as we don't need to find an equivalent point on the PDF (which would be doable using lowerBound).
- define a gaussian with a mean of the resulting diffusion around that distance (what sigma does it have?) -> Or: define a gaussian of the transverse diffusion coefficient and simply multiply!
- for each pixel, sample it and move the pixel the resulting distance towards / away from the center (depending)
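A minimal sketch of option 2 in Python (hypothetical names, not the actual TimepixAnalysis code; assumes σ_T in μm/√cm and the target conversion point z in cm):

```python
import numpy as np

def resample_diffusion(xs, ys, sigma_T, z_target, rng=None):
    """Redraw each pixel position from the target diffusion PDF: a 2D
    normal around the cluster center with per-axis standard deviation
    sigma_T * sqrt(z_target), where z_target is the drift distance in cm."""
    rng = rng if rng is not None else np.random.default_rng()
    sigma = sigma_T * np.sqrt(z_target)
    cx, cy = np.mean(xs), np.mean(ys)
    # option 2: draw a fresh offset from the target distribution per pixel
    return rng.normal(cx, sigma, len(xs)), rng.normal(cy, sigma, len(ys))
```

Deterministic option 1 would instead map each existing offset through the two CDFs; this version only needs the target σ.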
1.19.1. About diffusion confusion
-> The normal distribution describes the position! See also: file:///home/basti/org/Papers/gas_physics/randomwalkBerg_diffusion.pdf
<x²> = 2 D t (1 dimension)
<x²> = 4 D t (2 dimensions)
<x²> = 6 D t (3 dimensions)
Also look into the Sauli book again (p. 82, eq. (4.5) and eq. (4.6)). Also: file:///home/basti/org/Papers/Hilke-Riegler2020_Chapter_GaseousDetectors.pdf page 15, sec. 4.2.2.2. The latter mentions on page 15 that there is a distinction between:
D = diffusion coefficient, for which σ = √(2 D t) (1 dim) is valid, and
D* = diffusion constant, for which σ = D* √z is valid!
From PyBoltz source code in Boltz.pyx
self.TransverseDiffusion1 = sqrt(2.0 * self.TransverseDiffusion / self.VelocityZ) * 10000.0
which proves the distinction in the paper: √(2 D t) = D* √x ⇔ D* = √(2 D t) / √x = √(2 D t / x) = √(2 D / v) (with x = v t)
Check this with script:
import math
let D  = 4694.9611 * 1e-6 # cm²/s to cm²/μs
let Dp = 644.22619        # μm/√cm
let v  = 22.6248 / 10.0   # mm/μs to cm/μs
echo sqrt(2.0 * D / v) * 10_000.0 # cm to μm, prints ~644.23
Note: my first attempt used a factor 4.0 here, which gives a nonsensical value. The derivation above is the 1-dimensional relation, so the factor must be 2.0; with that the script reproduces Dp = 644.22619 μm/√cm essentially exactly.
1.20.
And continue working on the fake event generation…!
[X] Adjusting the diffusion down to low values (e.g. 400) does not move the fractionInTransverseRms to lower values!
[X] Implement logL and tracking support into runAnalysisChain to really make it do (almost) everything
1.20.1. DONE Figure out why skewness has a systematic bias
About the skewness being non zero: I just noticed that the transverse skewness is always slightly positive, but at the same time the longitudinal skewness is slightly negative by more or less the same amount! Why is that? It's surely some bias in our calculation that has this effect?
About the skewness discussion see: ./LLM_discussions/BingChat/skewness_of_clusters/ the images for the discussion and the code snippets for the generated code.
Based on that I think it is more or less safe to say that, at least algorithmically, our approach should not yield any skewed data. However, why does our fake data still show it? Let's try rotating each point by a random φ and see if it persists.
[X]
I applied this, using a flat rotation angle for the data, and the problem persisted.
After this I looked into the calculation of the geometry and wanted to play around with it. But I immediately found the issue: I was being too overzealous in a recent bug fix. The following commit introduced the problem:
https://github.com/Vindaar/TimepixAnalysis/commit/8d4813d405bf3be6f2e98ef32fe0b1f178cdca01
Here we did not actually define any "axis" for the data, but instead simply took the larger value for each variable as the longitudinal one. That's rubbish of course!
Define the long axis (do it based on length & width!), but then stick to that.
Implemented, and it indeed has the 'desired' result: we now get balanced skewness around 0!
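For reference, the fixed-axis approach can be sketched like this (a hypothetical Python version, not the actual Nim geometry code): the longitudinal axis is defined once from the principal axes (i.e. from length & width) and the skewness is then computed along those fixed axes.

```python
import numpy as np

def axis_skewness(xs, ys):
    """Skewness along the cluster's principal axes. The longitudinal axis
    is fixed once (the larger-eigenvalue axis, the 'length'), instead of
    taking the larger value per variable, which is what biased the result."""
    pts = np.column_stack([xs, ys])
    pts = pts - pts.mean(axis=0)
    evals, evecs = np.linalg.eigh(np.cov(pts.T))  # eigenvalues ascending
    proj = pts @ evecs  # column 0: transverse axis, column 1: longitudinal axis
    def skew(a):
        return np.mean((a - a.mean()) ** 3) / np.std(a) ** 3
    return skew(proj[:, 1]), skew(proj[:, 0])  # (longitudinal, transverse)
```

For a symmetric cluster both values should fluctuate around 0 with no systematic sign.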
1.20.2. TODO Rerun the CAST reconstruction
Due to the skewness (and related) bug we need to regenerate all the data.
In addition we can already look at how the fake data is being handled by the MLP as a preview. -> Ok, it seems like our new fake events are considered "more" signal like (higher cut value) than the real data.
Before recalc, let's check if the other calibration file has the same skewness offset. Maybe it was generated before we introduced the bug? -> Yeah, also seems to have it already.
Some figures of the state as is right now (with skewness bug in real CAST data) can be found here:
./runAnalysisChain -i ~/CastData/data \
  --outpath ~/CastData/data \
  --years 2017 --years 2018 \
  --calib --back \
  --reco --logL --tracking
Finished running, but the tracking wasn't added yet due to the wrong path to the log files!
1.20.3. Important realization about fraction in transverse RMS and normal distribution
I just realized that the fraction in transverse RMS is strongly connected to the probability density within a 1σ region around a bivariate normal distribution! https://en.wikipedia.org/wiki/Multivariate_normal_distribution#Geometric_interpretation
Dimensionality | Probability |
---|---|
1 | 0.6827 |
2 | 0.3935 |
3 | 0.1987 |
4 | 0.0902 |
5 | 0.0374 |
6 | 0.0144 |
7 | 0.0052 |
8 | 0.0018 |
9 | 0.0006 |
10 | 0.0002 |
Our fraction for the fake data is actually closer to the 39.35% value than that of the real 5.9 keV data! I suppose a difference is visible in the first place because we always look at the shorter axis. The actual standard deviation of our cluster is the average of the transverse and the longitudinal RMS after all! So we expect to capture less than the expected 39.35%!
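The 39.35% entry is just the probability mass of a standard bivariate normal within 1σ of its mean, which is quick to check:

```python
import math
import numpy as np

# analytic: P(r <= sigma) for an isotropic 2D normal = 1 - exp(-1/2)
p2d = 1.0 - math.exp(-0.5)  # ≈ 0.3935, the table's 2D entry

# Monte Carlo cross-check with a million standard-normal points
rng = np.random.default_rng(0)
pts = rng.normal(size=(1_000_000, 2))
frac = np.mean(np.hypot(pts[:, 0], pts[:, 1]) <= 1.0)
```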
1.21.
[ ] Rerun the tracking log info!
[ ] Re-evaluate the fake datasets & efficiencies in general using the non-skewed data!
[X] Understand weird filtering leaving some unwanted events in there -> Ohhh, the reason is that we filter on event numbers only, and not the actual properties of the clusters! Because one event can have multiple clusters: one of them will be passing, but the other likely not.
Ok, finally done with all the hick hack of coming up with working fake event generation etc etc.
As it seems right now:
Fake 5.9 keV data using the correct diffusion gives 'identical' results in terms of efficiency as real CAST data. I.e. using the gain fit is correct and useful.
For the 3.0 keV case it is not as simple. The fake data is considered 'more' X-ray like (larger cut values), but quite clearly they don't fit onto the same slope as the 5.9 keV data!
What therefore might be a reasonable option:
- Generate X-ray data for all lines below 5.9 keV
- Use the generated 'runs' to fit the gas gain curve for each dataset
- Use that gas gain curve for each energy range. Lower energy ranges are likely to have somewhat shallower gas gain dependencies? Or maybe it's events with shorter absorption length. We'll have to test.
[ ] Include one other energy, e.g. 930 eV, due to the very low absorption length. See how that behaves.
[ ] Generalize the effective efficiency code to also include other CDL lines
[ ] Rerun effective eff code and place efficiencies and plots somewhere
[ ] Rerun old skewness model with --predict option in train_ingrid
[X] Train a new MLP with the same parameters as the 2500 hidden neuron model, but using the corrected skewness data!
./train_ingrid \
  ~/CastData/data/CalibrationRuns2017_Reco.h5 \
  ~/CastData/data/CalibrationRuns2018_Reco.h5 \
  --back ~/CastData/data/DataRuns2017_Reco.h5 \
  --back ~/CastData/data/DataRuns2018_Reco.h5 \
  --modelOutpath ~/org/resources/nn_devel_mixing/28_03_23_hidden_2500_fixed_skew/trained_model_hidden_2500_fixed_skew.pt \
  --plotPath ~/org/Figs/statusAndProgress/neuralNetworks/development/hidden_2500neurons_fixed_skew/ \
  --numHidden 2500
-> This was done, but we haven't looked at it yet!
[ ] In https://www.youtube.com/watch?v=kCc8FmEb1nY Andrej mentions 3e-4 is a good learning rate for AdamW. We've only tried Adam. Let's try AdamW as well.
1.21.1. DONE Fix the memory corruption bug
I think we managed to fix the memory corruption bug that plagued
us. The code that now also includes a 1 keV data line (fake and CDL)
crashed essentially every single time. In cppstl
we put back the
original destructor code (i.e. that does nothing in Nim land) and
modified the NN code such that it compiles:
We no longer rely on emitTypes, which, as we know, caused issues
because the generated shared_ptr file did not know about
MLPImpl. In order to get anything to work we tried multiple
different things, but in the end the sanest solution seems to be to
write an actual C++ header file for the model definition and then
wrap that using the header
pragma. So the code now looks as
follows:
type
  MLPImpl* {.pure, header: "mlp_impl.hpp", importcpp: "MLPImpl".} = object of Module
    hidden*: Linear
    classifier*: Linear
  MLP* = CppSharedPtr[MLPImpl]

proc init*(T: type MLP): MLP =
  result = make_shared(MLPImpl)
  result.hidden = result.register_module("hidden_module", init(Linear, 13, 500))
  result.classifier = result.register_module("classifier_module", init(Linear, 500, 2))
with the header file:
#include "/home/basti/CastData/ExternCode/flambeau/vendor/libtorch/include/torch/csrc/api/include/torch/torch.h"

struct MLPImpl: public torch::nn::Module {
  torch::nn::Linear hidden{nullptr};
  torch::nn::Linear classifier{nullptr};
};
typedef std::shared_ptr<MLPImpl> MLP;
(obviously the torch path should not be hardcoded). When compiling
this it again generates a .cpp
file for the smartptrs
Nim
module, but now it looks as follows:
#include "nimbase.h"
#include <memory>
#include "mlp_impl.hpp"
#include "/home/basti/CastData/ExternCode/flambeau/vendor/libtorch/include/torch/csrc/api/include/torch/torch.h"
#undef LANGUAGE_C
#undef MIPSEB
#undef MIPSEL
#undef PPC
#undef R3000
#undef R4000
#undef i386
#undef linux
#undef mips
#undef near
#undef far
#undef powerpc
#undef unix
#define nimfr_(x, y)
#define nimln_(x, y)
typedef std::shared_ptr<MLPImpl> TY__oV7GoY52IhMupsxgwx3HYQ;
N_LIB_PRIVATE N_NIMCALL(void, eqdestroy___nn95predict_28445)(TY__oV7GoY52IhMupsxgwx3HYQ& dst__cnkLD5UfZbclV0XFs9bD47w) {
}
so it contains the include required for the type definition, which
makes it all work without any memory corruption now I believe!
Note: it might be a good idea to change the current Flambeau code
to not use emitTypes
but instead to write a header file in the
same form as above (having to include the Torch path!) and then use
the header pragma in the same way I do. This should be pretty simple
to do I believe and it would automate it.
1.22.
We'll continue from yesterday:
[X] Rerun the tracking log info!
[ ] Re-evaluate the fake datasets & efficiencies in general using the non-skewed data!
[X] Include one other energy, e.g. 930 eV, due to the very low absorption length. See how that behaves.
[ ] Generalize the effective efficiency code to also include other CDL lines
[ ] Rerun effective eff code and place efficiencies and plots somewhere
[ ] Rerun old skewness model with --predict option in train_ingrid
[X] In https://www.youtube.com/watch?v=kCc8FmEb1nY Andrej mentions 3e-4 is a good learning rate for AdamW. We've only tried Adam. Let's try AdamW as well.
New TODOs for today:
[ ]
Generate a plot of the cut values that contains all types of data in one with color being the data type
1.22.1. DONE Rerun tracking log info
We only need to rerun the tracking log info, so we can just do:
./runAnalysisChain \
  -i ~/CastData/data \
  --outpath ~/CastData/data \
  --years 2017 --years 2018 \
  --back --tracking
1.22.2. Re-evaluate fake & efficiency data using non-skewed data
We re-ran the code yesterday and again today.
1.22.3. STARTED Train MLP with AdamW
Using the idea from Andrej, let's first train AdamW with a learning rate of 1e-3 and then with 3e-4.
./train_ingrid \
  ~/CastData/data/CalibrationRuns2017_Reco.h5 \
  ~/CastData/data/CalibrationRuns2018_Reco.h5 \
  --back ~/CastData/data/DataRuns2017_Reco.h5 \
  --back ~/CastData/data/DataRuns2018_Reco.h5 \
  --modelOutpath ~/org/resources/nn_devel_mixing/29_03_23_adamW_2500_1e-3/trained_model_adamW_2500_1e-3.pt \
  --plotPath ~/org/Figs/statusAndProgress/neuralNetworks/development/adamW_2500_1e-3/ \
  --numHidden 2500 \
  --learningRate 1e-3
This one seems to achieve:
Train set: Average loss: 0.0002 | Accuracy: 1.000
Test set: Average loss: 0.0132 | Accuracy: 0.9983
Epoch is: 15050
and from here nothing changes anymore. I guess we're pretty much approaching the best possible separation. The model must already be overtrained, with a training accuracy of 1. :O
And the second model with 3e-4:
./train_ingrid \
  ~/CastData/data/CalibrationRuns2017_Reco.h5 \
  ~/CastData/data/CalibrationRuns2018_Reco.h5 \
  --back ~/CastData/data/DataRuns2017_Reco.h5 \
  --back ~/CastData/data/DataRuns2018_Reco.h5 \
  --modelOutpath ~/org/resources/nn_devel_mixing/29_03_23_adamW_2500_3e-4/trained_model_adamW_2500_3e-4.pt \
  --plotPath ~/org/Figs/statusAndProgress/neuralNetworks/development/adamW_2500_3e-4/ \
  --numHidden 2500 \
  --learningRate 3e-4
1.22.4. What plots & numbers to generate
- Verification of the fake data generation
[ ] ridgeline plots comparing real data (as we have it) with
  - fake data using the pure remove-pixels approach
  - fake data using the correct diffusion behavior
- Definition of cut values and gain dependence
[ ] plot showing the 5.9 keV data's linear dependence of gain & cut value
[ ] Numbers for effective efficiency comparing real data & fake diffusion data. The 5.9 keV line matches essentially exactly!
[ ] For each CDL dataset: plot of CDL data + fake data w/ diffusion
[ ] Some plot correlating NN cut value, fake data, gas gain behavior & absorption length. Something something.
- Difference in Run-2 and Run-3 behavior
1.22.5. Fake data generation
It seems like when generating very low energy events (1 keV), the diffusion we simulate is significantly larger than what is seen in the CDL data (consider the length, width and RMS plots).
This is using
df.add ctx.handleFakeData(c, "8_Fake1.0", 0.93, FakeDesc(kind: fkDiffusion, λ: 1.0 / 7.0))
Of course it is quite possible that the 1/7 is not quite right (we don't compute it ourselves yet after all). But even if it was 1/6 or 1/5 it wouldn't change anything significantly. The CDL events are simply quite a bit smaller.
But of course, note that the CDL data has much fewer hits in it than the equivalent fake data. This will likely strongly impact what we would see. The question of course is whether the change is due to fewer electrons or also just less diffusion.
The CDL data was mostly taken with a hotter detector. With rising
temperatures the diffusion should increase though, no? At least
according to the PyBoltz simulation (see
sec. [BROKEN LINK: sec:simulation:diffusion_coefficients_cast] in
StatusAndProgress
).
I'm not quite sure what to make of that.
Let's rerun the fake data gen code for 1 keV with a lower diffusion,
e.g. 540. We generate the plots in /tmp/
The result is slightly too small:
See the RMS and width / length plots.
Let's try 580.
So somewhere in the middle. Maybe 560 or 570.
According to Magboltz (see below) the value should indeed lie somewhere around 660 or so even in the case of 1052 mbar pressure as seen in the CDL data. PyBoltz gives smaller numbers, but still larger than 620.
One interesting thought:
[X]
What does the average length look like for a cluster with even less energy than the 930 eV case? One similar in number of hits to the CDL 930 eV data? -> It clearly seems like at least the RMS values are fully unaffected by having fewer hits. The width and length become slightly smaller, but not significantly enough to be the deciding factor between the CDL data and the fake data.
1.22.6. DONE Testing Magboltz & computing diffusion based on pressure
IMPORTANT: The description on https://magboltz.web.cern.ch/magboltz/usage.html seems to be WRONG. It says the first 'input card' has 3 inputs, but nowadays it apparently has 5.
Compile:
gfortran -o magboltz -O3 magboltz-11.16.f
Argon isobutane test file at 25°C and 787.6 Torr = 1050 mbar and an electric field of 500 V/cm.
2 5 0 1 0.0 2 11 80 80 80 80 97.7 2.3 0.0 0.0 0.0 0.0 25.0 787.6 500.0 0.0 0.0 0
I ran it with different input files now and also ran PyBoltz in different cases.
All cases are the same gas and voltage, and at 25°C.
1050 mbar, 5e7 collisions (same as above):
2 5 0 1 0.0 2 11 80 80 80 80 97.7 2.3 0.0 0.0 0.0 0.0 25.0 787.6 500.0 0.0 0.0 0
./resources/magboltz_results/output_optimized_ar_iso_1050mbar_5e7.txt The main results section:
Z DRIFT VELOCITY = 0.2285E+02 MICRONS/NANOSECOND +- 0.06%
Y DRIFT VELOCITY = 0.0000E+00 MICRONS/NANOSECOND +- 0.00%
X DRIFT VELOCITY = 0.0000E+00 MICRONS/NANOSECOND +- 0.00%
DIFFUSION IN CM**2/SEC.
TRANSVERSE DIFFUSION   = 0.4380D+04 +- 13.96%
                       = 0.95831 EV. +- 13.956%
                       = 619.132 MICRONS/CENTIMETER**0.5 +- 6.98%
LONGITUDINAL DIFFUSION = 0.7908D+03 +- 5.8%
                       = 0.1730 EV. +- 5.84%
                       = 263.090 MICRONS/CENTIMETER**0.5 +- 2.92%
1052 mbar, 5e7 collisions:
2 5 0 1 0.0 2 11 80 80 80 80 97.7 2.3 0.0 0.0 0.0 0.0 25.0 789.0 500.0 0.0 0.0 0
./resources/magboltz_results/output_optimized_ar_iso_1052mbar_5e7.txt Results:
Z DRIFT VELOCITY = 0.2287E+02 MICRONS/NANOSECOND +- 0.06%
Y DRIFT VELOCITY = 0.0000E+00 MICRONS/NANOSECOND +- 0.00%
X DRIFT VELOCITY = 0.0000E+00 MICRONS/NANOSECOND +- 0.00%
DIFFUSION IN CM**2/SEC.
TRANSVERSE DIFFUSION   = 0.3984D+04 +- 9.35%
                       = 0.87119 EV. +- 9.350%
                       = 590.320 MICRONS/CENTIMETER**0.5 +- 4.67%
LONGITUDINAL DIFFUSION = 0.7004D+03 +- 7.8%
                       = 0.1531 EV. +- 7.76%
                       = 247.499 MICRONS/CENTIMETER**0.5 +- 3.88%
1050 mbar, 1e8 collisions:
2 10 0 1 0.0 2 11 80 80 80 80 97.7 2.3 0.0 0.0 0.0 0.0 25.0 787.6 500.0 0.0 0.0 0
./resources/magboltz_results/output_optimized_ar_iso_1050mbar_1e8.txt Results:
Z DRIFT VELOCITY = 0.2286E+02 MICRONS/NANOSECOND +- 0.05%
Y DRIFT VELOCITY = 0.0000E+00 MICRONS/NANOSECOND +- 0.00%
X DRIFT VELOCITY = 0.0000E+00 MICRONS/NANOSECOND +- 0.00%
DIFFUSION IN CM**2/SEC.
TRANSVERSE DIFFUSION   = 0.5027D+04 +- 6.39%
                       = 1.09929 EV. +- 6.394%
                       = 663.109 MICRONS/CENTIMETER**0.5 +- 3.20%
LONGITUDINAL DIFFUSION = 0.8695D+03 +- 11.6%
                       = 0.1901 EV. +- 11.59%
                       = 275.781 MICRONS/CENTIMETER**0.5 +- 5.79%
1052 mbar, 1e8 collisions:
2 10 0 1 0.0 2 11 80 80 80 80 97.7 2.3 0.0 0.0 0.0 0.0 25.0 789.0 500.0 0.0 0.0 0
./resources/magboltz_results/output_optimized_ar_iso_1052mbar_1e8.txt Results:
Z DRIFT VELOCITY = 0.2288E+02 MICRONS/NANOSECOND +- 0.06%
Y DRIFT VELOCITY = 0.0000E+00 MICRONS/NANOSECOND +- 0.00%
X DRIFT VELOCITY = 0.0000E+00 MICRONS/NANOSECOND +- 0.00%
DIFFUSION IN CM**2/SEC.
TRANSVERSE DIFFUSION   = 0.4960D+04 +- 6.39%
                       = 1.08401 EV. +- 6.386%
                       = 658.486 MICRONS/CENTIMETER**0.5 +- 3.19%
LONGITUDINAL DIFFUSION = 0.6940D+03 +- 10.3%
                       = 0.1517 EV. +- 10.26%
                       = 246.304 MICRONS/CENTIMETER**0.5 +- 5.13%
1050 mbar, 3e8 collisions:
2 30 0 1 0.0 2 11 80 80 80 80 97.7 2.3 0.0 0.0 0.0 0.0 25.0 787.6 500.0 0.0 0.0 0
./resources/magboltz_results/output_optimized_ar_iso_1050mbar_3e8.txt Results:
Z DRIFT VELOCITY = 0.2285E+02 MICRONS/NANOSECOND +- 0.02%
Y DRIFT VELOCITY = 0.0000E+00 MICRONS/NANOSECOND +- 0.00%
X DRIFT VELOCITY = 0.0000E+00 MICRONS/NANOSECOND +- 0.00%
DIFFUSION IN CM**2/SEC.
TRANSVERSE DIFFUSION   = 0.5062D+04 +- 2.83%
                       = 1.10750 EV. +- 2.826%
                       = 665.582 MICRONS/CENTIMETER**0.5 +- 1.41%
LONGITUDINAL DIFFUSION = 0.6860D+03 +- 3.8%
                       = 0.1501 EV. +- 3.78%
                       = 245.029 MICRONS/CENTIMETER**0.5 +- 1.89%
1052 mbar, 3e8 collisions:
2 30 0 1 0.0 2 11 80 80 80 80 97.7 2.3 0.0 0.0 0.0 0.0 25.0 789.0 500.0 0.0 0.0 0
./resources/magboltz_results/output_optimized_ar_iso_1052mbar_3e8.txt
Z DRIFT VELOCITY = 0.2286E+02 MICRONS/NANOSECOND +- 0.02%
Y DRIFT VELOCITY = 0.0000E+00 MICRONS/NANOSECOND +- 0.00%
X DRIFT VELOCITY = 0.0000E+00 MICRONS/NANOSECOND +- 0.00%
DIFFUSION IN CM**2/SEC.
TRANSVERSE DIFFUSION   = 0.5016D+04 +- 4.98%
                       = 1.09691 EV. +- 4.982%
                       = 662.394 MICRONS/CENTIMETER**0.5 +- 2.49%
LONGITUDINAL DIFFUSION = 0.7799D+03 +- 5.0%
                       = 0.1705 EV. +- 4.98%
                       = 261.183 MICRONS/CENTIMETER**0.5 +- 2.49%
Compare the transverse diffusion coefficients and their uncertainties. Magboltz is very bad at estimating uncertainties… The final numbers using 3e8 collisions seem to be the most reliable.
PyBoltz does not fare any better; it is actually worse. When running with 5e7 collisions it spits out numbers from 640 (1050 mbar) to 520 (1052 mbar)! Even at 3e8 it says (./../src/python/PyBoltz/examples/test_argon_isobutane.py):
Running with Pressure: 787.6
Input Decor_Colls not set, using default 0
Input Decor_LookBacks not set, using default 0
Input Decor_Step not set, using default 0
Input NumSamples not set, using default 10
Trying 5.6569 Ev for final electron energy - Num analyzed collisions: 29900000
Calculated the final energy = 5.6568542494923815
Velocity  Position  Time  Energy  DIFXX  DIFYY  DIFZZ
22.9   2.0   86520417.1  1.1  2587.6  4848.9    0.0
22.9   4.0  172684674.2  1.1  3408.6  4620.1    0.0
22.9   5.9  259670177.0  1.1  4610.1  4246.2  565.9
22.9   7.9  346216335.9  1.1  4754.5  4376.4  530.8
22.9   9.9  432794455.8  1.1  4330.8  4637.1  589.1
22.9  11.9  519476567.6  1.1  4518.5  4490.4  681.0
22.9  13.9  606130691.2  1.1  4794.2  4499.9  661.0
22.9  15.9  692948117.3  1.1  5149.8  4469.2  687.0
22.9  17.8  779615117.3  1.1  5307.6  4350.5  650.8
22.9  19.8  866106715.1  1.1  5072.8  4376.6  644.3
Running with Pressure: 789.0
Trying 5.6569 Ev for final electron energy - Num analyzed collisions: 29900000
Calculated the final energy = 5.6568542494923815
Velocity  Position  Time  Energy  DIFXX  DIFYY  DIFZZ
22.9   2.0   86667248.5  1.1  3048.9  3483.7    0.0
22.9   4.0  173345020.6  1.1  4088.2  4967.2    0.0
22.9   5.9  260048909.2  1.1  3789.2  5445.7  495.5
22.9   7.9  346875330.4  1.1  4351.8  5540.9  631.9
22.9   9.9  433290608.8  1.1  3928.3  5060.4  979.6
22.9  11.9  519876593.3  1.1  4255.1  4938.9  910.2
22.9  13.9  606223061.1  1.1  4085.9  4610.8  862.2
22.9  15.9  693197771.7  1.1  4050.8  4661.8  818.2
22.9  17.9  780134417.2  1.1  4260.6  4753.5  803.4
22.9  19.9  866928793.5  1.1  4149.1  4876.4  808.5
time taken 1544.6015286445618
α = 0.0
E = 500.0, P = 787.6, V = 22.88327213959691, DT = 4724.663347492957
DT1 = 642.6009590747105, DL = 644.3327626774859, DL1 = 237.30726957910778
α = 0.0
E = 500.0, P = 789.0, V = 22.912436886892305, DT = 4512.734382794393
DT1 = 627.6235672201768, DL = 808.4762345648321, DL1 = 265.65193649208607
642 vs 627. Still a rather massive difference here!
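The DT1 numbers PyBoltz prints are at least internally consistent with the D* = √(2 D / v) relation from above; a quick check with the quoted values:

```python
import math

def dt1_from_dt(DT_cm2_s, v_mm_us):
    """D* = sqrt(2 D / v) converted to μm/√cm, with the diffusion
    coefficient DT in cm²/s and the drift velocity in mm/μs."""
    v_cm_s = v_mm_us * 0.1 * 1e6  # mm/μs -> cm/s
    return math.sqrt(2.0 * DT_cm2_s / v_cm_s) * 1e4  # cm/√cm -> μm/√cm

dt1_from_dt(4724.663347492957, 22.88327213959691)   # ≈ 642.6, matches DT1 above
dt1_from_dt(4512.734382794393, 22.912436886892305)  # ≈ 627.6
```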
All this is very annoying, but we can be sure that higher pressures lead to lower diffusion. The extent to which this is visible in the CDL data though seems to imply that something else is going on at the same time.
1.22.7. TODO Think about rmsTransverse cuts
Christoph used the transverse RMS cuts at about 1.0 or 1.1 as the 'X-ray cleaning cuts'.
In the paper ./Papers/gridpix_energy_dependent_features_diffusion_krieger_1709.07631.pdf he reports a diffusion coefficient of ~470μm/√cm which is way lower than what we get from Magboltz.
With that number and the plots presented in that paper the 1.0 or 1.1 RMS transverse number is justifiable. However, in our data it really seems like we are cutting away some good data for the CDL data when cutting it (or similarly when we apply those cleaning cuts elsewhere).
I'm not sure how sensible that is.
However, one interesting idea would be to look at the 2014/15 data under the same light as done in the effective efficiency tool, i.e. plot a ridgeline of the distributions for escape and photo peaks. Do we reproduce the same RMS transverse numbers that Christoph gets?
One possibility is that our transverse RMS number is calculated differently?
If our code reproduces Christoph's RMS values, the difference lies in the data; if not, it lies in the algorithm.
1.22.8. TODO Look at dependence of NN cut value depending on diffusion coefficient & absorption length
If we make the same plot as for the gas gain but using the diffusion coefficient & the absorption length, but leaving everything else the same, how does the cut value change?
[ ] NN cut value @ desired efficiency vs diffusion coefficient (different coefficient fake data!)
[ ] NN cut value @ desired efficiency vs absorption length (different absorption length fake data!)
[ ] NN cut value @ desired efficiency vs energy at real absorption lengths from fake data
1.23.
Can we make a fit to the rms transverse data of each 55Fe run, then use that upper limit of the transverse RMS to compute the diffusion and finally determine the cut value based on fake data with that diffusion and gas gain?
Goal: Determine NN cut value to use for a given cluster to achieve an efficiency of a desired value.
I have:
- CDL data that can be used to determine cut values for different energy. They have different gas gains and diffusion coefficients.
- Fake data for arbitrary energies, absorption lengths and diffusion coefficients
- real 5.9 keV data at different gas gains and diffusion coefficients.
What do I need: A relationship that maps a cut value from a CDL energy range to a cluster of different diffusion and gas gain.
How do I understand the dependence of diffusion and gas gain on the NN cut value? Use an upper percentile (e.g. the 95th) of rmsTransverse as an easy-to-use proxy for the diffusion in a dataset. Compute that value for each CDL run. Do the same for every 55Fe run. Plot gas gain vs rmsTransverse for all this data.
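The percentile proxy itself is trivial; a sketch (hypothetical data layout, one dict per run, not the actual TimepixAnalysis structures):

```python
import numpy as np

def run_summaries(runs, pct=95):
    """For each run, pair the gas gain with the upper-percentile
    rmsTransverse (the proxy for the run's effective diffusion),
    ready for the gas gain vs rmsTransverse plot."""
    return [(r["gasGain"], np.percentile(r["rmsTransverse"], pct))
            for r in runs]
```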
Thoughts on diffusion & gas gain:
[ ]
INSERT PLOT OF RMS T VS GAS GAIN & CUT VAL
For hidden 2500 with skewed fixed:
- ./Figs/statusAndProgress/neuralNetworks/development/hidden_2500neurons_fixed_skew/rmsTransverse_vs_NN_cutVal.pdf
- ./Figs/statusAndProgress/neuralNetworks/development/hidden_2500neurons_fixed_skew/rmsTransverse_vs_gas_gain.pdf
For AdamW@3e-4 lr:
- ./../../../org/Figs/statusAndProgress/neuralNetworks/development/adamW_2500_3e-4/rmsTransverse_vs_NN_cutVal.pdf
- ./../../../org/Figs/statusAndProgress/neuralNetworks/development/adamW_2500_3e-4/rmsTransverse_vs_gas_gain.pdf
Higher gas gains are associated with longer transverse RMS values and lower NN cut values. Higher gas gains are associated with lower temperatures. Lower temperatures mean higher densities at the same pressure. Higher densities imply shorter absorption lengths. Shorter absorption lengths imply longer drift distances. Longer drift distances imply larger diffusion. Larger diffusion implies larger rms transverse.
So it may not actually be that the gas diffusion changes significantly (or only?), but that the change in density implies a change in average diffusion value.
Keep in mind that the crosses for 3 keV are the escape photons and not real 3 keV data!
The two are certainly related though.
Maybe rms transverse of CDL data is different due to different pressure?
[ ]
Make the same plot but instead of 3 keV escape photons generate 3 keV events at different diffusions
1.23.1. STARTED Train MLP with rmsTransverse cutoff closer to 1.2 - 1.3
This changed the rms transverse cut in the X-ray cleaning cuts and logL cuts to 1.2 to 1.3 (depending on energy range).
To see whether a larger rms transverse in the training data changes the way the model sees something as X-ray.
./train_ingrid \
  ~/CastData/data/CalibrationRuns2017_Reco.h5 \
  ~/CastData/data/CalibrationRuns2018_Reco.h5 \
  --back ~/CastData/data/DataRuns2017_Reco.h5 \
  --back ~/CastData/data/DataRuns2018_Reco.h5 \
  --modelOutpath ~/org/resources/nn_devel_mixing/30_03_23_adamW_2500_3e-4_largerRmsT/trained_model_adamW_2500_3e-4_largerRmsT.pt \
  --plotPath ~/org/Figs/statusAndProgress/neuralNetworks/development/adamW_2500_3e-4_largerRmsT/ \
  --numHidden 2500 \
  --learningRate 3e-4
And with SGD once more:
./train_ingrid \
  ~/CastData/data/CalibrationRuns2017_Reco.h5 \
  ~/CastData/data/CalibrationRuns2018_Reco.h5 \
  --back ~/CastData/data/DataRuns2017_Reco.h5 \
  --back ~/CastData/data/DataRuns2018_Reco.h5 \
  --modelOutpath ~/org/resources/nn_devel_mixing/30_03_23_SGD_2500_3e-4_largerRmsT/trained_model_SGD_2500_3e-4_largerRmsT.pt \
  --plotPath ~/org/Figs/statusAndProgress/neuralNetworks/development/SGD_2500_3e-4_largerRmsT/ \
  --numHidden 2500 \
  --learningRate 3e-4
[ ] Think about introducing a dropout layer? So that we might reduce overtraining, especially in the AdamW case?
[ ] Plot all datasets against the cut value. For the AdamW model different rmsT values are almost independent of the cut value. For SGD it is extremely linear.
[X] Plot raw NN prediction value against all datasets. -> This one is not so helpful (but we still generate it); more useful is a version that only looks at the mean values of the lower, mid and upper 33% quantiles.
Interestingly the different models react quite differently in terms of what affects the cut efficiency!
[ ] Add plots
[ ] Try generating fake data and determining the cut value from that, then use it on 55Fe
[ ] Why not just generate fake data at the energies used in CDL for all runs and use those as reference for the cut?
1.24.
[ ] Verify plotDatasets distributions of all fake data events! -> make this plot comparing CDL & fake data of the same 'kind' by each set
1.24.1. Diffusion from data rmsTransverse
./../CastData/ExternCode/TimepixAnalysis/Tools/determineDiffusion/determineDiffusion.nim
-> Very useful! Using it now to fit the rms transverse dataset to extract the diffusion from real data runs, then generate fake data of a desired energy that matches this diffusion. It seems to match very well!
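The basic idea can be sketched as a rough percentile-based inversion (the actual determineDiffusion tool fits the rmsTransverse distribution instead; names here are hypothetical):

```python
import numpy as np

def estimate_diffusion(rms_transverse_um, drift_cm=3.0, pct=95):
    """Invert sigma = sigma_T * sqrt(z): clusters in the upper tail of
    rmsTransverse converted near the cathode (z ~ drift_cm), so the
    upper percentile divided by sqrt(drift_cm) approximates sigma_T."""
    return np.percentile(rms_transverse_um, pct) / np.sqrt(drift_cm)
```

This slightly underestimates σ_T (the 95th-percentile cluster converted a bit below the cathode), which a proper fit avoids.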
We should make plots of the data distributions for the real vs fake data, but also on a single run by run basis.
1.25.
Interesting observation:
Today we were trying to debug why our sampled data has a mean centerX, and in particular centerY, position that is not centered at ~7 mm (128). It turns out the original real data, from which we sample, has the same bias in the centerY position. Why is that? Don't we apply the same cuts when reading data for fake generation as in the effective efficiency code?
-> The only difference between the two sets of cuts in the data reading is that in the fake data reading we do not apply an energy cut to the data. We only apply the X-ray cleaning cuts!
1.26.
Finally implemented the NN veto with run-local cut values based on fake data in likelihood
[ ]
INSERT FIGURES OF EFFECTIVE EFFICIENCY USING FAKE CUT VALUES
likelihood \
  -f ~/CastData/data/DataRuns2017_Reco.h5 \
  --h5out /tmp/testing/run2_only_mlp_local_0.95.h5 \
  --region crGold \
  --cdlYear 2018 \
  --cdlFile ~/CastData/data/CDL_2019/calibration-cdl-2018.h5 \
  --mlp ~/org/resources/nn_devel_mixing/30_03_23_SGD_2500_3e-4_largerRmsT/trained_model_SGD_2500_3e-4_largerRmsT.pt \
  --nnSignalEff 0.95 \
  --nnCutKind runLocal \
  --calibFile ~/CastData/data/CalibrationRuns2017_Reco.h5 \
  --readonly
Ah, but we cannot directly sample fake events in our current approach for the background datasets, because we still rely on real events as a starting point for the sampling.
Therefore: develop sampling of full events using gas gain and polya sampling.
- check what info is stored in H5 files regarding polya and gas gain -> nothing useful about threshold
- SCurve should in theory tell us something about threshold
[ ]
investigate if all polyas in a single file (Run-2 or Run-3) have the same threshold if determined from data, e.g. using quantile 1
- sample from existing fixed parameter polya and check it looks reasonable
- open the H5 file run by run, look at minima and quantile-1 data for each gas gain slice. -> The minimum is fixed for each set of chip calibrations!
  Run-2: minimum: 1027.450870326596, quantile 1: 1247.965130336289 or 1282.375318596229
  Run-3: minimum: 893.4812944899318, quantile 1: 1014.494678806292 or 1031.681385601604
  -> This is useful! It means we can just sample with fixed cutoffs for each dataset and don't need raw charge data for a run at all, only the gas gain slice fit parameters!
  -> It raises a question though: how does this relate to the number of hits in the photo peak? Lower gain means closer to the cutoff, thus fewer pixels, but we see the opposite? How do we end up with too many pixels?
- plot photo peak hit positions against gas gain
- sample from a polya using parameters as read from the gas gain slices, plot against the polya dataset -> Looks reasonable. Now need to include the cutoff. -> Looks even better. Note that the real data looks ugly due to the equal bin widths, which are not realistic.
Ahh! Idea: we can reuse the gas gain vs energy calibration factor fit! We get the gas gain of a run for which to generate fake data. That gas gain is fed into the fit. The result is a calibration factor that tells us the charge corresponding to a given energy (or its inverse). Place our target energy into the function to get the desired charge. Then sample from a normal distribution around the target charge as the target for each event. Generate pixels until the total charge is close to the target. -> Until the total charge is close, or alternatively: given the gas gain, calculate the number of hits from the target charge, e.g. a target of 600,000 at a gain of 3,000 gives target hits = 600,000 / 3,000 = 200. In that case we can get fewer due to threshold effects, but not more. So: better to do the former? Draw from the target charge distribution and then accumulate pixels until the total drawn is matched?
Question: In the real data, are the total charge and the number of hits strongly correlated? It's important to understand how to go from a deposited energy to a number of electrons. There are multiple reasons to lose electrons and charge from a fixed input energy:
- amplifications below the threshold of a pixel
- ionization yields higher / lower than the average of Wi = 26 eV per electron
We can model the former, but the latter is tricky.
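The generation chain sketched above, energy to target charge via the gain fit and then per-pixel polya samples accumulated until the target is reached, might look like this (a sketch only: the polya is modeled as a gamma distribution, the 5% target smearing is a made-up width, and the calibration factor is approximated as gain times primaries per keV where the real code uses the fitted function):

```python
import numpy as np

rng = np.random.default_rng(0)

W_I = 26.0  # eV per electron-ion pair (average)

def target_charge(energy_kev: float, gain: float) -> float:
    """Expected total charge (in electrons) for a given energy.
    Approximation: gain * number of primary electrons per keV; the real
    code takes this factor from the gas gain vs. calibration factor fit."""
    primaries_per_kev = 1000.0 / W_I   # ~38.5 primary electrons per keV
    return energy_kev * gain * primaries_per_kev

def sample_event(energy_kev: float, gain: float, theta: float,
                 cutoff: float) -> list[float]:
    """Accumulate per-pixel polya samples (gamma distribution with mean = gain)
    until the event's smeared target charge is reached; samples below the
    pixel threshold are dropped, mimicking amplifications below the cutoff."""
    mean = target_charge(energy_kev, gain)
    target = rng.normal(mean, 0.05 * mean)   # made-up 5% smearing
    pixels: list[float] = []
    total = 0.0
    while total < target:
        q = rng.gamma(1.0 + theta, gain / (1.0 + theta))  # polya sample
        if q < cutoff:      # amplification below pixel threshold: electron lost
            continue
        pixels.append(q)
        total += q
    return pixels

pixels = sample_event(5.9, 3000.0, 2.2, cutoff=900.0)  # on the order of 200 pixels
```

This models the threshold loss (first bullet) explicitly; the ionization statistics around Wi (second bullet) would need an extra smearing of the primary electron count.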
1.27.
Meeting with Klaus today:
Main take away: Train an MLP using purely generated data (for X-rays) with:
- an extra input neuron that corresponds to the transverse diffusion of each 'dataset'
- different MLPs, one for each value of the diffusion parameter
The former seems better to me. Just generate data with a uniform distribution over some range of diffusion parameters; then each input actually takes different values. When actually applying the network we only have a few distinct values (one per run) of course, but that's fine.
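A sketch of the parameter sampling for that training data, using the ranges quoted in the entry below; the point is that σT is drawn uniformly per fake event and attached as an extra input feature:

```python
import numpy as np

rng = np.random.default_rng(1)

# parameter ranges used for fake-event generation (from this journal entry)
GAIN_RANGE    = (2400.0, 4500.0)
SIGMA_T_RANGE = (550.0, 700.0)    # transverse diffusion
THETA_RANGE   = (2.1, 2.4)

def sample_event_params(n: int) -> np.ndarray:
    """Draw (G, sigma_T, theta) uniformly for n fake events. sigma_T is
    later appended to each event's geometric features as an input neuron."""
    g  = rng.uniform(*GAIN_RANGE, n)
    st = rng.uniform(*SIGMA_T_RANGE, n)
    th = rng.uniform(*THETA_RANGE, n)
    return np.stack([g, st, th], axis=1)

params = sample_event_params(10_000)
```

At application time the network then simply receives the (single) diffusion value determined for each real run in that input slot.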
Start a training run with the σT dataset!
With 100,000 generated events for each calibration file (why?). Still using the fixed cutoff of Run-3! This also uses uniform distributions in G = 2400 .. 4500, σT = 550 .. 700 and θ = 2.1 .. 2.4.
./train_ingrid \
  ~/CastData/data/CalibrationRuns2017_Reco.h5 \
  ~/CastData/data/CalibrationRuns2018_Reco.h5 \
  --back ~/CastData/data/DataRuns2017_Reco.h5 \
  --back ~/CastData/data/DataRuns2018_Reco.h5 \
  --modelOutpath ~/org/resources/nn_devel_mixing/12_04_23_sgd_sim_data_diffusion/trained_mlp_sgd_sim_data_diffusion.pt \
  --plotPath ~/Sync/sgd_sim_data_diffusion/ \
  --numHidden 2500 \
  --datasets eccentricity \
  --datasets skewnessLongitudinal \
  --datasets skewnessTransverse \
  --datasets kurtosisLongitudinal \
  --datasets kurtosisTransverse \
  --datasets length \
  --datasets width \
  --datasets rmsLongitudinal \
  --datasets rmsTransverse \
  --datasets lengthDivRmsTrans \
  --datasets rotationAngle \
  --datasets fractionInTransverseRms \
  --datasets σT \
  --learningRate 3e-4 \
  --simulatedData
At first glance it seems like the cut values determined from generating more fake data with the σT and gain of the real runs yield effective efficiencies that are all too high (95%, with only some CDL runs approaching the 80% target).
Now with normal distribution in G and σT:
./train_ingrid \
  ~/CastData/data/CalibrationRuns2017_Reco.h5 \
  ~/CastData/data/CalibrationRuns2018_Reco.h5 \
  --back ~/CastData/data/DataRuns2017_Reco.h5 \
  --back ~/CastData/data/DataRuns2018_Reco.h5 \
  --modelOutpath ~/org/resources/nn_devel_mixing/12_04_23_sgd_sim_gauss_diffusion/trained_mlp_sgd_sim_gauss_diffusion.pt \
  --plotPath ~/Sync/sgd_sim_gauss_diffusion/ \
  --numHidden 2500 \
  --datasets eccentricity \
  --datasets skewnessLongitudinal \
  --datasets skewnessTransverse \
  --datasets kurtosisLongitudinal \
  --datasets kurtosisTransverse \
  --datasets length \
  --datasets width \
  --datasets rmsLongitudinal \
  --datasets rmsTransverse \
  --datasets lengthDivRmsTrans \
  --datasets rotationAngle \
  --datasets fractionInTransverseRms \
  --datasets σT \
  --learningRate 3e-4 \
  --simulatedData
-> Continue training this if it proves useful. There is still a clear downward trend in the loss!
In addition it may be a good idea to also try it with the gain as another input?
./train_ingrid \
  ~/CastData/data/CalibrationRuns2017_Reco.h5 \
  ~/CastData/data/CalibrationRuns2018_Reco.h5 \
  --back ~/CastData/data/DataRuns2017_Reco.h5 \
  --back ~/CastData/data/DataRuns2018_Reco.h5 \
  --modelOutpath ~/org/resources/nn_devel_mixing/12_04_23_sgd_sim_diffusion_gain/trained_mlp_sgd_sim_diffusion_gain.pt \
  --plotPath ~/Sync/sgd_sim_diffusion_gain/ \
  --numHidden 2500 \
  --datasets eccentricity \
  --datasets skewnessLongitudinal \
  --datasets skewnessTransverse \
  --datasets kurtosisLongitudinal \
  --datasets kurtosisTransverse \
  --datasets length \
  --datasets width \
  --datasets rmsLongitudinal \
  --datasets rmsTransverse \
  --datasets lengthDivRmsTrans \
  --datasets rotationAngle \
  --datasets fractionInTransverseRms \
  --datasets σT \
  --datasets gasGain \
  --learningRate 3e-4 \
  --simulatedData
1.28.
[ ]
Understand the effective efficiency with the gas gain parameter of FakeDesc, e.g. rerun the gauss diffusion network again after changing the code.[ ]
verify the number of neighboring pixels that are actually active anywhere! -> We'll write a short script that extracts that information from a given file.
[ ]
Correlate with gas gain and extracted diffusion![ ]
Compare with generated fake data without artificial neighbor activation and with![ ]
How do neighbors relate in their charge?
In ./../CastData/ExternCode/TimepixAnalysis/Tools/countNeighborPixels:
./countNeighborPixels -f ~/CastData/data/CalibrationRuns2017_Reco.h5
Average neighbors in run 110 = 55.09144736842105
Average neighbors in run 175 = 57.15461200585651
Average neighbors in run 122 = 47.4565240584658
Average neighbors in run 126 = 52.52208235545125
Average neighbors in run 183 = 59.16529930112428
Average neighbors in run 161 = 66.33755847291384
Average neighbors in run 116 = 51.58539765319426
Average neighbors in run 155 = 68.88583638583638
Average neighbors in run 173 = 57.67820710973725
Average neighbors in run 151 = 66.58298001211386
Average neighbors in run 153 = 68.5710128055879
Average neighbors in run 108 = 55.27580484566877
Average neighbors in run 93 = 55.01024327784891
Average neighbors in run 147 = 61.76867469879518
Average neighbors in run 179 = 63.32529743268628
Average neighbors in run 159 = 66.94385479157053
Average neighbors in run 163 = 63.48872858431019
Average neighbors in run 118 = 52.85084521047398
Average neighbors in run 102 = 54.66866666666667
Average neighbors in run 177 = 56.74977000919963
Average neighbors in run 181 = 60.51956253850894
Average neighbors in run 165 = 59.47996965098634
Average neighbors in run 167 = 61.3946587537092
Average neighbors in run 185 = 57.19969558599696
Average neighbors in run 149 = 67.73385167464114
Average neighbors in run 157 = 69.89698937426211
Average neighbors in run 187 = 58.77159520807061
Average neighbors in run 171 = 59.13408330799635
Average neighbors in run 128 = 52.05005608524958
Average neighbors in run 169 = 58.22026431718061
Average neighbors in run 145 = 61.53792611101196
Average neighbors in run 83 = 51.74536148432502
Average neighbors in run 88 = 46.48597521200261
Average neighbors in run 120 = 49.65804645033767
Average neighbors in run 96 = 51.49257633765991
./countNeighborPixels -f ~/CastData/data/CalibrationRuns2018_Reco.h5
Average neighbors in run 304 = 83.96286764705883
Average neighbors in run 286 = 83.78564713256033
Average neighbors in run 294 = 84.31031159653068
Average neighbors in run 277 = 89.82936363636364
Average neighbors in run 241 = 81.1551888289432
Average neighbors in run 284 = 89.4854306756324
Average neighbors in run 260 = 85.9630966706779
Average neighbors in run 255 = 88.56640625
Average neighbors in run 292 = 85.56208945886769
Average neighbors in run 288 = 80.55608820709492
Average neighbors in run 247 = 77.27826358525921
Average neighbors in run 257 = 88.13119266055045
Average neighbors in run 239 = 73.20087064676616
Average neighbors in run 302 = 86.77864992150707
Average neighbors in run 249 = 79.53715365239294
Average neighbors in run 271 = 79.5505486808312
Average neighbors in run 300 = 90.83248730964468
Average neighbors in run 296 = 85.71424050632912
Average neighbors in run 243 = 80.90277344967279
Average neighbors in run 264 = 83.18354637823664
Average neighbors in run 280 = 88.5948709880428
Average neighbors in run 253 = 85.09293373659609
Average neighbors in run 251 = 77.57475909232204
Average neighbors in run 262 = 82.7622203811102
Average neighbors in run 290 = 79.6481004507405
Average neighbors in run 275 = 89.19542053956019
Average neighbors in run 269 = 82.60137931034483
Average neighbors in run 266 = 83.25234248788368
Average neighbors in run 273 = 81.36689741976086
Average neighbors in run 245 = 78.7516608668143
Average neighbors in run 259 = 86.49635416666666
Average neighbors in run 282 = 90.84639199809479
And using fake data, for the case of no simulated neighbors: Run-2
./countNeighborPixels -f ~/CastData/data/CalibrationRuns2017_Reco.h5 --fake
Average neighbors in run 161 = 38.16306522609044
Average neighbors in run 128 = 29.87515006002401
Average neighbors in run 183 = 34.27851140456183
Average neighbors in run 185 = 33.6796718687475
Average neighbors in run 88 = 28.7222
Average neighbors in run 179 = 34.7138
Average neighbors in run 187 = 32.83693477390956
Average neighbors in run 155 = 39.0502
Average neighbors in run 163 = 35.952
Average neighbors in run 118 = 30.9526
Average neighbors in run 171 = 32.7596
Average neighbors in run 126 = 29.8646
Average neighbors in run 151 = 37.3554
Average neighbors in run 169 = 32.7694
Average neighbors in run 120 = 28.37575030012005
Average neighbors in run 102 = 30.4712
Average neighbors in run 159 = 37.93177270908363
Average neighbors in run 153 = 38.0882
Average neighbors in run 157 = 40.6742
Average neighbors in run 167 = 34.4742
Average neighbors in run 96 = 29.2946
Average neighbors in run 175 = 33.7786
Average neighbors in run 177 = 33.1542
Average neighbors in run 93 = 31.7502
Average neighbors in run 116 = 29.2908
Average neighbors in run 83 = 29.9446
Average neighbors in run 145 = 34.6312
Average neighbors in run 147 = 34.9114
Average neighbors in run 108 = 31.13945578231293
Average neighbors in run 122 = 27.9868
Average neighbors in run 181 = 34.3359343737495
Average neighbors in run 165 = 32.90716286514606
Average neighbors in run 173 = 32.5428
Average neighbors in run 149 = 38.0322
Average neighbors in run 110 = 31.2116
Run-3
./countNeighborPixels -f ~/CastData/data/CalibrationRuns2018_Reco.h5 --fake
Average neighbors in run 284 = 41.75550220088035
Average neighbors in run 259 = 39.1072
Average neighbors in run 292 = 39.506
Average neighbors in run 286 = 39.2786
Average neighbors in run 239 = 37.4598
Average neighbors in run 288 = 37.3952
Average neighbors in run 251 = 36.9704
Average neighbors in run 255 = 40.5622
Average neighbors in run 262 = 39.1088
Average neighbors in run 260 = 40.5608
Average neighbors in run 294 = 38.376
Average neighbors in run 280 = 40.51660664265706
Average neighbors in run 271 = 37.5112
Average neighbors in run 296 = 39.0084
Average neighbors in run 275 = 40.3642
Average neighbors in run 269 = 38.63645458183273
Average neighbors in run 302 = 38.698
Average neighbors in run 304 = 38.2386
Average neighbors in run 266 = 37.8624
Average neighbors in run 243 = 37.6958
Average neighbors in run 264 = 39.4124
Average neighbors in run 257 = 40.6936
Average neighbors in run 282 = 41.2402
Average neighbors in run 290 = 37.7166
Average neighbors in run 277 = 40.49369369369369
Average neighbors in run 253 = 38.238
Average neighbors in run 249 = 38.3926
Average neighbors in run 273 = 38.073
Average neighbors in run 247 = 37.3596
Average neighbors in run 245 = 36.2114
Average neighbors in run 241 = 37.1948
Average neighbors in run 300 = 39.7052
So the number of neighbors:
| Period | Type | ~Neighbors per event |
|---|---|---|
| Run-2 | Real | 50-60 |
| Run-3 | Real | 80-90 |
| Run-2 | Fake | 30-35 |
| Run-3 | Fake | 35-40 |
By activating neighbor sharing we can push those numbers up, but that's for later.
Next: Plot charge of pixels with neighbors against something.
Note: For run 288 the following:
if charge > 3500.0: # whatever
  # possibly activate a neighbor pixel!
  let activateNeighbor = rnd.rand(1.0) < 0.5
  if activateNeighbor:
    let neighbor = rand(3) # [up, down, right, left]
    let totNeighbor = rnd.sample(psampler) / 2.0 # reduce amount
    case neighbor
yields very good agreement already, with the exception of the hits data if gain.G * 0.85 is not used (but G itself instead).
1.29.
In ./../CastData/ExternCode/TimepixAnalysis/Tools/countNeighborPixels we can now also produce histograms of the ToT values / charges recorded by the center chip (split by the number of neighbors of a pixel). Three different versions: density of ToT, raw charge and density of charge.
The three are important because of the non-linearity of the ToT to charge conversion. The question is: which distribution should we really sample from to get the correct distribution of charges seen by the detector?
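The three versions differ because of the usual change-of-variables rule: for a monotonic ToT calibration $t = f(q)$, the densities are related by

```latex
p_{\mathrm{ToT}}(t) \;=\; p_Q\!\big(f^{-1}(t)\big)\,
  \left|\frac{\mathrm{d}}{\mathrm{d}t} f^{-1}(t)\right|
```

so a distribution that is polya-like in charge need not look polya-like in ToT (and vice versa); the Jacobian reweights the density wherever the calibration is non-linear.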
Running:
./countNeighborPixels -f ~/CastData/data/CalibrationRuns2017_Reco.h5 --chargePlots
yields the plots in:
We can see that the density based version of the charge histograms looks very much not like a polya. The ToT histogram looks somewhat like one, and the raw charge one looks closest, I would say.
The other thing we see here is that the real data shows a shift to larger charges for the data with more neighbors. The effect is not extreme, but visible. Much more so in the Run-3 data than in the Run-2 data though, which matches our expectations (see table from yesterday).
For the fake data:
./countNeighborPixels -f ~/CastData/data/CalibrationRuns2018_Reco.h5 --chargePlots --fake
yields: Figs/statusAndProgress/gasGainAndNeighbors/charges_neighbors_fake.pdf
At the very least the distributions currently generated do not match. Multiple reasons:
- our scaling of gas gain using G * 0.85 is bad (yes)
- currently we're sampling from a polya of the ToT values
We will now study the gas gains on a specific run, say 241. We'll try to imitate the look of the real 241 data in the fake data. Using:
./countNeighborPixels -f ~/CastData/data/CalibrationRuns2018_Reco.h5 --chargePlots --fake --run 241
we'll make changes and try to get it to look better.
Starting point:
#let gInv = invert(gain.G * 0.85, calibInfo)
let gInv = invert(gain.G, calibInfo) # not using scaling
# ...
let ToT = rnd.sample(psampler) # sampling from ToT
# and no neighbor activation logic
pdfunite *neighbor_run_charges_241* ~/org/Figs/statusAndProgress/gasGainAndNeighbors/run_241_comparison_start.pdf
First step, what does our 0.85 scaling actually do?
let gInv = invert(gain.G * 0.85, calibInfo)
# ...
let ToT = rnd.sample(psampler) # sampling from ToT
# and no neighbor activation logic
./countNeighborPixels -f ~/CastData/data/CalibrationRuns2018_Reco.h5 --chargePlots --fake --run 241
pdfunite *neighbor_run_charges_241* ~/org/Figs/statusAndProgress/gasGainAndNeighbors/run_241_comparison_0.85_scaling.pdf
As expected, it makes the gains a bit smaller.
Notice how the ToT histogram of the real data has a much shorter tail than the fake data. At ToT = 150 it is effectively 0, but fake data still has good contribution there! Try scaling further down to see what that does.
Now with a 0.6 scaling:
let gInv = invert(gain.G * 0.6, calibInfo)
# ...
let ToT = rnd.sample(psampler) # sampling from ToT
# and no neighbor activation logic
./countNeighborPixels -f ~/CastData/data/CalibrationRuns2018_Reco.h5 --chargePlots --fake --run 241
pdfunite *neighbor_run_charges_241* ~/org/Figs/statusAndProgress/gasGainAndNeighbors/run_241_comparison_0.6_scaling.pdf
The tail now looks more correct (almost zero at 150 ToT in fake), but:
- the peak is way too far left compared to real data
[X]
why does the real data have ToT values down to 0, but the generated data has a sharp cutoff at ~10 or so?
charge 3516.613062845426 from ToT 64.00852228399629 for 1 is 893.4812944899318
charge 1405.389895032497 from ToT 26.41913859339533 for 1 is 893.4812944899318
charge 6083.063184722204 from ToT 88.29237432771937 for 1 is 893.4812944899318
This matches my expectation, but not the data. Ah! It's because of our 1.15 scaling, no?
charge 2295.67900197231 from ToT 47.40724170682395 for 1 is 893.4812944899318 for cutoff: 1027.503488663421 inverted cutoff: 9.0
charge 4940.614026607494 from ToT 78.30781008751393 for 1 is 893.4812944899318 for cutoff: 1027.503488663421 inverted cutoff: 9.0
charge 1318.261093845941 from ToT 23.29152025807923 for 1 is 893.4812944899318 for cutoff: 1027.503488663421 inverted cutoff: 9.0
charge 2643.805171369868 from ToT 52.90409160403075 for 1 is 893.4812944899318 for cutoff: 1027.503488663421 inverted cutoff: 9.0
charge 2102.98283203606 from ToT 43.92681742421028 for 1 is 893.4812944899318 for cutoff: 1027.503488663421 inverted cutoff: 9.0
Exactly!
The ToT behavior makes me worried in one sense: I really feel like the issue is that certain pixels have different thresholds, which makes the distribution so ugly.
Next, go back to sampling from actual polya and see what that looks like (without any scaling of gas gain):
let params = @[gain.N, gain.G, gain.theta]
let psampler = initPolyaSampler(params, frm = 0.0, to = 20000.0) #invert(20000.0, calibInfo))
# ...
let charge = rnd.sample(psampler) # now sampling a charge from the polya
let ToT = invert(charge, calibInfo)
./countNeighborPixels -f ~/CastData/data/CalibrationRuns2018_Reco.h5 --chargePlots --fake --run 241
pdfunite *neighbor_run_charges_241* ~/org/Figs/statusAndProgress/gasGainAndNeighbors/run_241_comparison_from_polya.pdf
This does look much more realistic! So sampling from the real polya seems more sensible after all I think. Maybe it's a bit too wide though?
Let's look at scaling the theta parameter down by 30% (or 50%):
let params = @[gain.N, gain.G, gain.theta * 0.7] # 0.5
let psampler = initPolyaSampler(params, frm = 0.0, to = 20000.0)
# ...
let charge = rnd.sample(psampler) # now sampling a charge from the polya
let ToT = invert(charge, calibInfo)
./countNeighborPixels -f ~/CastData/data/CalibrationRuns2018_Reco.h5 --chargePlots --fake --run 241
pdfunite *neighbor_run_charges_241* ~/org/Figs/statusAndProgress/gasGainAndNeighbors/run_241_comparison_polya_theta_smaller30.pdf
In the 50% version of the charge density plot a difference becomes visible. The behavior towards the peak on the left is different: the peak moves to slightly smaller values, approaching the cutoff more, but the width towards the tail does not really change.
In case the individual pixel thresholds are very different, we could approximate that with an exponential activation probability: a 100% chance of not activating a pixel at the cutoff, and an exponentially increasing activation chance approaching the peak.
We implemented this by using the function:
proc expShift(x: float, u, l, p: float): float =
  ## Shifted exponential distribution that satisfies:
  ## f(l) = p, f(u) = 1.0
  let λ = (u - l) / ln(p)
  result = exp(- (x - u) / λ)
# ...
## XXX: make dependent on Run-2 or Run-3 data!
const cutoffs = @[1027.450870326596, # Run-2
                  893.4812944899318] # Run-3
let cutoff = cutoffs[1] * 1.15
let actSampler = (proc(rnd: var Rand, x: float): bool =
  let activateThreshold = expShift(x, gain.G, cutoff, 1e-1)
  result = x > cutoff and rnd.rand(1.0) < activateThreshold
  echo "Charge ", x, " threshold: ", activateThreshold, " is ", result, " and cutoff ", cutoff
)
let activatePixel = actSampler(rnd, charge)
if not activatePixel: #charge < cutoff:
  continue
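As a quick standalone check that the shifted exponential really satisfies f(l) = p and f(u) = 1, here is a direct Python transcription of the proc above (gain value picked arbitrarily):

```python
import math

def exp_shift(x: float, u: float, l: float, p: float) -> float:
    """Shifted exponential activation chance with f(l) = p and f(u) = 1."""
    lam = (u - l) / math.log(p)
    return math.exp(-(x - u) / lam)

cutoff = 893.4812944899318 * 1.15   # Run-3 cutoff, scaled as in the code above
gain = 3000.0                       # arbitrary example gain

assert abs(exp_shift(gain, gain, cutoff, 1e-1) - 1.0) < 1e-12   # f(u) = 1
assert abs(exp_shift(cutoff, gain, cutoff, 1e-1) - 1e-1) < 1e-12  # f(l) = p
```

Between cutoff and gain the chance rises monotonically from p to 1, which is exactly the behavior the sampler above relies on.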
This is way too extreme:
pdfunite *neighbor_run_charges_241* ~/org/Figs/statusAndProgress/gasGainAndNeighbors/run_241_comparison_polya_exp_activation.pdf
the latter with p = 0.3 instead of 0.1.
I think this is the wrong approach. It pushes us even more strongly to larger values, while in the real charge density plot we need to be lower rather than higher.
Not using an exponential cutoff, but modifying both the gas gain and theta parameters yields the best result. But it's still ugly. Fortunately our latest network does not rely on the total charge anymore.
Let's run the effective efficiency for the dataset plots of run 241 comparison:
./effective_eff_55fe \ ~/CastData/data/CalibrationRuns2018_Reco.h5 \ --model ~/org/resources/nn_devel_mixing/12_04_23_sgd_sim_diffusion_gain/trained_mlp_sgd_sim_diffusion_gain.pt \ --ε 0.8 \ --cdlFile ~/CastData/data/CDL_2019/CDL_2019_Reco.h5 \ --evaluateFit \ --plotDatasets \ --plotPath ~/Sync/sgd_sim_diffusion_gain/effective_eff \ --run 241
Fake:
- too many hits
- energy mismatch
- eccentricity way too small!
- charge a bit too large
-> very bad match. Some of it is due to not using neighbors. But realistically, at this gas gain we cannot justify more hits! So we need a higher gain to get fewer hits again.
Using
let params = @[gain.N, gain.G * 0.75, gain.theta / 3.0]
yields good agreement in the histograms, while showing similar issues as above.
But first let's try to look at neighbors again:
if charge > 3500.0: # whatever
  # possibly activate a neighbor pixel!
  let activateNeighbor = rnd.rand(1.0) < 0.5
  if activateNeighbor:
    let neighbor = rand(3) # [right, left, up, down]
    let totNeighbor = rnd.sample(psampler) / 2.0 # reduce amount
    case neighbor
    of 0: insert(xp + 1, yp, totNeighbor)
    of 1: insert(xp - 1, yp, totNeighbor)
    of 2: insert(xp, yp + 1, totNeighbor)
    of 3: insert(xp, yp - 1, totNeighbor)
    else: doAssert false
    totalCharge += calib(totNeighbor, calibInfo)
insert(xp, yp, ToT)
With the same gain settings as above, this yields obvious issues: the neighbor histogram shows stark jumps (expected, now that I think about it), and the eccentricity is still too low while the energy is already a bit too high.
Next: let's implement a strategy for smoother activation of neighbors. Let's go with a linear activation chance from 0 to 1 between 1000 and 10000 electrons and see what it looks like.
let neighSampler = (proc(rnd: var Rand, x: float): bool =
  let m = 1.0 / 9000.0
  let activateThreshold = m * x - 1000 * m
  result = rnd.rand(1.0) < activateThreshold
)
# ...
if neighSampler(rnd, charge): # charge > 3500.0: # whatever
  # possibly activate a neighbor pixel!
  #let activateNeighbor = rnd.rand(1.0) < 0.5
  let chargeNeighbor = rnd.sample(psampler) / 2.0 # reduce amount
  let activateNeighbor = actSampler(rnd, chargeNeighbor)
  if true: # activateNeighbor:
    let neighbor = rand(3) # [right, left, up, down]
    #let chargeNeighbor = rnd.sample(psampler) / 2.0 # reduce amount
    let totNeighbor = invert(chargeNeighbor, calibInfo)
    case neighbor
    of 0: insert(xp + 1, yp, totNeighbor)
    of 1: insert(xp - 1, yp, totNeighbor)
    of 2: insert(xp, yp + 1, totNeighbor)
    of 3: insert(xp, yp - 1, totNeighbor)
    else: doAssert false
    totalCharge += chargeNeighbor
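The linear ramp can be checked in isolation (same slope and offset as the Nim snippet; the clamping outside the 1000 to 10000 electron window is added here for illustration, the Nim version gets it implicitly from the rand(1.0) comparison):

```python
def neighbor_activation_chance(charge: float) -> float:
    """Linear activation chance: 0 at 1000 electrons, 1 at 10000 electrons."""
    m = 1.0 / 9000.0
    p = m * charge - 1000.0 * m
    return min(1.0, max(0.0, p))  # clamp to a valid probability

assert neighbor_activation_chance(1000.0) == 0.0
assert neighbor_activation_chance(10000.0) == 1.0
assert abs(neighbor_activation_chance(5500.0) - 0.5) < 1e-12
```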
This yields:
./countNeighborPixels -f ~/CastData/data/CalibrationRuns2018_Reco.h5 --chargePlots --fake --run 241 ./effective_eff_55fe ~/CastData/data/CalibrationRuns2018_Reco.h5 --model ~/org/resources/nn_devel_mixing/12_04_23_sgd_sim_diffusion_gain/trained_mlp_sgd_sim_diffusion_gain.pt --ε 0.8 --cdlFile ~/CastData/data/CDL_2019/CDL_2019_Reco.h5 --evaluateFit --plotDatasets --plotPath ~/Sync/sgd_sim_diffusion_gain/effective_eff --run 241
Looking better. The neighbor distribution is more sensible; in particular, in the ToT plot we can see a similar drift for larger neighbor counts as in the real data. However, the energy is now even larger and the number of hits way too large (peak at 300 compared to 260).
The total charge and energy being too large is probably simply an issue with our gas gain. Given that we don't use the charge or energy (but the gain, in one network!) it's not that big an issue. But the number of hits had better be believable.
Given that we probably still have too few high-neighbor cases, too low an eccentricity and too many hits, this likely implies:
- add higher neighbor cases with lower chance
- scale down the target charge by our modification of the gain?
The former first implementation:
let neighSampler = (proc(rnd: var Rand, x: float): int =
  ## Returns the number of neighbors to activate!
  let m = 1.0 / 9000.0
  let activateThreshold = m * x - 1000 * m
  let val = rnd.rand(1.0)
  if val * 4.0 < activateThreshold: result = 4
  elif val * 3.0 < activateThreshold: result = 3
  elif val * 2.0 < activateThreshold: result = 2
  elif val < activateThreshold: result = 1
  else: result = 0
  #result = rnd.rand(1.0) < activateThreshold
)
# ...
let numNeighbors = neighSampler(rnd, charge)
if numNeighbors > 0: # charge > 3500.0: # whatever
  # possibly activate a neighbor pixel!
  #let activateNeighbor = rnd.rand(1.0) < 0.5
  var count = 0
  type Neighbor = enum
    Right, Left, Up, Down
  var seen: array[Neighbor, bool]
  while count < numNeighbors:
    let chargeNeighbor = rnd.sample(psampler) / 2.0 # reduce amount
    #let activateNeighbor = actSampler(rnd, chargeNeighbor)
    let neighbor = block:
      var num = Neighbor(rnd.rand(3))
      while seen[num]:
        num = Neighbor(rnd.rand(3)) # [right, left, up, down]
      num
    seen[neighbor] = true
    #let chargeNeighbor = rnd.sample(psampler) / 2.0 # reduce amount
    let totNeighbor = invert(chargeNeighbor, calibInfo)
    case neighbor
    of Right: insert(xp + 1, yp, totNeighbor)
    of Left: insert(xp - 1, yp, totNeighbor)
    of Up: insert(xp, yp + 1, totNeighbor)
    of Down: insert(xp, yp - 1, totNeighbor)
    totalCharge += chargeNeighbor
    insert(xp, yp, ToT)
    totalCharge += charge
    inc count
Yeah, a look at the count histogram shows this is clearly rubbish. At the very least we managed to produce something significantly more eccentric than the real data, which yet has significantly fewer hits! Sigh, bug in the insertion code…! The insert(xp, yp, ToT) shouldn't be in that part (inside the neighbor loop)!
This is looking halfway reasonable for the neighbor histograms! Wow, this looks pretty convincing in the property histograms! Aside from still having too many hits, it looks almost perfect.
Let's be so crazy and run the effective efficiencies on all runs:
./effective_eff_55fe \
  ~/CastData/data/CalibrationRuns2018_Reco.h5 \
  --model ~/org/resources/nn_devel_mixing/12_04_23_sgd_sim_gauss_diffusion/trained_mlp_sgd_sim_gauss_diffusion.pt \
  --ε 0.8 \
  --cdlFile ~/CastData/data/CDL_2019/CDL_2019_Reco.h5 \
  --evaluateFit \
  --plotDatasets \
  --plotPath ~/Sync/sgd_sim_gauss_diffusion/effective_eff

./effective_eff_55fe \
  ~/CastData/data/CalibrationRuns2018_Reco.h5 \
  --model ~/org/resources/nn_devel_mixing/12_04_23_sgd_sim_diffusion_gain/trained_mlp_sgd_sim_diffusion_gain.pt \
  --ε 0.8 \
  --cdlFile ~/CastData/data/CDL_2019/CDL_2019_Reco.h5 \
  --evaluateFit \
  --plotDatasets \
  --plotPath ~/Sync/sgd_sim_diffusion_gain/effective_eff
For the gauss diffusion network the results are not as good as I hoped. But there is a high chance the numbers are also bad because the training data did not look the way it does now. We didn't run the network that knows the gain, due to the problem of keeping the gain equivalent between real and fake data.
Let's look at the worst performing run, 282, from this plot: in this case the eccentricity is still too small compared to the real data.
Let's first fix the number of hits we see.
let gainToUse = gain.G * 0.75
let calibFactor = linearFunc(@[calibInfo.bL, calibInfo.mL], gainToUse) * 1e-6
We introduced this variable, which is now used everywhere instead of gain.G. There were still some places that only used gain.G!
(Note: oh, we achieved an efficiency of 81% on this run with the gain diffusion network.)
Takeaway:
- energy way too low
- hits way too low (190 compared to 260)
- eccentricity slightly too large
(Note: weird: changing the gain in the reconstruct fake event call to the gainInfo.G * 0.75 we currently use makes the effective efficiency only 18%, but the energy histogram does not change?? Ahh, it's because the gain is an input neuron! That of course breaks the expectations of the trained network. This implies the same change for the network without gain should yield a good result. -> Checked, it does. Bigger question: why does the energy reconstruction still yield such a low energy? Shouldn't that change if we change the apparent gain? -> OHH, it's because we use the hardcoded gain from the calibInfo argument in the computeEnergy function! Updating… Yup, the energy is looking perfect now, but of course the effective efficiency is still completely off in this network!)
So, the next target is getting the hits in order. The reason must be that our neighbor ToT/charge histograms have a longer tail than the real data. Hmm, but the tail looks almost identical now…
I tried different gain values after all and played around with counting or not counting the added neighbor pixels in the total charge. It seems to me that going back to the original gain is the better solution. We still produce too few hits (to be thought about more), but at least we natively get the correct energy & charge. The histograms of the gain curves also don't look so horrible with our theta/3 hack.
and the properties: Fraction in transverse RMS is slightly too low though. But the effective efficiencies look quite good:
So next points to do:
[ ]
Maybe the number of neighbors should include diagonal elements after all? If UV photons are the cause, the distance can be larger. Currently our code yields > 110 neighbors for the fake data but O(90) for the real data in Run-3![ ]
investigate more into the number of hits we see. Can we fix it?[ ]
Look into how properties look now for CDL data. Maybe our slope etc of the neighbor logic needs to be adjusted for lower gains?[ ]
train a new network that uses the correct neighboring charges as training data
1.30.
First we changed the amount of charge for neighbors from div 2 to div 3:
let chargeNeighbor = rnd.sample(psampler) / 3.0 # reduce amount
This already improved the numbers a bit, also lowering the mean number of neighbors to 100.
Then we lowered the gas gain scaling from 1.0 to 0.9, but not for the target charge:
let gainToUse = gain.G * 0.9
let calibFactor = linearFunc(@[calibInfo.bL, calibInfo.mL], gain.G) * 1e-6
i.e. leaving the calibration factor unchanged. Why? I have no idea, but it gets the job done.
Running the effective efficiencies of all runs now: -> This makes the efficiencies worse! But that could be an effect of having a network trained on the wrong data.
So now let's try to train a new network using our now better fake data and trying again. This now uses all our new changes and:
- gain 0.9 for everything but target charge
- possibly up to 4 neighbors
- neighbors receive 1/3 of sampled charge
- linear chance to activate neighbors
Also: we increase the range of theta parameters in the generation of fake events from 2.1 .. 2.4 to 0.8 .. 2.4.
./train_ingrid \
  ~/CastData/data/CalibrationRuns2017_Reco.h5 \
  ~/CastData/data/CalibrationRuns2018_Reco.h5 \
  --back ~/CastData/data/DataRuns2017_Reco.h5 \
  --back ~/CastData/data/DataRuns2018_Reco.h5 \
  --modelOutpath ~/org/resources/nn_devel_mixing/16_04_23_sgd_gauss_diffusion/trained_mlp_sgd_gauss_diffusion.pt \
  --plotPath ~/Sync/16_04_23_sgd_sim_diffusion_gain/ \
  --numHidden 2500 \
  --datasets eccentricity \
  --datasets skewnessLongitudinal \
  --datasets skewnessTransverse \
  --datasets kurtosisLongitudinal \
  --datasets kurtosisTransverse \
  --datasets length \
  --datasets width \
  --datasets rmsLongitudinal \
  --datasets rmsTransverse \
  --datasets lengthDivRmsTrans \
  --datasets rotationAngle \
  --datasets fractionInTransverseRms \
  --datasets σT \
  --learningRate 3e-4 \
  --simulatedData
(Note: we had a short run by accident with only 10000 photons and theta 2.1 to 2.4 that is stored in a directory with a "10000photons" part in the name.)
There's some weirdness going on with the training. The outputs look fine and the accuracy is good, but the loss on the training data is O(>100,000). Uhh.
In the meantime: looking at run 340 at energy 0.525 keV, which has an effective efficiency of ~90%. The biggest difference is in the length dataset! Our fake events are too long at the moment. Is the diffusion still bad for these datasets? YES. Hmmm.
[ ]
Why energy so wide in CDL data?[ ]
why length too long?
So, to understand this:
[ ]
check if the CDL data we use in the plotting of all properties is representative. Shouldn't it have cuts on the CDL charges?[ ]
check the rmsTransverse fit plots for this run. Is the fit good? Does it still have too much rubbish in there? That could explain seeing too much data in the energy.
This shows the energy seen in the CuEPIC0.9kV dataset (0.525 keV data) split by run, based on CDL energy and charge energy. Run 340 has "all" energies and is not correctly filtered?? What the heck. The code should be filtering both though. Is our calibration of the energy broken? I don't understand. I think it's the following:
if not fileIsCdl:
  ## We don't need to write the calibrated dataset to the `calibration-cdl` file, because
  ## the data is copied over from the `CDL_Reco` file!
  let df = df.filter(f{idx("Cut?") == "Raw"})
    .clone() ## Clone to make sure we don't access full data underlying the DF!
  let dset = h5f.create_dataset(grpName / igEnergyFromCdlFit.toDset(),
                                df.len, dtype = float,
                                overwrite = true, filter = filter)
  dset.unsafeWrite(df["Energy", float].toUnsafeView(), df.len)
in cdl_spectrum_creation. Combining the toUnsafeView with the filter call is unsafe. Hence I added the clone line there now!
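The failure mode can be sketched with a NumPy analogue (illustrative only; the actual code is Nim/Datamancer): reading raw memory behind a filtered column still exposes the underlying full data, unless an explicit copy (the clone above) is made first.

```python
import numpy as np

full = np.arange(10.0)       # the "full" column data underlying the DataFrame
filtered = full[1::2]        # a filter result that still aliases `full`

# Reading `filtered.size` elements from the *underlying* buffer (the analogue
# of toUnsafeView on an un-cloned column) yields the wrong, unfiltered values:
raw = np.frombuffer(filtered.base, dtype=np.float64, count=filtered.size)
assert not np.array_equal(raw, filtered)   # buffer content != logical view

# After an explicit copy (the analogue of .clone()) buffer and view agree:
cloned = filtered.copy()
raw_ok = np.frombuffer(cloned, dtype=np.float64, count=cloned.size)
assert np.array_equal(raw_ok, cloned)
```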
See all plots with the bug present here:
UPDATE: Upon further inspection after fixing the code in other ways
again, it rather seems to be a different issue.
The updated plots are here:
(Note that the missing energy in Mn-Cr and Cu-Ni-15 in the former plots is due to a cut on < 3 keV data in the plotting code.)
With one exception (run 347) all runs that show behavior of very
varying energies in the charge energy dataset are those from runs
without the FADC! Why is that?
UPDATE: The culprit was the groups call over run numbers (the grouping lined up correctly for exactly one run, I think). This is now fixed and the histograms of the energy look correct now.
(And yes, some of the runs really have that little statistics after
the cuts! Check out:
for an overview of the fits etc)
Let's look at the plot of the rmsTransverse
from run 340 again:
It has changed a bit. Can't say it's very obvious, but the shape is
clearly less gaussian than before and the width to the right is a bit
larger?
What do the properties look like comparing run 340 now?
IMPORTANT The size difference has actually become worse now!
However, the effective efficiency still improves despite that. We
probably should look at the determination of the diffusion again now
that we actually look at the correct data for the CDL! Maybe the gauss
fit is now actually viable.
Now that we actually look at the correct data for all the CDL runs, let's look at the effective efficiencies again using the gauss diffusion network without gain: aside from the 270 eV datasets, all effective efficiencies have come much closer to the target efficiency! This is pretty great. See below for another look at the datasets, comparing properties for the worst. Let's also look at this using the new network that was trained on the new fake data!
./effective_eff_55fe \
    ~/CastData/data/CalibrationRuns2018_Reco.h5 \
    --model ~/org/resources/nn_devel_mixing/16_04_23_sgd_gauss_diffusion/trained_mlp_sgd_gauss_diffusion.pt \
    --ε 0.8 \
    --cdlFile ~/CastData/data/CDL_2019/CDL_2019_Reco.h5 \
    --evaluateFit \
    --plotDatasets \
    --plotPath ~/Sync/16_04_23_sgd_sim_diffusion_gain/effective_eff
It looks surprisingly similar to the other network. It seems like regardless they learned similar things in spite of the other network having seen quite different data.
- [X] Look at rmsTransverse fit of run from earlier today again!
- [X] compare number of counts in the histograms before and after fix!
We'll continue the training of the model we created above
- [ ] LOOK at diffusion determination again after the energy bug is fixed! Maybe the gauss fit method is now viable. In particular think about run 340 and how its length in the generated data is still too large! Also try to understand why runs for 250 eV are still so bad. ^– These are the properties of the worst run in terms of effective efficiency. By far the biggest issues are related to the size of the generated events and their eccentricity. We apparently generate way too many neighbors in this case? Investigate maybe.
- [ ] STILL have to adjust the gas gain and diffusion ranges for the fake data in training to be more sensible towards the real numbers!
1.31.
Let's start by fixing the code to continue the training process.
We've added the accuracy and loss fields to the MLPDesc, updated
serialize.nim
to deal with missing data better (just left empty) and will now rerun the training of the network from yesterday. We'll remove the full network and train from 0 to 100,000 epochs, then start it again to test (but this time with the loss and accuracy values contained in the MLPDesc H5 file).
So again (see MLP section below). While the MLP is running, back to the diffusion of the bad runs, 340 and also 342. For 342 the sizes and diffusion are actually quite acceptable, but the eccentricity is pretty wrong, likely due to too many neighbors? Maybe we need a lower threshold after all? What is the gas gain of 342 compared to other CDL runs? 2350 or so, compared to 2390 for 340. So comparable.
First 340:
using gaussian fit with scale 1.65 again instead of limiting to 10% of
height of peak.
Better length & diffusion, but still too large. Let's check run 241
though before we lower further:
-> This is already too small! So the fit itself is not going to be a
good solution.
Comparing the RMS plots:
Given that the drop off on the RHS is MUCH stronger in the 55Fe run, let's try a fixed value as peak + offset. The run 241 data indicates an offset of about 0.15 to the peak position (0.95 + 0.15 = 1.1).
Run 241:
And 340:
This matches well for run 241, but in case of 340 it is way too short a cutoff.
What defines the hardness of the cutoff? The number of pixels? Not quite: run 342 is narrower than 340. But 342 used the FADC and 340 did not! A fixed cutoff is also a fail for 342 though! Could we determine it by simulation? Do a simple optimization that uses the fit (or similar) as a starting point, simulates the rmsTransverse of events using diffusion and energy, and stops once the difference is small enough? Then maybe use Kolmogorov-Smirnov to determine whether we have agreement? Seems like maybe the most sane approach?
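The idea can be sketched in a few lines (toy Python model, all distributions and parameters assumed; the real implementation later settles on Cramér-von Mises and gradient descent instead of a scan): simulate the rmsTransverse distribution for a trial diffusion value and pick the value whose simulation agrees best with the "data" under a two-sample KS test.

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(42)

def simulate_rms(sigma_t, n=20_000):
    """Toy rmsTransverse sample for diffusion sigma_t [mm/sqrt(cm)]:
    events at uniform drift distances with Gaussian smearing (assumed model)."""
    drift = rng.uniform(0.3, 3.0, n)              # drift distance in cm
    return sigma_t * np.sqrt(drift) * rng.normal(1.0, 0.1, n)

# Pretend "real" data generated with a known diffusion of 0.066 mm/sqrt(cm)
data = simulate_rms(0.066)

# Scan candidates, minimize the KS statistic between data and simulation
candidates = np.linspace(0.050, 0.090, 41)
ks = [ks_2samp(data, simulate_rms(c)).statistic for c in candidates]
best = candidates[int(np.argmin(ks))]
print(best)   # recovers a value close to the input 0.066
```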
- [ ] Should we also store the training data of the net in MLPDesc?
1.31.1. MLP training
./train_ingrid \
    ~/CastData/data/CalibrationRuns2017_Reco.h5 \
    ~/CastData/data/CalibrationRuns2018_Reco.h5 \
    --back ~/CastData/data/DataRuns2017_Reco.h5 \
    --back ~/CastData/data/DataRuns2018_Reco.h5 \
    --modelOutpath ~/org/resources/nn_devel_mixing/17_04_23_sgd_gauss_diffusion/trained_mlp_sgd_gauss_diffusion.pt \
    --plotPath ~/Sync/17_04_23_sgd_sim_diffusion_gain/ \
    --numHidden 2500 \
    --datasets eccentricity \
    --datasets skewnessLongitudinal \
    --datasets skewnessTransverse \
    --datasets kurtosisLongitudinal \
    --datasets kurtosisTransverse \
    --datasets length \
    --datasets width \
    --datasets rmsLongitudinal \
    --datasets rmsTransverse \
    --datasets lengthDivRmsTrans \
    --datasets rotationAngle \
    --datasets fractionInTransverseRms \
    --datasets σT \
    --learningRate 3e-4 \
    --simulatedData
Now let's try to continue the training:
./train_ingrid \
    ~/CastData/data/CalibrationRuns2017_Reco.h5 \
    ~/CastData/data/CalibrationRuns2018_Reco.h5 \
    --back ~/CastData/data/DataRuns2017_Reco.h5 \
    --back ~/CastData/data/DataRuns2018_Reco.h5 \
    --modelOutpath ~/org/resources/nn_devel_mixing/17_04_23_sgd_gauss_diffusion/trained_mlp_sgd_gauss_diffusion.pt \
    --plotPath ~/Sync/17_04_23_sgd_sim_diffusion_gain/ \
    --learningRate 3e-4 \
    --simulatedData \
    --continueAfterEpoch 100000
Given that the loss is still monotonically decreasing, I'll start another 100,000 epochs. And another 100,000 now!
1.32.
We've implemented the ideas from yesterday to determine the diffusion based on simulating events itself using some optimization strategy (currently Newton combined with Cramér-von Mises).
The annoying part is we need to look at not only one axis, but actually both and then determine the long and short axes. Otherwise our estimate of the rms transverse ends up too large, because we look at the mean RMS instead of transverse RMS.
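The point about both axes can be illustrated with a quick sketch (illustrative Python, not the actual Nim code): for an elongated cluster, the RMS averaged over fixed axes overestimates the transverse spread, while the eigenvalues of the covariance matrix separate the long and short axes.

```python
import numpy as np

rng = np.random.default_rng(0)

# Elongated toy cluster: 0.3 mm along the long axis, 0.1 mm transverse, rotated 30°
pts = rng.normal(0.0, [0.3, 0.1], size=(10_000, 2))
t = np.deg2rad(30)
pts = pts @ np.array([[np.cos(t), -np.sin(t)], [np.sin(t), np.cos(t)]]).T

# Mean RMS over the fixed x/y axes mixes long and short axis ...
mean_rms = np.sqrt(pts.var(axis=0).mean())

# ... whereas the covariance eigenvalues give transverse and longitudinal RMS
evals = np.linalg.eigvalsh(np.cov(pts.T))   # ascending order
rms_trans, rms_long = np.sqrt(evals)

assert rms_trans < mean_rms < rms_long      # mean RMS overestimates the transverse RMS
assert abs(rms_trans - 0.1) < 0.01          # recovers the true transverse sigma
```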
Having implemented it (with probably improvable parameters) and running it yields: which is actually quite good.
Let's look at run 333 (1.49 keV), one of the lowest ones in the efficiency. Was our value for the diffusion bad in that one? Hmm, not really. It looks quite reasonable. But, as expected: this run did not use the FADC! Probably NN filters out double photon events. Let's check if all "bad" ones are no FADC runs (y = FADC, n = no FADC): 0.93 keV runs:
- 335 (y), 336 (n), 337 (n) -> 335 is the best one, 336 and 337 are indeed the ones with low efficiency!
0.525 keV runs:
- 339 (y), 340 (n) -> 339 is the good one, 340 is the bad one!
So yes, this really seems to be the cause!
Let's check our new network from yesterday on the same:
./effective_eff_55fe \
    ~/CastData/data/CalibrationRuns2018_Reco.h5 \
    --model ~/org/resources/nn_devel_mixing/17_04_23_sgd_gauss_diffusion/trained_mlp_sgd_gauss_diffusioncheckpoint_epoch_400000_loss_0.0115_acc_0.9975.pt \
    --ε 0.8 \
    --cdlFile ~/CastData/data/CDL_2019/CDL_2019_Reco.h5 \
    --evaluateFit \
    --plotDatasets \
    --plotPath ~/Sync/17_04_23_sgd_sim_diffusion_gain/effective_eff/
As one could have hoped, the results look a bit better still.
Let's also look at the Run-2 data:
./effective_eff_55fe \
    ~/CastData/data/CalibrationRuns2017_Reco.h5 \
    --model ~/org/resources/nn_devel_mixing/17_04_23_sgd_gauss_diffusion/trained_mlp_sgd_gauss_diffusioncheckpoint_epoch_400000_loss_0.0115_acc_0.9975.pt \
    --ε 0.8 \
    --cdlFile ~/CastData/data/CDL_2019/CDL_2019_Reco.h5 \
    --evaluateFit \
    --plotDatasets \
    --plotPath ~/Sync/17_04_23_sgd_sim_diffusion_gain/effective_eff/
shows a plot of both run periods together.
We switched from Newton's method to gradient descent. It is much more stable in the approach to the minimum. The outliers to very low values are again all CDL runs without the FADC! The 5.9 keV CAST data does indeed still have values quite a bit below target. But the spread is quite nice. The CDL data generally comes closer to the target, but overall has a larger spread (though not within a target! with the exception of the no-FADC runs).
From here: look at some more distributions, but generally this is good to go in my opinion. Need to implement logic into application of the MLP method now that takes care of not only calculating the correct cut for each run & energy, but also writes the actual efficiency that the real data saw for the target to the output files, as well as the variance of all runs maybe.
Idea: Can we somehow find out which events are thrown out by the network that should be kept according to the simulated data cut value? More or less the 'difference' between the 80% cut based on the real data and the 80% cut based on the simulated data. The events in that region are thrown out. Look at some and generate distributions of those events to understand what they are? Maybe they are indeed "rubbish"? -> Events in the two 80% cuts of real and simulated. UPDATE:
Hmm, at a first glance there does not seem to be much to see there. To a "human eye" they look like X-rays, I would say. It's likely the correlation between different variables that makes them more background-like than X-ray-like. I implemented a short function that filters the data between the cut region of real and simulated data at the target efficiency:

./effective_eff_55fe \
    ~/CastData/data/CalibrationRuns2018_Reco.h5 \
    --model ~/org/resources/nn_devel_mixing/17_04_23_sgd_gauss_diffusion/trained_mlp_sgd_gauss_diffusioncheckpoint_epoch_400000_loss_0.0115_acc_0.9975.pt \
    --ε 0.8 \
    --cdlFile ~/CastData/data/CDL_2019/CDL_2019_Reco.h5 \
    --evaluateFit \
    --plotDatasets \
    --plotPath ~/Sync/analyze_5.9/ \
    --run 241
which gives the following property plots:
Next: Check how many elements we really need to generate for our Newton optimization. And can we:
- make the difference smaller again? -> Let's try with a diffusion difference of 1 in the numerical derivative (at 10k samples)
- use Dual after all? -> No. Tried again, but Cramér-von Mises also destroys all derivative information.
- only use about 1000 elements instead of 10000? -> at 1000 the spread becomes quite a bit bigger than at 10k!
- [ ] Put 3.0 keV escape photon data for CAST back into the plot!
1.33.
So, from yesterday to summarize: Using gradient descent makes the determination of the diffusion from real data fast enough and stable.
First continue a short look at the 'intermediate' data between the cut values of simulated and real data. We will plot the intermediate against the data that passes both. Running the same command as yesterday for run 241 yields: In direct comparison we can see that the events that fail to pass the simulated cut are generally a bit more eccentric and longer / wider / slightly bigger RMS. Potentially events including a captured escape photon? Let's extract the event numbers of those intermediate events:
Event numbers that are intermediate (107 in total): 104 109 123 410 537 553 558 1042 1272 1346 1390 1447 1527 1583 1585 1594 1610 1720 1922 1965 2082 2155 2176 2198 2419 2512 2732 2800 2801 3038 3072 3095 3296 3310 3473 3621 3723 3820 4088 4145 4184 4220 4250 4308 4347 4353 4466 4558 4590 4725 4843 4988 5204 5234 5288 5497 5637 5648 5661 5792 5814 5848 5857 6090 6175 6187 6312 6328 6359 6622 6657 6698 6843 6944 6951 7121 7137 7162 7192 7350 7436 7472 7545 7633 7634 7730 7749 7788 7936 8003 8014 8075 8214 8364 8441 8538 8549 8618 8827 8969 8991 9008 9026 9082 9102 9260 9292
Now we'll use plotData to generate event displays for them:
plotData \
    --h5file ~/CastData/data/CalibrationRuns2018_Reco.h5 \
    --runType rtCalibration \
    --config ~/CastData/ExternCode/TimepixAnalysis/Plotting/karaPlot/config.toml \
    --runs 241 \
    --eventDisplay \
    --septemboard \
    --events 104 --events 109 --events 123 --events 410 --events 537 --events 553 --events 558 \
    --events 1042 --events 1272 --events 1346 --events 1390 --events 1447 --events 1527 --events 1583 \
    --events 1585 --events 1594 --events 1610 --events 1720 --events 1922 --events 1965 --events 2082 \
    --events 2155 --events 2176 --events 2198 --events 2419 --events 2512 --events 2732 --events 2800 \
    --events 2801 --events 3038 --events 3072 --events 3095 --events 3296 --events 3310 --events 3473 \
    --events 3621 --events 3723 --events 3820 --events 4088 --events 4145 --events 4184 --events 4220 \
    --events 4250 --events 4308 --events 4347 --events 4353 --events 4466 --events 4558 --events 4590 \
    --events 4725 --events 4843 --events 4988 --events 5204 --events 5234 --events 5288 --events 5497 \
    --events 5637 --events 5648 --events 5661 --events 5792 --events 5814 --events 5848 --events 5857 \
    --events 6090 --events 6175 --events 6187 --events 6312 --events 6328 --events 6359 --events 6622 \
    --events 6657 --events 6698 --events 6843 --events 6944 --events 6951 --events 7121 --events 7137 \
    --events 7162 --events 7192 --events 7350 --events 7436 --events 7472 --events 7545 --events 7633 \
    --events 7634 --events 7730 --events 7749 --events 7788 --events 7936 --events 8003 --events 8014 \
    --events 8075 --events 8214 --events 8364 --events 8441 --events 8538 --events 8549 --events 8618 \
    --events 8827 --events 8969 --events 8991 --events 9008 --events 9026 --events 9082 --events 9102 \
    --events 9260 --events 9292
It really seems like the main reason for them being a bit more eccentric is the presence of a potentially higher number of single-pixel outliers? I suppose it makes sense that these would be rejected with a larger likelihood. However, why would such events appear more often in the 5.9 keV CAST data than in simulated data?
For now I'm not sure what to make of this. Of course one could attempt to do something like use a clustering algorithm that ignores outliers etc. But all that doesn't seem that interesting, at least not if we don't understand why we have this distinction from simulated to real data in the first place.
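How strongly a single stray pixel pulls on the eccentricity can be seen in a toy calculation (illustrative Python; eccentricity here taken as the square root of the ratio of the covariance eigenvalues, which is an assumption about the exact definition):

```python
import numpy as np

rng = np.random.default_rng(1)

def eccentricity(pix):
    # ratio of long to short cluster axis from the second moments
    evals = np.linalg.eigvalsh(np.cov(pix.T))   # ascending order
    return np.sqrt(evals[1] / evals[0])

cluster = rng.normal(0.0, 1.0, size=(200, 2))   # round, X-ray-like cluster
noisy = np.vstack([cluster, [[15.0, 0.0]]])     # plus one far-away noise pixel

ecc_clean, ecc_noisy = eccentricity(cluster), eccentricity(noisy)
assert ecc_noisy > ecc_clean   # a single outlier pixel already inflates the eccentricity
```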
So for now let's just continue.
First: add 3.0 keV data with 5.9 keV absorption length into effective efficiency plot and see how the efficiency fares:
Run-2 @ 99%:
likelihood \
    -f ~/CastData/data/DataRuns2017_Reco.h5 \
    --h5out ~/Sync/run2_17_04_23_mlp_local_0.99.h5 \
    --region crGold \
    --cdlYear 2018 \
    --cdlFile ~/CastData/data/CDL_2019/calibration-cdl-2018.h5 \
    --mlp ~/org/resources/nn_devel_mixing/17_04_23_sgd_gauss_diffusion/trained_mlp_sgd_gauss_diffusioncheckpoint_epoch_400000_loss_0.0115_acc_0.9975.pt \
    --nnSignalEff 0.99 \
    --nnCutKind runLocal \
    --calibFile ~/CastData/data/CalibrationRuns2017_Reco.h5 \
    --readonly
Run-3 @ 99%:
likelihood \
    -f ~/CastData/data/DataRuns2018_Reco.h5 \
    --h5out ~/Sync/run3_17_04_23_mlp_local_0.99.h5 \
    --region crGold \
    --cdlYear 2018 \
    --cdlFile ~/CastData/data/CDL_2019/calibration-cdl-2018.h5 \
    --mlp ~/org/resources/nn_devel_mixing/17_04_23_sgd_gauss_diffusion/trained_mlp_sgd_gauss_diffusioncheckpoint_epoch_400000_loss_0.0115_acc_0.9975.pt \
    --nnSignalEff 0.99 \
    --nnCutKind runLocal \
    --calibFile ~/CastData/data/CalibrationRuns2018_Reco.h5 \
    --readonly
Need to run effective efficiency at 99% as well to know the real efficiencies.
NOTE: The gradient descent code unfortunately is still rather slow. Should we aim to cache the results for each run? Generally I'd prefer no caching and just making it faster though. Maybe we can pick an example that is pretty hard to converge and use that as a reference to develop? Ah, and we can cache stuff within a single program run, so that we only need to compute the stuff once for each run?
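The within-run caching idea is simple memoization (a minimal Python sketch with a hypothetical stand-in for the expensive fit; the real code is Nim): compute the diffusion once per run within a program invocation, then reuse.

```python
from functools import lru_cache

def _fit_diffusion(run_number: int) -> float:
    # hypothetical stand-in for the expensive per-run gradient descent fit
    print(f"fitting run {run_number} ...")   # visible only on the first call
    return 560.0 + run_number % 7            # dummy value in um/sqrt(cm)

@lru_cache(maxsize=None)
def get_diffusion(run_number: int) -> float:
    """Compute once per run within a program run, then serve from the cache."""
    return _fit_diffusion(run_number)

a = get_diffusion(86)
b = get_diffusion(86)                        # second call hits the cache
assert a == b
assert get_diffusion.cache_info().hits == 1
```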
1.34.
- we now cache the results of the diffusion calculation, but still have to verify it works as intended
- for the background datasets the diffusion coefficient is now also
~correctly determined by sampling both the energy and position of
the data, i.e.
- uniform in drift distance (muons enter anywhere between cathode and anode)
- an exponential energy distribution whose flux at 10 keV is ~20-30% of the flux at ~0 keV
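The sampling above can be sketched directly (illustrative Python; drift gap and the 25% density ratio are assumed numbers within the stated 20-30% range). Requiring f(10 keV)/f(0) = 0.25 for an exponential fixes the scale to 10/ln(4) ≈ 7.2 keV.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 100_000

# Uniform drift distance: background converts anywhere between cathode and anode
drift_cm = rng.uniform(0.0, 3.0, n)        # assumed 3 cm drift gap

# Exponential energy spectrum with f(10 keV) / f(0) = 25%  =>  scale = 10 / ln(4)
scale = 10.0 / np.log(4.0)                 # ~7.21 keV
energy = rng.exponential(scale, n)

ratio = np.exp(-10.0 / scale)              # density ratio f(10)/f(0)
assert 0.2 < ratio < 0.3                   # within the ~20-30% window above
```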
- [ ] ADD PLOT
- verify the diffusion coefficients are correctly determined by calculating all of them for background and calibration and creating a plot showing the determined numbers -> maybe a good option to show the plots of the RMS data used and the generated data for each?
- [ ] implement calculation of variance and mean value for effective efficiencies for each run period (write to H5 output in likelihood)
- [X] TAKE NOTES of the diffusion vs run plot
- [ ] ADD EFFECTIVE EFFICIENCY PLOT WITH 3 keV escape data
1.35.
Yesterday we turned determineDiffusion
into a usable standalone
tool to compute the diffusion parameters of all runs. It generates a
plot showing what the parameters are, colored by the 'loss' of the
best estimate (via Cramér-von Mises).
./determineDiffusion \
    ~/CastData/data/DataRuns2017_Reco.h5 \
    ~/CastData/data/CalibrationRuns2017_Reco.h5 \
    ~/CastData/data/DataRuns2018_Reco.h5 \
    ~/CastData/data/CalibrationRuns2018_Reco.h5
yields the following plot of the diffusion values deduced using gradient descent:
We can clearly see two important aspects:
- the background diffusion parameters are typically a good 20-30 μm/√cm larger than the 5.9 keV 55Fe points
- there are some cases where:
- the loss is very large (background, runs close to run 80)
- the diffusion is exactly 660, our starting value
The latter implies that the loss is already smaller than 2 (our current stopping criterion). The former implies the distributions look very different from the simulated distribution. I've already seen one reason: in some runs there is an excessive contribution at rmsTransverse < 0.2 or so, which has a huge peak. Some kind of noise signal? We'll check that.
For now here are all the plots of the rmsTransverse dataset for sim and real data that were generated from the best estimates:
The worst run is run 86 (see page 158 in the PDF). Let's investigate what kind of events contribute such data to the transverse RMS. My assumption would be noisy events at the top of the chip or something like that? First the ingrid distributions:
plotData \
    --h5file ~/CastData/data/DataRuns2017_Reco.h5 \
    --runType rtBackground \
    --config ~/CastData/ExternCode/TimepixAnalysis/Plotting/karaPlot/config.toml \
    --runs 86 \
    --ingrid \
    --cuts '("rmsTransverse", 0.0, 0.2)' \
    --applyAllCuts \
    --chips 3
So clearly defined in centerX and centerY. The sparky point on the grid?
Yeah, that seems to be it. They are all pretty much below 20 hits anyway. So we'll just introduce an additional cut for non-CDL data that clusters must be larger than 20 hits. They match the pixels we consider noisy for the background cluster plot and in the limit!
Added such a filter (hits > 20) and running again. The diffusion values are now these. The largest is on the order of 3 and they look more reasonable. Of course the distinction between background and calibration is still present.
Next we've removed the limit of ks > 2.0 as a stopping criterion and added logic to reset to the best estimate once half the allowed number of bad steps has been taken.
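That stopping/reset logic can be sketched as follows (toy Python; the loss function, learning rate, and step limits are all hypothetical stand-ins for the real fit of simulated against measured rmsTransverse distributions):

```python
import random

random.seed(3)

TRUE = 600.0   # hypothetical "true" diffusion value the loss is centered on

def noisy_loss(x):
    # stand-in for the noisy Cramér-von Mises loss between data and simulation
    return (x - TRUE) ** 2 / 1000.0 + random.uniform(0.0, 0.05)

def determine(x=660.0, lr=20.0, steps=100, max_bad=10):
    best_x, best_loss, bad = x, noisy_loss(x), 0
    for _ in range(steps):
        # numerical derivative with a difference of 1 (as settled on above)
        grad = (noisy_loss(x + 1.0) - noisy_loss(x - 1.0)) / 2.0
        x -= lr * grad
        loss = noisy_loss(x)
        if loss < best_loss:
            best_x, best_loss, bad = x, loss, 0
        else:
            bad += 1
            if bad >= max_bad // 2:   # reset to the best estimate once half the
                x, bad = best_x, 0    # allowed number of bad steps is reached
    return best_x

result = determine()
assert abs(result - TRUE) < 20.0   # converges near the minimum despite the noise
```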
Let's rerun again: Now it actually looks pretty nice!
The difference between 5.9 and background is now pretty much
exactly 40. So we'll use that in getDiffusion
as a constant for now.
Finally, let's try to run likelihood
on one dataset (Run-2) at a
fixed efficiency to see what is what (it should read all diffusion values
from the cache).
The generated files are:
/Sync/run2_17_04_23_mlp_local_0.80.h5
/Sync/run2_17_04_23_mlp_local_0.90.h5
/Sync/run3_17_04_23_mlp_local_0.80.h5
/Sync/run3_17_04_23_mlp_local_0.90.h5
Time to plot a background rate from both, we compare it to the LnL only method at 80%:
plotBackgroundRate \
    ~/Sync/run2_17_04_23_mlp_local_0.90.h5 \
    ~/Sync/run3_17_04_23_mlp_local_0.90.h5 \
    ~/org/resources/lhood_limits_automation_correct_duration/likelihood_cdl2018_Run2_crGold.h5 \
    ~/org/resources/lhood_limits_automation_correct_duration/likelihood_cdl2018_Run3_crGold.h5 \
    --names "MLP@90" \
    --names "MLP@90" \
    --names "LnL@80" \
    --names "LnL@80" \
    --centerChip 3 \
    --title "Background rate from CAST data, LnL@80%, MLP@90%" \
    --showNumClusters \
    --showTotalTime \
    --topMargin 1.5 \
    --energyDset energyFromCharge \
    --outfile background_rate_only_lnL_0.8_mlp_0.9.pdf \
    --outpath ~/Sync/ \
    --quiet
Integrated background rates reported by the tool:

| Range (keV) | LnL@80 [cm⁻²·s⁻¹] | MLP@90 [cm⁻²·s⁻¹] | LnL@80 [keV⁻¹·cm⁻²·s⁻¹] | MLP@90 [keV⁻¹·cm⁻²·s⁻¹] |
|-------------+-------------------+-------------------+-------------------------+-------------------------|
| 0.0 – 12.0  | 2.3288e-04        | 2.7298e-04        | 1.9406e-05              | 2.2748e-05              |
| 0.5 – 2.5   | 6.2088e-05        | 3.4122e-05        | 3.1044e-05              | 1.7061e-05              |
| 0.5 – 5.0   | 1.1626e-04        | 9.6914e-05        | 2.5836e-05              | 2.1537e-05              |
| 0.0 – 2.5   | 8.9703e-05        | 1.0413e-04        | 3.5881e-05              | 4.1650e-05              |
| 4.0 – 8.0   | 2.6383e-05        | 3.3067e-05        | 6.5958e-06              | 8.2667e-06              |
| 0.0 – 8.0   | 1.6551e-04        | 1.9383e-04        | 2.0689e-05              | 2.4229e-05              |
| 2.0 – 8.0   | 8.1788e-05        | 9.3572e-05        | 1.3631e-05              | 1.5595e-05              |

(Followed by a truncated print of the binned rate DataFrame, 116 rows.)
[INFO]: storing plot in /home/basti/Sync/background_rate_only_lnL_0.8_mlp_0.9.pdf
As we can see the rates are quite comparable. What is nice to see is that the Argon peak is actually more visible in the MLP data than in the LnL cut method. Despite higher efficiency the network performs essentially the same though. That's very nice! The question is what is the effective efficiency based on 5.9 keV data using 90%?
./effective_eff_55fe \
    ~/CastData/data/CalibrationRuns2017_Reco.h5 \
    ~/CastData/data/CalibrationRuns2018_Reco.h5 \
    --model ~/org/resources/nn_devel_mixing/17_04_23_sgd_gauss_diffusion/trained_mlp_sgd_gauss_diffusioncheckpoint_epoch_400000_loss_0.0115_acc_0.9975.pt \
    --ε 0.9 \
    --cdlFile ~/CastData/data/CDL_2019/CDL_2019_Reco.h5 \
    --evaluateFit \
    --plotDatasets \
    --plotPath ~/Sync/run2_run3_gradient_descent_eff_0.9/
So we're actually looking at about 85% realistically.
And background rate with MLP@85%:
plotBackgroundRate \
    ~/Sync/run2_17_04_23_mlp_local_0.85.h5 \
    ~/Sync/run3_17_04_23_mlp_local_0.85.h5 \
    ~/org/resources/lhood_limits_automation_correct_duration/likelihood_cdl2018_Run2_crGold.h5 \
    ~/org/resources/lhood_limits_automation_correct_duration/likelihood_cdl2018_Run3_crGold.h5 \
    --names "MLP@85" \
    --names "MLP@85" \
    --names "LnL@80" \
    --names "LnL@80" \
    --centerChip 3 \
    --title "Background rate from CAST data, LnL@80%, MLP@85%" \
    --showNumClusters \
    --showTotalTime \
    --topMargin 1.5 \
    --energyDset energyFromCharge \
    --outfile background_rate_only_lnL_0.8_mlp_0.85.pdf \
    --outpath ~/Sync/ \
    --quiet
Integrated background rates reported by the tool:

| Range (keV) | LnL@80 [cm⁻²·s⁻¹] | MLP@85 [cm⁻²·s⁻¹] | LnL@80 [keV⁻¹·cm⁻²·s⁻¹] | MLP@85 [keV⁻¹·cm⁻²·s⁻¹] |
|-------------+-------------------+-------------------+-------------------------+-------------------------|
| 0.0 – 12.0  | 2.3288e-04        | 2.4501e-04        | 1.9406e-05              | 2.0418e-05              |
| 0.5 – 2.5   | 6.2088e-05        | 2.6383e-05        | 3.1044e-05              | 1.3192e-05              |
| 0.5 – 5.0   | 1.1626e-04        | 8.4250e-05        | 2.5836e-05              | 1.8722e-05              |
| 0.0 – 2.5   | 8.9703e-05        | 8.9879e-05        | 3.5881e-05              | 3.5952e-05              |
| 4.0 – 8.0   | 2.6383e-05        | 2.9022e-05        | 6.5958e-06              | 7.2554e-06              |
| 0.0 – 8.0   | 1.6551e-04        | 1.7114e-04        | 2.0689e-05              | 2.1392e-05              |
| 2.0 – 8.0   | 8.1788e-05        | 8.5130e-05        | 1.3631e-05              | 1.4188e-05              |

(Followed by a truncated print of the binned rate DataFrame, 116 rows.)
[INFO]: storing plot in /home/basti/Sync/background_rate_only_lnL_0.8_mlp_0.85.pdf
[WARNING]: Printing total background time currently only supported for single datasets.
So at this point the effective efficiency is pretty much 80%. This means the network is significantly better at lower energies (aside from first bin), but essentially the same everywhere else. Pretty interesting. Question is how it fares at higher signal efficiencies!
1.35.1. Running likelihood on SGD trained network after 500k epochs
Using 95% signal efficiency: Run-2:
likelihood \
    -f ~/CastData/data/DataRuns2017_Reco.h5 \
    --h5out ~/Sync/run2_17_04_23_mlp_local_0.95_500k.h5 \
    --region crGold \
    --cdlYear 2018 \
    --cdlFile ~/CastData/data/CDL_2019/calibration-cdl-2018.h5 \
    --mlp ~/org/resources/nn_devel_mixing/17_04_23_sgd_gauss_diffusion/trained_mlp_sgd_gauss_diffusioncheckpoint_epoch_500000_loss_0.0106_acc_0.9977.pt \
    --nnSignalEff 0.95 \
    --nnCutKind runLocal \
    --calibFile ~/CastData/data/CalibrationRuns2017_Reco.h5 \
    --readonly
Run-3:

likelihood \
    -f ~/CastData/data/DataRuns2018_Reco.h5 \
    --h5out ~/Sync/run3_17_04_23_mlp_local_0.95_500k.h5 \
    --region crGold \
    --cdlYear 2018 \
    --cdlFile ~/CastData/data/CDL_2019/calibration-cdl-2018.h5 \
    --mlp ~/org/resources/nn_devel_mixing/17_04_23_sgd_gauss_diffusion/trained_mlp_sgd_gauss_diffusioncheckpoint_epoch_500000_loss_0.0106_acc_0.9977.pt \
    --nnSignalEff 0.95 \
    --nnCutKind runLocal \
    --calibFile ~/CastData/data/CalibrationRuns2018_Reco.h5 \
    --readonly
Yields the background:
plotBackgroundRate \
    ~/Sync/run2_17_04_23_mlp_local_0.95_500k.h5 \
    ~/Sync/run3_17_04_23_mlp_local_0.95_500k.h5 \
    ~/org/resources/lhood_limits_automation_correct_duration/likelihood_cdl2018_Run2_crGold.h5 \
    ~/org/resources/lhood_limits_automation_correct_duration/likelihood_cdl2018_Run3_crGold.h5 \
    --names "MLP@95" \
    --names "MLP@95" \
    --names "LnL@80" \
    --names "LnL@80" \
    --centerChip 3 \
    --title "Background rate from CAST data, LnL@80%, MLP@95%" \
    --showNumClusters \
    --showTotalTime \
    --topMargin 1.5 \
    --energyDset energyFromCharge \
    --outfile background_rate_only_lnL_0.8_mlp_0.95.pdf \
    --outpath ~/Sync/ \
    --quiet
Integrated background rates (rate in cm⁻²·s⁻¹, rate/keV in keV⁻¹·cm⁻²·s⁻¹):

| Range [keV] | LnL@80 rate | LnL@80 rate/keV | MLP@95 rate | MLP@95 rate/keV |
|-------------+-------------+-----------------+-------------+-----------------|
| 0.0 .. 12.0 | 2.3288e-04  | 1.9406e-05      | 3.0605e-04  | 2.5504e-05      |
| 0.5 .. 2.5  | 6.2088e-05  | 3.1044e-05      | 4.3972e-05  | 2.1986e-05      |
| 0.5 .. 5.0  | 1.1626e-04  | 2.5836e-05      | 1.1222e-04  | 2.4937e-05      |
| 0.0 .. 2.5  | 8.9703e-05  | 3.5881e-05      | 1.2224e-04  | 4.8897e-05      |
| 4.0 .. 8.0  | 2.6383e-05  | 6.5958e-06      | 3.9927e-05  | 9.9816e-06      |
| 0.0 .. 8.0  | 1.6551e-04  | 2.0689e-05      | 2.2267e-04  | 2.7834e-05      |
| 2.0 .. 8.0  | 8.1788e-05  | 1.3631e-05      | 1.0465e-04  | 1.7442e-05      |

The log then prints the per-bin LnL@80 rates (DataFrame with 7 columns and 116 rows, truncated in the log after row 19):

| Idx | Energy | Rate   | totalTime | RateErr | Dataset | yMin   | yMax  |
|-----+--------+--------+-----------+---------+---------+--------+-------|
|   0 | 0.4    | 5.98   | 3159      | 1.026   | LnL@80  | 4.955  | 7.006 |
|   1 | 0.6    | 6.332  | 3159      | 1.055   | LnL@80  | 5.277  | 7.387 |
|   2 | 0.8    | 5.98   | 3159      | 1.026   | LnL@80  | 4.955  | 7.006 |
|   3 | 1      | 5.98   | 3159      | 1.026   | LnL@80  | 4.955  | 7.006 |
|   4 | 1.2    | 3.166  | 3159      | 0.7462  | LnL@80  | 2.42   | 3.912 |
|   5 | 1.4    | 2.99   | 3159      | 0.7252  | LnL@80  | 2.265  | 3.715 |
|   6 | 1.6    | 2.814  | 3159      | 0.7036  | LnL@80  | 2.111  | 3.518 |
|   7 | 1.8    | 3.166  | 3159      | 0.7462  | LnL@80  | 2.42   | 3.912 |
|   8 | 2      | 1.583  | 3159      | 0.5277  | LnL@80  | 1.055  | 2.111 |
|   9 | 2.2    | 1.759  | 3159      | 0.5562  | LnL@80  | 1.203  | 2.315 |
|  10 | 2.4    | 0.8794 | 3159      | 0.3933  | LnL@80  | 0.4861 | 1.273 |
|  11 | 2.6    | 1.583  | 3159      | 0.5277  | LnL@80  | 1.055  | 2.111 |
|  12 | 2.8    | 2.287  | 3159      | 0.6342  | LnL@80  | 1.652  | 2.921 |
|  13 | 3      | 5.101  | 3159      | 0.9472  | LnL@80  | 4.154  | 6.048 |
|  14 | 3.2    | 5.277  | 3159      | 0.9634  | LnL@80  | 4.313  | 6.24  |
|  15 | 3.4    | 4.221  | 3159      | 0.8617  | LnL@80  | 3.36   | 5.083 |
|  16 | 3.6    | 3.342  | 3159      | 0.7667  | LnL@80  | 2.575  | 4.109 |
|  17 | 3.8    | 2.111  | 3159      | 0.6093  | LnL@80  | 1.501  | 2.72  |
|  18 | 4      | 0.7036 | 3159      | 0.3518  | LnL@80  | 0.3518 | 1.055 |
|  19 | 4.2    | 1.055  | 3159      | 0.4308  | LnL@80  | 0.6245 | 1.486 |
[INFO]:INFO: storing plot in /home/basti/Sync/background_rate_only_lnL_0.8_mlp_0.95.pdf
[WARNING]: Printing total background time currently only supported for single datasets.
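The integrated rates in these logs are essentially cluster counts normalized by background time, gold-region area, and energy range. A minimal sketch of that normalization (this is a hypothetical re-implementation, not the plotBackgroundRate source; the 500 clusters and 0.25 cm² area are placeholder assumptions, only the 3159 h appears in the log):

```python
def background_rate(n_clusters, time_h, area_cm2, e_min_keV, e_max_keV):
    """Integrated rate and rate per keV, in the units printed above."""
    time_s = time_h * 3600.0
    rate = n_clusters / (time_s * area_cm2)        # cm⁻²·s⁻¹
    rate_keV = rate / (e_max_keV - e_min_keV)      # keV⁻¹·cm⁻²·s⁻¹
    return rate, rate_keV

# Placeholder numbers, not values from this run:
rate, rate_keV = background_rate(500, 3159, 0.25, 0.0, 12.0)
```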
Comparing now the 85% case at 500k epochs:
plotBackgroundRate \
    ~/Sync/run2_17_04_23_mlp_local_0.85_500k.h5 \
    ~/Sync/run3_17_04_23_mlp_local_0.85_500k.h5 \
    ~/org/resources/lhood_limits_automation_correct_duration/likelihood_cdl2018_Run2_crGold.h5 \
    ~/org/resources/lhood_limits_automation_correct_duration/likelihood_cdl2018_Run3_crGold.h5 \
    --names "MLP@85" \
    --names "MLP@85" \
    --names "LnL@80" \
    --names "LnL@80" \
    --centerChip 3 \
    --title "Background rate from CAST data, LnL@80%, MLP@85% 500k" \
    --showNumClusters \
    --showTotalTime \
    --topMargin 1.5 \
    --energyDset energyFromCharge \
    --outfile background_rate_only_lnL_0.8_mlp_0.85_500k.pdf \
    --outpath ~/Sync/ \
    --quiet
Integrated background rates (rate in cm⁻²·s⁻¹, rate/keV in keV⁻¹·cm⁻²·s⁻¹; the log labels the MLP dataset "MLP@95", but this is the 85% run):

| Range [keV] | LnL@80 rate | LnL@80 rate/keV | MLP@85 rate | MLP@85 rate/keV |
|-------------+-------------+-----------------+-------------+-----------------|
| 0.0 .. 12.0 | 2.3288e-04  | 1.9406e-05      | 2.4730e-04  | 2.0608e-05      |
| 0.5 .. 2.5  | 6.2088e-05  | 3.1044e-05      | 2.7087e-05  | 1.3543e-05      |
| 0.5 .. 5.0  | 1.1626e-04  | 2.5836e-05      | 8.4954e-05  | 1.8879e-05      |
| 0.0 .. 2.5  | 8.9703e-05  | 3.5881e-05      | 9.3221e-05  | 3.7288e-05      |
| 4.0 .. 8.0  | 2.6383e-05  | 6.5958e-06      | 2.8670e-05  | 7.1674e-06      |
| 0.0 .. 8.0  | 1.6551e-04  | 2.0689e-05      | 1.7413e-04  | 2.1766e-05      |
| 2.0 .. 8.0  | 8.1788e-05  | 1.3631e-05      | 8.4778e-05  | 1.4130e-05      |

(The log again prints the same truncated LnL@80 per-bin DataFrame.)
[INFO]:INFO: storing plot in /home/basti/Sync/background_rate_only_lnL_0.8_mlp_0.85_500k.pdf
[WARNING]: Printing total background time currently only supported for single datasets.
Phew, can it be any more similar?
Now we're also running 99% for the 500k epoch version.
plotBackgroundRate \
    ~/Sync/run2_17_04_23_mlp_local_0.99_500k.h5 \
    ~/Sync/run3_17_04_23_mlp_local_0.99_500k.h5 \
    ~/org/resources/lhood_limits_automation_correct_duration/likelihood_cdl2018_Run2_crGold.h5 \
    ~/org/resources/lhood_limits_automation_correct_duration/likelihood_cdl2018_Run3_crGold.h5 \
    --names "MLP@99" \
    --names "MLP@99" \
    --names "LnL@80" \
    --names "LnL@80" \
    --centerChip 3 \
    --title "Background rate from CAST data, LnL@80%, MLP@99% 500k" \
    --showNumClusters \
    --showTotalTime \
    --topMargin 1.5 \
    --energyDset energyFromCharge \
    --outfile background_rate_only_lnL_0.8_mlp_0.99_500k.pdf \
    --outpath ~/Sync/ \
    --quiet
Integrated background rates (rate in cm⁻²·s⁻¹, rate/keV in keV⁻¹·cm⁻²·s⁻¹):

| Range [keV] | LnL@80 rate | LnL@80 rate/keV | MLP@99 rate | MLP@99 rate/keV |
|-------------+-------------+-----------------+-------------+-----------------|
| 0.0 .. 12.0 | 2.3288e-04  | 1.9406e-05      | 3.8150e-04  | 3.1792e-05      |
| 0.5 .. 2.5  | 6.2088e-05  | 3.1044e-05      | 7.7918e-05  | 3.8959e-05      |
| 0.5 .. 5.0  | 1.1626e-04  | 2.5836e-05      | 1.5707e-04  | 3.4904e-05      |
| 0.0 .. 2.5  | 8.9703e-05  | 3.5881e-05      | 1.7219e-04  | 6.8878e-05      |
| 4.0 .. 8.0  | 2.6383e-05  | 6.5958e-06      | 4.9249e-05  | 1.2312e-05      |
| 0.0 .. 8.0  | 1.6551e-04  | 2.0689e-05      | 2.8951e-04  | 3.6189e-05      |
| 2.0 .. 8.0  | 8.1788e-05  | 1.3631e-05      | 1.2418e-04  | 2.0696e-05      |

(The log again prints the same truncated LnL@80 per-bin DataFrame.)
[INFO]:INFO: storing plot in /home/basti/Sync/background_rate_only_lnL_0.8_mlp_0.99_500k.pdf
[WARNING]: Printing total background time currently only supported for single datasets.
Effective efficiency at 99% with 500k:
./effective_eff_55fe \
    ~/CastData/data/CalibrationRuns2017_Reco.h5 \
    ~/CastData/data/CalibrationRuns2018_Reco.h5 \
    --model ~/org/resources/nn_devel_mixing/17_04_23_sgd_gauss_diffusion/trained_mlp_sgd_gauss_diffusioncheckpoint_epoch_500000_loss_0.0106_acc_0.9977.pt \
    --ε 0.99 \
    --cdlFile ~/CastData/data/CDL_2019/CDL_2019_Reco.h5 \
    --evaluateFit \
    --plotDatasets \
    --plotPath ~/Sync/run2_run3_gradient_descent_eff_0.99/
Yields:
Wow, so it actually works better than at 80%!
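As a reminder of what effective_eff_55fe measures: the cut sits at a percentile of the network outputs on the reference data, and the effective efficiency is the fraction of real ⁵⁵Fe events above that cut. A toy sketch of that idea with Gaussian placeholder scores (not real MLP outputs, and not the tool's actual code):

```python
import random

random.seed(42)
# Placeholder "network scores": a reference sample that defines the cut
# and a slightly shifted 55Fe-like sample evaluated against it.
reference = sorted(random.gauss(2.0, 0.5) for _ in range(100_000))
cut = reference[int(0.05 * len(reference))]   # 5th percentile -> keep ~95%
fe55 = [random.gauss(1.9, 0.6) for _ in range(50_000)]
effective_eff = sum(s > cut for s in fe55) / len(fe55)
# effective_eff deviates from the 95% target exactly because the 55Fe
# score distribution is shifted relative to the reference.
```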
1.35.2. Training again with AdamW
./train_ingrid \
    ~/CastData/data/CalibrationRuns2017_Reco.h5 \
    ~/CastData/data/CalibrationRuns2018_Reco.h5 \
    --back ~/CastData/data/DataRuns2017_Reco.h5 \
    --back ~/CastData/data/DataRuns2018_Reco.h5 \
    --modelOutpath ~/org/resources/nn_devel_mixing/21_04_23_adamW_gauss_diffusion/trained_mlp_adamW_gauss_diffusion.pt \
    --plotPath ~/Sync/21_04_23_adamW_sim_diffusion_gain/ \
    --numHidden 2500 \
    --datasets eccentricity \
    --datasets skewnessLongitudinal \
    --datasets skewnessTransverse \
    --datasets kurtosisLongitudinal \
    --datasets kurtosisTransverse \
    --datasets length \
    --datasets width \
    --datasets rmsLongitudinal \
    --datasets rmsTransverse \
    --datasets lengthDivRmsTrans \
    --datasets rotationAngle \
    --datasets fractionInTransverseRms \
    --datasets σT \
    --learningRate 3e-4 \
    --simulatedData
After the training I started another 100k epochs!
Running the likelihood with the trained network (at 85%):
likelihood -f ~/CastData/data/DataRuns2017_Reco.h5 \
    --h5out ~/Sync/run2_21_04_23_adamW_local_0.85.h5 \
    --region crGold \
    --cdlYear 2018 \
    --cdlFile ~/CastData/data/CDL_2019/calibration-cdl-2018.h5 \
    --mlp ~/org/resources/nn_devel_mixing/21_04_23_adamW_gauss_diffusion/trained_mlp_adamW_gauss_diffusion.pt \
    --nnSignalEff 0.85 \
    --nnCutKind runLocal \
    --calibFile ~/CastData/data/CalibrationRuns2017_Reco.h5 \
    --readonly

likelihood -f ~/CastData/data/DataRuns2018_Reco.h5 \
    --h5out ~/Sync/run3_21_04_23_adamW_local_0.85.h5 \
    --region crGold \
    --cdlYear 2018 \
    --cdlFile ~/CastData/data/CDL_2019/calibration-cdl-2018.h5 \
    --mlp ~/org/resources/nn_devel_mixing/21_04_23_adamW_gauss_diffusion/trained_mlp_adamW_gauss_diffusion.pt \
    --nnSignalEff 0.85 \
    --nnCutKind runLocal \
    --calibFile ~/CastData/data/CalibrationRuns2018_Reco.h5 \
    --readonly
and plotting:
plotBackgroundRate \
    ~/Sync/run2_21_04_23_adamW_local_0.85.h5 \
    ~/Sync/run3_21_04_23_adamW_local_0.85.h5 \
    ~/org/resources/lhood_limits_automation_correct_duration/likelihood_cdl2018_Run2_crGold.h5 \
    ~/org/resources/lhood_limits_automation_correct_duration/likelihood_cdl2018_Run3_crGold.h5 \
    --names "MLP@85" \
    --names "MLP@85" \
    --names "LnL@80" \
    --names "LnL@80" \
    --centerChip 3 \
    --title "Background rate from CAST data, LnL@80%, AdamW MLP@85%" \
    --showNumClusters \
    --showTotalTime \
    --topMargin 1.5 \
    --energyDset energyFromCharge \
    --outfile background_rate_only_lnL_0.8_adamW_mlp_0.85.pdf \
    --outpath ~/Sync/ \
    --quiet
Integrated background rates (rate in cm⁻²·s⁻¹, rate/keV in keV⁻¹·cm⁻²·s⁻¹):

| Range [keV] | LnL@80 rate | LnL@80 rate/keV | MLP@85 rate | MLP@85 rate/keV |
|-------------+-------------+-----------------+-------------+-----------------|
| 0.0 .. 12.0 | 2.3288e-04  | 1.9406e-05      | 1.9365e-04  | 1.6138e-05      |
| 0.5 .. 2.5  | 6.2088e-05  | 3.1044e-05      | 3.6409e-05  | 1.8204e-05      |
| 0.5 .. 5.0  | 1.1626e-04  | 2.5836e-05      | 8.5130e-05  | 1.8918e-05      |
| 0.0 .. 2.5  | 8.9703e-05  | 3.5881e-05      | 8.0029e-05  | 3.2012e-05      |
| 4.0 .. 8.0  | 2.6383e-05  | 6.5958e-06      | 2.8142e-05  | 7.0355e-06      |
| 0.0 .. 8.0  | 1.6551e-04  | 2.0689e-05      | 1.5074e-04  | 1.8842e-05      |
| 2.0 .. 8.0  | 8.1788e-05  | 1.3631e-05      | 7.5984e-05  | 1.2664e-05      |

(The log again prints the same truncated LnL@80 per-bin DataFrame.)
[INFO]:INFO: storing plot in /home/basti/Sync/background_rate_only_lnL_0.8_adamW_mlp_0.85.pdf
[WARNING]: Printing total background time currently only supported for single datasets.
This network now needs an effective efficiency:
./effective_eff_55fe \
    ~/CastData/data/CalibrationRuns2017_Reco.h5 \
    ~/CastData/data/CalibrationRuns2018_Reco.h5 \
    --model ~/org/resources/nn_devel_mixing/21_04_23_adamW_gauss_diffusion/trained_mlp_adamW_gauss_diffusion.pt \
    --ε 0.85 \
    --cdlFile ~/CastData/data/CDL_2019/CDL_2019_Reco.h5 \
    --evaluateFit \
    --plotDatasets \
    --plotPath ~/Sync/run2_run3_adamW_gradient_descent_eff_0.85/
Ouch, while the mean is good, the variance is horrific!
Maybe this is related to the horribly bad loss? The accuracy is still good, but the loss (which is what we actually optimize for, after all) is terrible.
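A quick toy illustration (not our training code) of why accuracy and loss can disagree like this: cross entropy blows up when the network becomes very confident on the few events it misclassifies, even though the thresholded 0/1 decisions, and hence the accuracy, are unchanged:

```python
import math

def bce(p, y):
    """Binary cross entropy for one event with predicted probability p."""
    return -(y * math.log(p) + (1 - y) * math.log(1 - p))

# One signal event (y = 1) that the network gets wrong either way
# (p < 0.5), so accuracy at threshold 0.5 is identical in both cases:
moderate  = bce(0.4,  1)   # wrong, mildly confident
confident = bce(0.01, 1)   # wrong, very confident -> much larger loss
```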
And for 95%:
plotBackgroundRate \
    ~/Sync/run2_21_04_23_adamW_local_0.95.h5 \
    ~/Sync/run3_21_04_23_adamW_local_0.95.h5 \
    ~/org/resources/lhood_limits_automation_correct_duration/likelihood_cdl2018_Run2_crGold.h5 \
    ~/org/resources/lhood_limits_automation_correct_duration/likelihood_cdl2018_Run3_crGold.h5 \
    --names "MLP@95" \
    --names "MLP@95" \
    --names "LnL@80" \
    --names "LnL@80" \
    --centerChip 3 \
    --title "Background rate from CAST data, LnL@80%, AdamW MLP@95%" \
    --showNumClusters \
    --showTotalTime \
    --topMargin 1.5 \
    --energyDset energyFromCharge \
    --outfile background_rate_only_lnL_0.8_adamW_mlp_0.95.pdf \
    --outpath ~/Sync/ \
    --quiet
Integrated background rates (rate in cm⁻²·s⁻¹, rate/keV in keV⁻¹·cm⁻²·s⁻¹):

| Range [keV] | LnL@80 rate | LnL@80 rate/keV | MLP@95 rate | MLP@95 rate/keV |
|-------------+-------------+-----------------+-------------+-----------------|
| 0.0 .. 12.0 | 2.3288e-04  | 1.9406e-05      | 2.4660e-04  | 2.0550e-05      |
| 0.5 .. 2.5  | 6.2088e-05  | 3.1044e-05      | 4.7842e-05  | 2.3921e-05      |
| 0.5 .. 5.0  | 1.1626e-04  | 2.5836e-05      | 1.0694e-04  | 2.3764e-05      |
| 0.0 .. 2.5  | 8.9703e-05  | 3.5881e-05      | 1.0501e-04  | 4.2002e-05      |
| 4.0 .. 8.0  | 2.6383e-05  | 6.5958e-06      | 3.0956e-05  | 7.7391e-06      |
| 0.0 .. 8.0  | 1.6551e-04  | 2.0689e-05      | 1.8820e-04  | 2.3525e-05      |
| 2.0 .. 8.0  | 8.1788e-05  | 1.3631e-05      | 8.8823e-05  | 1.4804e-05      |

(The log again prints the same truncated LnL@80 per-bin DataFrame.)
[INFO]:INFO: storing plot in /home/basti/Sync/background_rate_only_lnL_0.8_adamW_mlp_0.95.pdf
[WARNING]: Printing total background time currently only supported for single datasets.
1.35.3. Training an SGD network including total charge
./train_ingrid \
    ~/CastData/data/CalibrationRuns2017_Reco.h5 \
    ~/CastData/data/CalibrationRuns2018_Reco.h5 \
    --back ~/CastData/data/DataRuns2017_Reco.h5 \
    --back ~/CastData/data/DataRuns2018_Reco.h5 \
    --modelOutpath ~/org/resources/nn_devel_mixing/21_04_23_sgd_gauss_diffusion/trained_mlp_sgd_gauss_diffusion.pt \
    --plotPath ~/Sync/21_04_23_sgd_sim_diffusion_gain/ \
    --numHidden 2500 \
    --datasets eccentricity \
    --datasets skewnessLongitudinal \
    --datasets skewnessTransverse \
    --datasets kurtosisLongitudinal \
    --datasets kurtosisTransverse \
    --datasets length \
    --datasets width \
    --datasets rmsLongitudinal \
    --datasets rmsTransverse \
    --datasets lengthDivRmsTrans \
    --datasets rotationAngle \
    --datasets fractionInTransverseRms \
    --datasets totalCharge \
    --datasets σT \
    --learningRate 7e-4 \
    --simulatedData
Note the larger learning rate, due to the very slow convergence before!
Trained for 400k epochs using learning rate 7e-4, then lowered to 3e-4 for another 100k.
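The schedule just described, as a tiny sketch (the boundary and the two rates come straight from the note above; the function name is made up for illustration):

```python
def learning_rate(epoch: int) -> float:
    """Step schedule used here: 7e-4 for the first 400k epochs,
    then dropped to 3e-4 for the final 100k."""
    return 7e-4 if epoch < 400_000 else 3e-4
```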
Effective efficiency at 80% for this network:
./effective_eff_55fe \
    ~/CastData/data/CalibrationRuns2017_Reco.h5 \
    ~/CastData/data/CalibrationRuns2018_Reco.h5 \
    --model ~/org/resources/nn_devel_mixing/21_04_23_sgd_gauss_diffusion/trained_mlp_sgd_gauss_diffusioncheckpoint_epoch_500000_loss_0.0084_acc_0.9981.pt \
    --ε 0.80 \
    --cdlFile ~/CastData/data/CDL_2019/CDL_2019_Reco.h5 \
    --evaluateFit \
    --plotDatasets \
    --plotPath ~/Sync/run2_run3_21_04_23_sgd_gradient_descent_eff_0.8/
So generally a good 7.5 percentage points too low: ~72.5% real efficiency compared to the 80% target. This does make some sense, as the charge distributions are presumably still quite different from what we target. Maybe we should look into that again to see if we can get better alignment there?
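For clarity, the "7.5% too low" above is an absolute gap in percentage points; relative to the 80% target it is closer to 9%:

```python
target   = 0.80
measured = 0.725                       # effective efficiency found above
absolute_gap = target - measured       # 0.075, i.e. 7.5 percentage points
relative_gap = absolute_gap / target   # ~9.4% relative to the target
```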
Despite the low efficiency, let's look at the background rate of this case:
plotBackgroundRate \
    ~/Sync/run2_21_04_23_mlp_local_0.80_500k.h5 \
    ~/Sync/run3_21_04_23_mlp_local_0.80_500k.h5 \
    ~/org/resources/lhood_limits_automation_correct_duration/likelihood_cdl2018_Run2_crGold.h5 \
    ~/org/resources/lhood_limits_automation_correct_duration/likelihood_cdl2018_Run3_crGold.h5 \
    --names "MLP@80" \
    --names "MLP@80" \
    --names "LnL@80" \
    --names "LnL@80" \
    --centerChip 3 \
    --title "Background rate from CAST data, LnL@80%, SGD w/ charge MLP@80%" \
    --showNumClusters \
    --showTotalTime \
    --topMargin 1.5 \
    --energyDset energyFromCharge \
    --outfile background_rate_only_lnL_0.8_sgd_21_04_23_mlp_0.8.pdf \
    --outpath ~/Sync/ \
    --quiet
Integrated background rates (rate in cm⁻²·s⁻¹, rate/keV in keV⁻¹·cm⁻²·s⁻¹):

| Range [keV] | LnL@80 rate | LnL@80 rate/keV | MLP@80 rate | MLP@80 rate/keV |
|-------------+-------------+-----------------+-------------+-----------------|
| 0.0 .. 12.0 | 2.3288e-04  | 1.9406e-05      | 1.9664e-04  | 1.6387e-05      |
| 0.5 .. 2.5  | 6.2088e-05  | 3.1044e-05      | 1.9524e-05  | 9.7618e-06      |
| 0.5 .. 5.0  | 1.1626e-04  | 2.5836e-05      | 7.3521e-05  | 1.6338e-05      |
| 0.0 .. 2.5  | 8.9703e-05  | 3.5881e-05      | 6.2440e-05  | 2.4976e-05      |
| 4.0 .. 8.0  | 2.6383e-05  | 6.5958e-06      | 2.2514e-05  | 5.6284e-06      |
| 0.0 .. 8.0  | 1.6551e-04  | 2.0689e-05      | 1.3438e-04  | 1.6797e-05      |
| 2.0 .. 8.0  | 8.1788e-05  | 1.3631e-05      | 7.4401e-05  | 1.2400e-05      |

(The log again prints the same truncated LnL@80 per-bin DataFrame.)
[INFO]:INFO: storing plot in /home/basti/Sync/background_rate_only_lnL_0.8_sgd_21_04_23_mlp_0.8.pdf
[WARNING]: Printing total background time currently only supported for single datasets.
But let's check 95% and see where we end up there.
./effective_eff_55fe \
    ~/CastData/data/CalibrationRuns2017_Reco.h5 \
    ~/CastData/data/CalibrationRuns2018_Reco.h5 \
    --model ~/org/resources/nn_devel_mixing/21_04_23_sgd_gauss_diffusion/trained_mlp_sgd_gauss_diffusioncheckpoint_epoch_500000_loss_0.0084_acc_0.9981.pt \
    --ε 0.95 \
    --cdlFile ~/CastData/data/CDL_2019/CDL_2019_Reco.h5 \
    --evaluateFit \
    --plotDatasets \
    --plotPath ~/Sync/run2_run3_21_04_23_sgd_gradient_descent_eff_0.95/
Interesting! At this efficiency the match is much better than at 80%.
./effective_eff_55fe \
    ~/CastData/data/CalibrationRuns2017_Reco.h5 \
    ~/CastData/data/CalibrationRuns2018_Reco.h5 \
    --model ~/org/resources/nn_devel_mixing/21_04_23_sgd_gauss_diffusion/trained_mlp_sgd_gauss_diffusioncheckpoint_epoch_500000_loss_0.0084_acc_0.9981.pt \
    --ε 0.99 \
    --cdlFile ~/CastData/data/CDL_2019/CDL_2019_Reco.h5 \
    --evaluateFit \
    --plotDatasets \
    --plotPath ~/Sync/run2_run3_21_04_23_sgd_gradient_descent_eff_0.99/
And at 99% it's even better! Nice.
What does the background look like at 95% and 99%?
plotBackgroundRate \
    ~/Sync/run2_21_04_23_mlp_local_0.95_500k.h5 \
    ~/Sync/run3_21_04_23_mlp_local_0.95_500k.h5 \
    ~/org/resources/lhood_limits_automation_correct_duration/likelihood_cdl2018_Run2_crGold.h5 \
    ~/org/resources/lhood_limits_automation_correct_duration/likelihood_cdl2018_Run3_crGold.h5 \
    --names "MLP@95" \
    --names "MLP@95" \
    --names "LnL@80" \
    --names "LnL@80" \
    --centerChip 3 \
    --title "Background rate from CAST data, LnL@80%, SGD w/ charge MLP@95%" \
    --showNumClusters \
    --showTotalTime \
    --topMargin 1.5 \
    --energyDset energyFromCharge \
    --outfile background_rate_only_lnL_0.8_sgd_21_04_23_mlp_0.95.pdf \
    --outpath ~/Sync/ \
    --quiet
Integrated background rates, LnL@80 vs. MLP@95, in cm⁻²·s⁻¹ (per keV in keV⁻¹·cm⁻²·s⁻¹):
- 0.0 .. 12.0 keV: LnL@80 2.3288e-04 (1.9406e-05), MLP@95 2.9180e-04 (2.4317e-05)
- 0.5 .. 2.5 keV: LnL@80 6.2088e-05 (3.1044e-05), MLP@95 3.9399e-05 (1.9699e-05)
- 0.5 .. 5.0 keV: LnL@80 1.1626e-04 (2.5836e-05), MLP@95 1.0465e-04 (2.3256e-05)
- 0.0 .. 2.5 keV: LnL@80 8.9703e-05 (3.5881e-05), MLP@95 1.1521e-04 (4.6083e-05)
- 4.0 .. 8.0 keV: LnL@80 2.6383e-05 (6.5958e-06), MLP@95 3.5705e-05 (8.9263e-06)
- 0.0 .. 8.0 keV: LnL@80 1.6551e-04 (2.0689e-05), MLP@95 2.0948e-04 (2.6185e-05)
- 2.0 .. 8.0 keV: LnL@80 8.1788e-05 (1.3631e-05), MLP@95 9.8146e-05 (1.6358e-05)
(truncated dump of the 116-row rate DataFrame omitted)
[INFO]: storing plot in /home/basti/Sync/background_rate_only_lnL_0.8_sgd_21_04_23_mlp_0.95.pdf [WARNING]: Printing total background time currently only supported for single datasets.
plotBackgroundRate \
  ~/Sync/run2_21_04_23_mlp_local_0.99_500k.h5 \
  ~/Sync/run3_21_04_23_mlp_local_0.99_500k.h5 \
  ~/org/resources/lhood_limits_automation_correct_duration/likelihood_cdl2018_Run2_crGold.h5 \
  ~/org/resources/lhood_limits_automation_correct_duration/likelihood_cdl2018_Run3_crGold.h5 \
  --names "MLP@99" \
  --names "MLP@99" \
  --names "LnL@80" \
  --names "LnL@80" \
  --centerChip 3 \
  --title "Background rate from CAST data, LnL@80%, SGD w/ charge MLP@99%" \
  --showNumClusters \
  --showTotalTime \
  --topMargin 1.5 \
  --energyDset energyFromCharge \
  --outfile background_rate_only_lnL_0.8_sgd_21_04_23_mlp_0.99.pdf \
  --outpath ~/Sync/ \
  --quiet
Integrated background rates, LnL@80 vs. MLP@99, in cm⁻²·s⁻¹ (per keV in keV⁻¹·cm⁻²·s⁻¹):
- 0.0 .. 12.0 keV: LnL@80 2.3288e-04 (1.9406e-05), MLP@99 3.5705e-04 (2.9754e-05)
- 0.5 .. 2.5 keV: LnL@80 6.2088e-05 (3.1044e-05), MLP@99 6.7541e-05 (3.3771e-05)
- 0.5 .. 5.0 keV: LnL@80 1.1626e-04 (2.5836e-05), MLP@99 1.4352e-04 (3.1894e-05)
- 0.0 .. 2.5 keV: LnL@80 8.9703e-05 (3.5881e-05), MLP@99 1.5707e-04 (6.2827e-05)
- 4.0 .. 8.0 keV: LnL@80 2.6383e-05 (6.5958e-06), MLP@99 4.4851e-05 (1.1213e-05)
- 0.0 .. 8.0 keV: LnL@80 1.6551e-04 (2.0689e-05), MLP@99 2.6823e-04 (3.3529e-05)
- 2.0 .. 8.0 keV: LnL@80 8.1788e-05 (1.3631e-05), MLP@99 1.1679e-04 (1.9465e-05)
(truncated dump of the 116-row rate DataFrame omitted)
[INFO]: storing plot in /home/basti/Sync/background_rate_only_lnL_0.8_sgd_21_04_23_mlp_0.99.pdf [WARNING]: Printing total background time currently only supported for single datasets.
1.35.4. Investigate huge loss values for AdamW and sometimes SGD
Let's see what happens there. Fortunately we have a network that reproduces it!
Ah, but the loss is directly the output of the sigmoid_cross_entropy
function…
Chatting with BingChat helped me understand some things, but didn't really answer why it leads to such a big loss.
Instead of our current forward definition:
var x = net.hidden.forward(x).relu()
return net.classifier.forward(x).squeeze(1)
we can use:
var x = net.hidden.forward(x).relu()
return net.classifier.forward(x).tanh().squeeze(1)
which should then make a loss like MSE or L1 more stable, I think. Could be worth a try.
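To see why a bounded output could help, here is a small numerical sketch in plain Python (illustrative only, not the actual Nim/torch training code): with an unbounded logit, sigmoid cross entropy on a confidently wrong prediction grows roughly linearly with the logit, while a tanh-bounded output keeps an MSE penalty capped at 1.

```python
import math

def sigmoid(z: float) -> float:
    return 1.0 / (1.0 + math.exp(-z))

def bce(p: float, y: float, eps: float = 1e-12) -> float:
    # binary cross entropy on a probability p for target y
    return -(y * math.log(p + eps) + (1.0 - y) * math.log(1.0 - p + eps))

def mse(p: float, y: float) -> float:
    return (p - y) ** 2

# a confidently wrong prediction: target y = 0, raw logit grows
for z in [2.0, 10.0, 20.0]:
    print(f"logit={z:5.1f}  BCE(sigmoid)={bce(sigmoid(z), 0.0):7.2f}  "
          f"MSE(tanh)={mse(math.tanh(z), 0.0):.4f}")
# BCE grows roughly linearly with the logit; MSE(tanh) never exceeds 1
```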
1.35.5. TODO Train a network with hits as an input?
Maybe that helps a lot?
-> We're currently training another net with total charge as input.
1.35.6. TODO Can we train a network that focuses on separating background and calibration data as far apart as possible?
But then again this is what the loss effectively does anyway, no?
The question is how BCE loss actually works. Need to understand that. Does it penalize "more or less wrong" predictions proportionally? Or does it only distinguish "correct" vs. "wrong"?
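For intuition, a quick numerical check in plain Python (illustrative, not the training code) shows that BCE does penalize "more or less wrong" gradually: the loss rises smoothly as the predicted probability moves away from the target and diverges for confident mistakes.

```python
import math

def bce(p: float, y: float, eps: float = 1e-12) -> float:
    # binary cross entropy on a probability p for target y
    return -(y * math.log(p + eps) + (1.0 - y) * math.log(1.0 - p + eps))

# target y = 1 (signal); predictions from "slightly wrong" to "very wrong"
losses = [bce(p, 1.0) for p in (0.9, 0.6, 0.4, 0.1, 0.01)]
print([round(l, 3) for l in losses])  # [0.105, 0.511, 0.916, 2.303, 4.605]
```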
1.35.7. TODO Network with 2 hidden layers
Two very small ones:
- 100 neurons with tanh
- 100 neurons with gelu activation
I also tried ELU and Leaky_ReLU, but they gave only NaNs in the loss. No idea why.
Still, maybe try something with tanh/sigmoid on the output layer and MSE loss.
./train_ingrid \
  ~/CastData/data/CalibrationRuns2017_Reco.h5 \
  ~/CastData/data/CalibrationRuns2018_Reco.h5 \
  --back ~/CastData/data/DataRuns2017_Reco.h5 \
  --back ~/CastData/data/DataRuns2018_Reco.h5 \
  --modelOutpath ~/org/resources/nn_devel_mixing/22_04_23_sgd_gauss_diffusion_2hidden/trained_mlp_sgd_gauss_diffusion_2hidden.pt \
  --plotPath ~/Sync/22_04_23_sgd_sim_diffusion_gain_2hidden/ \
  --numHidden 100 \
  --datasets eccentricity \
  --datasets skewnessLongitudinal \
  --datasets skewnessTransverse \
  --datasets kurtosisLongitudinal \
  --datasets kurtosisTransverse \
  --datasets length \
  --datasets width \
  --datasets rmsLongitudinal \
  --datasets rmsTransverse \
  --datasets lengthDivRmsTrans \
  --datasets rotationAngle \
  --datasets fractionInTransverseRms \
  --datasets totalCharge \
  --datasets σT \
  --learningRate 7e-4 \
  --simulatedData
We continue training this network for 100k epochs more. Using 100k simulated events now instead of 30k.
./train_ingrid \
  ~/CastData/data/CalibrationRuns2017_Reco.h5 \
  ~/CastData/data/CalibrationRuns2018_Reco.h5 \
  --back ~/CastData/data/DataRuns2017_Reco.h5 \
  --back ~/CastData/data/DataRuns2018_Reco.h5 \
  --modelOutpath ~/org/resources/nn_devel_mixing/22_04_23_sgd_gauss_diffusion_2hidden/trained_mlp_sgd_gauss_diffusion_2hidden.pt \
  --plotPath ~/Sync/22_04_23_sgd_sim_diffusion_gain_2hidden/ \
  --numHidden 100 \
  --datasets eccentricity \
  --datasets skewnessLongitudinal \
  --datasets skewnessTransverse \
  --datasets kurtosisLongitudinal \
  --datasets kurtosisTransverse \
  --datasets length \
  --datasets width \
  --datasets rmsLongitudinal \
  --datasets rmsTransverse \
  --datasets lengthDivRmsTrans \
  --datasets rotationAngle \
  --datasets fractionInTransverseRms \
  --datasets totalCharge \
  --datasets σT \
  --learningRate 7e-4 \
  --simulatedData
Now repeat tanh network, but with all training data & 1000 hidden neurons each:
./train_ingrid \
  ~/CastData/data/CalibrationRuns2017_Reco.h5 \
  ~/CastData/data/CalibrationRuns2018_Reco.h5 \
  --back ~/CastData/data/DataRuns2017_Reco.h5 \
  --back ~/CastData/data/DataRuns2018_Reco.h5 \
  --modelOutpath ~/org/resources/nn_devel_mixing/22_04_23_sgd_gauss_diffusion_2hidden_tanh1000/trained_mlp_sgd_gauss_diffusion_2hidden_tanh1000.pt \
  --plotPath ~/Sync/22_04_23_sgd_sim_diffusion_gain_2hidden_tanh1000/ \
  --numHidden 1000 \
  --datasets eccentricity \
  --datasets skewnessLongitudinal \
  --datasets skewnessTransverse \
  --datasets kurtosisLongitudinal \
  --datasets kurtosisTransverse \
  --datasets length \
  --datasets width \
  --datasets rmsLongitudinal \
  --datasets rmsTransverse \
  --datasets lengthDivRmsTrans \
  --datasets rotationAngle \
  --datasets fractionInTransverseRms \
  --datasets totalCharge \
  --datasets σT \
  --learningRate 7e-4 \
  --simulatedData
1.35.8. Effective efficiency with escape photon data
Back to the effective efficiency: the plot referenced here includes the effective efficiency based on the escape peak data in the CAST data. It used:
./effective_eff_55fe \
  ~/CastData/data/CalibrationRuns2017_Reco.h5 \
  ~/CastData/data/CalibrationRuns2018_Reco.h5 \
  --model ~/org/resources/nn_devel_mixing/17_04_23_sgd_gauss_diffusion/trained_mlp_sgd_gauss_diffusioncheckpoint_epoch_400000_loss_0.0115_acc_0.9975.pt \
  --ε 0.8 \
  --cdlFile ~/CastData/data/CDL_2019/CDL_2019_Reco.h5 \
  --evaluateFit \
  --plotDatasets \
  --plotPath ~/Sync/run2_run3_incl_escape_photons
So unfortunately the values for the escape data are actually even a bit lower. This works by generating 3.0 keV data with the absorption length of the 5.9 keV data, so the equivalence is surely not perfect.
1.36.
Wrote a small helper studyMLP that runs:
- effective efficiency
- likelihood for 2017 and 2018 data
for a list of target efficiencies for a given MLP model.
./studyMLP 0.85 0.95 0.99 \
  --model ~/org/resources/nn_devel_mixing/22_04_23_sgd_gauss_diffusion_2hidden/trained_mlp_sgd_gauss_diffusion_2hiddencheckpoint_epoch_400000_loss_0.0067_acc_0.9984.pt \
  --plotPath ~/Sync/22_04_23_sgd_sim_diffusion_gain_2hidden/ \
  --h5out ~/Sync/22_04_23_tanh_hidden2_100.h5
Let's see!
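Conceptually, such a helper just loops over the target efficiencies and shells out to the two programs. A hypothetical plain-Python sketch of that loop (the helper name build_commands and any flags beyond --model, --ε, and --h5out, e.g. --year and --eff, are made up for illustration; the real studyMLP CLI may differ):

```python
import subprocess
from typing import List

def build_commands(effs: List[float], model: str, h5out: str) -> List[List[str]]:
    """Build one effective-efficiency run plus two likelihood runs
    (2017 and 2018 data) per target efficiency."""
    cmds = []
    for eff in effs:
        # effective efficiency for this target efficiency
        cmds.append(["./effective_eff_55fe", "--model", model, "--ε", str(eff)])
        # likelihood runs for the 2017 and 2018 data (flags illustrative)
        for year in ("2017", "2018"):
            cmds.append(["likelihood", "--model", model,
                         "--eff", str(eff), "--year", year, "--h5out", h5out])
    return cmds

cmds = build_commands([0.85, 0.95, 0.99], "trained_mlp.pt", "out.h5")
print(len(cmds))  # 3 efficiencies x 3 runs each = 9
# for cmd in cmds: subprocess.run(cmd, check=True)  # would execute them
```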
Hmm, we have some issue: the likelihood output files are on the order of 1.5-2.5 GB!
Investigating…
Ah: I didn't recompile likelihood, so it still used the single-layer MLP instead of the two-layer one. Therefore the weights obviously didn't match properly.
The above ran properly now and produced the following effective efficiency plots:
Background rate tanh 2 layer 100 neurons 85%:
plotBackgroundRate \
  ~/Sync/22_04_23_tanh_hidden2_100eff_0.85_run2.h5 \
  ~/Sync/22_04_23_tanh_hidden2_100eff_0.85_run3.h5 \
  ~/org/resources/lhood_limits_automation_correct_duration/likelihood_cdl2018_Run2_crGold.h5 \
  ~/org/resources/lhood_limits_automation_correct_duration/likelihood_cdl2018_Run3_crGold.h5 \
  --names "MLP@85" \
  --names "MLP@85" \
  --names "LnL@80" \
  --names "LnL@80" \
  --centerChip 3 \
  --title "Background rate from CAST data, LnL@80%, SGD tanh100 MLP@85%" \
  --showNumClusters \
  --showTotalTime \
  --topMargin 1.5 \
  --energyDset energyFromCharge \
  --outfile background_rate_only_lnL_0.8_sgd_tanh100_22_04_23_mlp_0.85.pdf \
  --outpath ~/Sync/ \
  --quiet
Integrated background rates, LnL@80 vs. MLP@85, in cm⁻²·s⁻¹ (per keV in keV⁻¹·cm⁻²·s⁻¹):
- 0.0 .. 12.0 keV: LnL@80 2.3288e-04 (1.9406e-05), MLP@85 1.9594e-04 (1.6328e-05)
- 0.5 .. 2.5 keV: LnL@80 6.2088e-05 (3.1044e-05), MLP@85 2.0579e-05 (1.0289e-05)
- 0.5 .. 5.0 keV: LnL@80 1.1626e-04 (2.5836e-05), MLP@85 7.4577e-05 (1.6573e-05)
- 0.0 .. 2.5 keV: LnL@80 8.9703e-05 (3.5881e-05), MLP@85 5.5405e-05 (2.2162e-05)
- 4.0 .. 8.0 keV: LnL@80 2.6383e-05 (6.5958e-06), MLP@85 2.5328e-05 (6.3320e-06)
- 0.0 .. 8.0 keV: LnL@80 1.6551e-04 (2.0689e-05), MLP@85 1.3033e-04 (1.6292e-05)
- 2.0 .. 8.0 keV: LnL@80 8.1788e-05 (1.3631e-05), MLP@85 7.7918e-05 (1.2986e-05)
(truncated dump of the 116-row rate DataFrame omitted)
[INFO]: storing plot in /home/basti/Sync/background_rate_only_lnL_0.8_sgd_tanh100_22_04_23_mlp_0.85.pdf [WARNING]: Printing total background time currently only supported for single datasets.
plotBackgroundRate \
  ~/Sync/22_04_23_tanh_hidden2_100eff_0.95_run2.h5 \
  ~/Sync/22_04_23_tanh_hidden2_100eff_0.95_run3.h5 \
  ~/org/resources/lhood_limits_automation_correct_duration/likelihood_cdl2018_Run2_crGold.h5 \
  ~/org/resources/lhood_limits_automation_correct_duration/likelihood_cdl2018_Run3_crGold.h5 \
  --names "MLP@95" \
  --names "MLP@95" \
  --names "LnL@80" \
  --names "LnL@80" \
  --centerChip 3 \
  --title "Background rate from CAST data, LnL@80%, SGD tanh100 MLP@95%" \
  --showNumClusters \
  --showTotalTime \
  --topMargin 1.5 \
  --energyDset energyFromCharge \
  --outfile background_rate_only_lnL_0.8_sgd_tanh100_22_04_23_mlp_0.95.pdf \
  --outpath ~/Sync/ \
  --quiet
Integrated background rates, LnL@80 vs. MLP@95, in cm⁻²·s⁻¹ (per keV in keV⁻¹·cm⁻²·s⁻¹):
- 0.0 .. 12.0 keV: LnL@80 2.3288e-04 (1.9406e-05), MLP@95 2.6278e-04 (2.1898e-05)
- 0.5 .. 2.5 keV: LnL@80 6.2088e-05 (3.1044e-05), MLP@95 3.3771e-05 (1.6885e-05)
- 0.5 .. 5.0 keV: LnL@80 1.1626e-04 (2.5836e-05), MLP@95 9.8673e-05 (2.1927e-05)
- 0.0 .. 2.5 keV: LnL@80 8.9703e-05 (3.5881e-05), MLP@95 9.1462e-05 (3.6585e-05)
- 4.0 .. 8.0 keV: LnL@80 2.6383e-05 (6.5958e-06), MLP@95 3.2715e-05 (8.1788e-06)
- 0.0 .. 8.0 keV: LnL@80 1.6551e-04 (2.0689e-05), MLP@95 1.8275e-04 (2.2843e-05)
- 2.0 .. 8.0 keV: LnL@80 8.1788e-05 (1.3631e-05), MLP@95 9.5155e-05 (1.5859e-05)
(truncated dump of the 116-row rate DataFrame omitted)
[INFO]: storing plot in /home/basti/Sync/background_rate_only_lnL_0.8_sgd_tanh100_22_04_23_mlp_0.95.pdf [WARNING]: Printing total background time currently only supported for single datasets.
plotBackgroundRate \
  ~/Sync/22_04_23_tanh_hidden2_100eff_0.99_run2.h5 \
  ~/Sync/22_04_23_tanh_hidden2_100eff_0.99_run3.h5 \
  ~/org/resources/lhood_limits_automation_correct_duration/likelihood_cdl2018_Run2_crGold.h5 \
  ~/org/resources/lhood_limits_automation_correct_duration/likelihood_cdl2018_Run3_crGold.h5 \
  --names "MLP@99" \
  --names "MLP@99" \
  --names "LnL@80" \
  --names "LnL@80" \
  --centerChip 3 \
  --title "Background rate from CAST data, LnL@80%, SGD tanh100 MLP@99%" \
  --showNumClusters \
  --showTotalTime \
  --topMargin 1.5 \
  --energyDset energyFromCharge \
  --outfile background_rate_only_lnL_0.8_sgd_tanh100_22_04_23_mlp_0.99.pdf \
  --outpath ~/Sync/ \
  --quiet
Integrated background rates, LnL@80 vs. MLP@99, in cm⁻²·s⁻¹ (per keV in keV⁻¹·cm⁻²·s⁻¹):
- 0.0 .. 12.0 keV: LnL@80 2.3288e-04 (1.9406e-05), MLP@99 3.3911e-04 (2.8259e-05)
- 0.5 .. 2.5 keV: LnL@80 6.2088e-05 (3.1044e-05), MLP@99 5.9450e-05 (2.9725e-05)
- 0.5 .. 5.0 keV: LnL@80 1.1626e-04 (2.5836e-05), MLP@99 1.3192e-04 (2.9315e-05)
- 0.0 .. 2.5 keV: LnL@80 8.9703e-05 (3.5881e-05), MLP@99 1.4599e-04 (5.8395e-05)
- 4.0 .. 8.0 keV: LnL@80 2.6383e-05 (6.5958e-06), MLP@99 4.0630e-05 (1.0158e-05)
- 0.0 .. 8.0 keV: LnL@80 1.6551e-04 (2.0689e-05), MLP@99 2.5082e-04 (3.1352e-05)
- 2.0 .. 8.0 keV: LnL@80 8.1788e-05 (1.3631e-05), MLP@99 1.0923e-04 (1.8204e-05)
(truncated dump of the 116-row rate DataFrame omitted)
[INFO]: storing plot in /home/basti/Sync/background_rate_only_lnL_0.8_sgd_tanh100_22_04_23_mlp_0.99.pdf [WARNING]: Printing total background time currently only supported for single datasets.
All together:
So clearly we also mainly gain at low energies here. Even at 99% we effectively see no degradation in most regions!
1.36.1. Train MLP with sigmoid on last layer and MSE loss!
Changed the MLP forward to:
proc forward*(net: MLP, x: RawTensor): RawTensor =
  var x = net.hidden.forward(x).tanh()
  x = net.hidden2.forward(x).tanh()
  return net.classifier.forward(x).sigmoid()
i.e. a sigmoid on the output. Then as loss we use mse_loss instead of sigmoid_cross_entropy.
Let's see!
./train_ingrid \
  ~/CastData/data/CalibrationRuns2017_Reco.h5 \
  ~/CastData/data/CalibrationRuns2018_Reco.h5 \
  --back ~/CastData/data/DataRuns2017_Reco.h5 \
  --back ~/CastData/data/DataRuns2018_Reco.h5 \
  --modelOutpath ~/org/resources/nn_devel_mixing/23_04_23_sgd_gauss_diffusion_tanh300_mse_loss/trained_mlp_sgd_gauss_diffusion_tanh300_mse_loss.pt \
  --plotPath ~/Sync/23_04_23_sgd_sim_diffusion_gain_tanh300_mse_loss/ \
  --numHidden 300 \
  --datasets eccentricity \
  --datasets skewnessLongitudinal \
  --datasets skewnessTransverse \
  --datasets kurtosisLongitudinal \
  --datasets kurtosisTransverse \
  --datasets length \
  --datasets width \
  --datasets rmsLongitudinal \
  --datasets rmsTransverse \
  --datasets lengthDivRmsTrans \
  --datasets rotationAngle \
  --datasets fractionInTransverseRms \
  --datasets totalCharge \
  --datasets σT \
  --learningRate 7e-4 \
  --simulatedData
Trained it up to 500k epochs.
First note: as expected due to the sigmoid on the last layer, the output is indeed between 0 and 1, with very strong separation between the two classes.
I also then ran studyMLP for it:
./studyMLP 0.85 0.95 0.99 \
  --model ~/org/resources/nn_devel_mixing/23_04_23_sgd_gauss_diffusion_tanh300_mse_loss/trained_mlp_sgd_gauss_diffusion_tanh300_mse_losscheckpoint_epoch_500000_loss_0.0017_acc_0.9982.pt \
  --plotPath ~/Sync/23_04_23_sgd_sim_diffusion_gain_tanh300_mse_loss \
  --h5out ~/Sync/23_04_23_tanh300_mle_loss.h5
but now that I think about it, I don't know if I actually recompiled both the effective_eff_55fe and likelihood programs for the new activation function. Oops.
Of course, if one uses this network without the sigmoid layer, it will still produce output similar to the previous tanh network (trained via sigmoid cross entropy loss). Rerunning at the moment.
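A related sanity check on why dropping the final sigmoid is workable at all: sigmoid is strictly monotonic, so ranking clusters (and hence any efficiency-based cut) on the raw outputs is equivalent to ranking on the sigmoid outputs; only the numerical cut value changes. A tiny plain-Python illustration (the raw output values are made up):

```python
import math

def sigmoid(z: float) -> float:
    return 1.0 / (1.0 + math.exp(-z))

raw = [-3.2, -0.5, 0.1, 1.7, 4.0]       # hypothetical raw network outputs
probs = [sigmoid(z) for z in raw]        # after the sigmoid layer

# sorting by raw output and by sigmoid output gives the same ordering,
# because sigmoid is strictly increasing
same_order = (sorted(range(len(raw)), key=raw.__getitem__) ==
              sorted(range(len(probs)), key=probs.__getitem__))
print(same_order)  # True
```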
(We really need things like the optimizer, number of layers, activation functions etc. as part of MLPDesc.)
1.36.2. DONE Things still todo from today!
[X]
Need to implement number of layers into MLPDesc and handle loading correct network!
1.37. TODO
IMPORTANT:
- See if adjusting the rms transverse values down to the old ones (~1.0 to 1.1 instead of 1.2 - 1.3) gets us closer to the 80% in case of the e.g. 80% network efficiency!!!
1.38.
Finished the implementation of mapping all parameters of interest for
the model layout and training to the MLPDesc
and loading / applying
the correct thing at runtime.
In order to update any of the existing mlp_desc.h5 files, we need to provide the settings used for each network! At least the new settings, that is.
For example let's update the last tanh
network we trained with a
sigmoid output layer. We do it by also continuing training on it for
another 100k epochs:
./train_ingrid \
  ~/CastData/data/CalibrationRuns2017_Reco.h5 \
  ~/CastData/data/CalibrationRuns2018_Reco.h5 \
  --back ~/CastData/data/DataRuns2017_Reco.h5 \
  --back ~/CastData/data/DataRuns2018_Reco.h5 \
  --modelOutpath ~/org/resources/nn_devel_mixing/23_04_23_sgd_gauss_diffusion_tanh300_mse_loss/trained_mlp_sgd_gauss_diffusion_tanh300_mse_loss.pt \
  --plotPath ~/Sync/23_04_23_sgd_sim_diffusion_gain_tanh300_mse_loss/ \
  --numHidden 300 \
  --numHidden 300 \
  --activationFunction tanh \
  --outputActivation sigmoid \
  --lossFunction MSE \
  --optimizer SGD \
  --learningRate 7e-4 \
  --simulatedData \
  --continueAfterEpoch 500000
As we have the existing H5 file for the MLPDesc we don't need to supply the datasets. But the additional new fields are required to update them in the file.
We implemented that the code now raises an exception if only an old MLPDesc file is found in effective_eff_55fe. In addition, the file name now contains the version number for convenience, and the version is serialized as another field.
Let's check the following TODO from yesterday:
IMPORTANT:
- See if adjusting the rms transverse values down to the old ones (~1.0 to 1.1 instead of 1.2 - 1.3) gets us closer to the 80% in case of the e.g. 80% network efficiency!!!
-> We'll reset the cuts to their old values and then run the above network through effective_eff_55fe:
./effective_eff_55fe \
  ~/CastData/data/CalibrationRuns2017_Reco.h5 \
  ~/CastData/data/CalibrationRuns2018_Reco.h5 \
  --model ~/org/resources/nn_devel_mixing/23_04_23_sgd_gauss_diffusion_tanh300_mse_loss/trained_mlp_sgd_gauss_diffusion_tanh300_mse_loss.pt \
  --ε 0.8 \
  --cdlFile ~/CastData/data/CDL_2019/CDL_2019_Reco.h5 \
  --evaluateFit \
  --plotDatasets \
  --plotPath ~/Sync/run2_run3_23_04_23_tanh300_rmsT_old/
-> The plot generally doesn't look too bad, but it's hard to read because there are 2 escape peak values with negative efficiency! Need to fix that.
We'll look at only run 88, which should show that behavior.
Fake data for 3.0 at run 88 and energy 2.98 keV target Ag-Ag-6kV has a cut value 0.9976992011070251 and effective eff -0.01075268817204301.
That's fishy. Computed from (kept vs total in that dataset):
Number kept: -1 vs 93
Only 93 escape events? And -1 kept? Ohhh! It's because of this line in predictCut:
if pred.len < 100: return (-1, @[])
Removed that line, as it's not very useful nowadays; we still care about the number of kept clusters even when the dataset is small.
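To see why that guard produced the negative value above: the effective efficiency is just the ratio of kept to total clusters, so the -1 sentinel leaks straight into the ratio. A minimal Python sketch of the arithmetic (the actual code is Nim; effective_eff is a hypothetical name):

```python
# Sketch: the old guard in predictCut returned -1 as "number kept" for
# datasets with fewer than 100 clusters. That sentinel then leaks into
# the kept/total ratio instead of signalling "too little data".
def effective_eff(kept: int, total: int) -> float:
    return kept / total

# 93 escape events, sentinel -1 "kept" -> -1/93, the value seen above
print(effective_eff(-1, 93))
```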
Rerunning and storing in:
This looks pretty much like the other case with wider rmsT, no? Let's make sure: change the cuts again and run again.
-> Ok, it does improve the situation. The mean is now rather at 75-76% compared to ~73% before.
What to make of this? The more rms transverse we allow, the more ugly non X-ray events we will have in our data? Or just that we reproduce the ugly events with our fake data?
1.39.
Next steps:
implement computing the effective efficiencies for:
- the given network
- the desired efficiency
These will also be stored in some form of a cache (given that the calculation takes O(15 min)). -> Implemented including caching. Just needs to be called from likelihood now. VetoSettings has the fields required. Should we compute it first or last? -> Compute in initLikelihoodContext? -> Not in the init, due to problems with circular imports. The effective eff fields are now filled in likelihood itself. They are mainly needed for that anyway, so that should be fine.
- fix the application of the septem veto etc. when using an MLP. Use that to decide if a cluster passes instead of logL. -> up next. -> Implemented now!
- automate MLP for likelihood in createAllLikelihoodCombinations. This should be done via a --mlp option which takes a seq[string] of paths to the model files. For each we add an fkMLP flag that will perform a call to likelihood as a replacement for fkLogL.
[X] IMPORTANT: We were still using the old getDiffusion (working with the fit) for any code using readValidDsets and readCdlDset! Means both the diffusion values used in training for the background data as well as anything with prediction based on real data likely had a wrong diffusion value! FIX and check the impact.
1.40.
Fixed the usage of the old getDiffusion in io_helpers. Now using the correct value from the cache table. Let's see the effect of that on the effective efficiency!

./effective_eff_55fe \
  ~/CastData/data/CalibrationRuns2017_Reco.h5 \
  ~/CastData/data/CalibrationRuns2018_Reco.h5 \
  --model ~/org/resources/nn_devel_mixing/23_04_23_sgd_gauss_diffusion_tanh300_mse_loss/trained_mlp_sgd_gauss_diffusion_tanh300_mse_loss.pt \
  --ε 0.8 \
  --cdlFile ~/CastData/data/CDL_2019/CDL_2019_Reco.h5 \
  --evaluateFit \
  --plotDatasets \
  --plotPath ~/Sync/run2_run3_23_04_23_tanh300_correct_diffusion
The result is pretty much unchanged, if not possibly actually a bit worse than before. Yeah, comparing with the old version it's a bit worse even. :/
We've implemented MLP support into createAllLikelihoodCombinations
now. The result is not super pretty, but it should work. The filename
of the MLP is added to the output file name, leading to very long
filenames. Ideally we'd have a better solution there.
Let's give it a test run, but before we do that, let's run a likelihood combining the MLP with FADC, septem and line veto.
Run2:
likelihood \
  -f ~/CastData/data/DataRuns2017_Reco.h5 \
  --h5out /t/run_2_mlp_all_vetoes.h5 \
  --region crGold \
  --cdlYear 2018 \
  --cdlFile ~/CastData/data/CDL_2019/calibration-cdl-2018.h5 \
  --mlp ~/org/resources/nn_devel_mixing/23_04_23_sgd_gauss_diffusion_tanh300_mse_loss/23_04_23_tanh300_mse.pt \
  --nnSignalEff 0.99 \
  --vetoPercentile 0.99 \
  --lineVeto \
  --septemVeto \
  --calibFile ~/CastData/data/CalibrationRuns2017_Reco.h5 \
  --readonly
likelihood \
  -f ~/CastData/data/DataRuns2018_Reco.h5 \
  --h5out /t/run_3_mlp_all_vetoes.h5 \
  --region crGold \
  --cdlYear 2018 \
  --cdlFile ~/CastData/data/CDL_2019/calibration-cdl-2018.h5 \
  --mlp ~/org/resources/nn_devel_mixing/23_04_23_sgd_gauss_diffusion_tanh300_mse_loss/23_04_23_tanh300_mse.pt \
  --nnSignalEff 0.99 \
  --vetoPercentile 0.99 \
  --lineVeto \
  --septemVeto \
  --calibFile ~/CastData/data/CalibrationRuns2018_Reco.h5 \
  --readonly
[ ] Well, issues:
- need to find a way to better deal with the cache table for effective efficiencies! It uses strings in the keys, which we currently don't correctly support. Either change them to fixed length or fix the code for variable length strings as part of compound types.
- when running likelihood our center event index is not assigned correctly for the septem veto! Something is up there with the predicted values or the cut values, such that we never enter the branch that sets centerEvIdx.
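For the cache-table key problem, one of the two options is sketched here in Python (the cache table itself is written from Nim; to_fixed is a hypothetical helper): pad the string keys to a fixed byte length so every compound record has the same size.

```python
# Pad/truncate a string to a fixed byte length so it can live in a
# fixed-size member of an HDF5 compound type. The SHA1 hash used in
# the cache-table key is always 40 hex characters, so 40 bytes suffice.
def to_fixed(s: str, size: int = 40) -> bytes:
    raw = s.encode("utf-8")[:size]
    return raw.ljust(size, b"\x00")  # NUL-pad up to the fixed size

key = to_fixed("D7DBC196401F3CAC564FBE899306343BEB5022BA")
assert len(key) == 40
```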
1.41.
[X] fixed the centerEvIdx bug: the problem was our comparison of the NN prediction value to the cut value interpolator. I missed that the LnL and NN values are inverted!
Running the program to create all likelihood combinations:
./createAllLikelihoodCombinations \
  --f2017 ~/CastData/data/DataRuns2017_Reco.h5 \
  --f2018 ~/CastData/data/DataRuns2018_Reco.h5 \
  --c2017 ~/CastData/data/CalibrationRuns2017_Reco.h5 \
  --c2018 ~/CastData/data/CalibrationRuns2018_Reco.h5 \
  --regions crAll --regions crGold \
  --signalEfficiency 0.8 --signalEfficiency 0.85 --signalEfficiency 0.9 --signalEfficiency 0.95 \
  --fadcVetoPercentile 0.99 \
  --vetoSets "{fkMLP, fkFadc, fkScinti, fkSeptem, fkLineVeto, fkExclusiveLineVeto}" \
  --mlpPath ~/org/resources/nn_devel_mixing/23_04_23_sgd_gauss_diffusion_tanh300_mse_loss/trained_mlp_sgd_gauss_diffusion_tanh300_mse_loss.pt \
  --out ~/org/resources/lhood_limits_23_04_23_mlp/ \
  --cdlFile ~/CastData/data/CDL_2019/calibration-cdl-2018.h5 \
  --multiprocessing
So we output the files to: ~/org/resources/lhood_limits_23_04_23_mlp/
1.42.
Running the code last night more or less worked, but in some cases there was a segfault running the code.
I spent most of today debugging this issue. gdb, while useful, was still pretty confusing. valgrind is still running…
It turns out that:
- confusion and bad stack traces are due to injected destructor calls; those break the line information
- the issue is a crash in the destructor of a H5Group
- it is only triggered with our try/except code. We've changed that to if/else now. Saner anyway.
With this fixed, we can rerun the likelihood combinations. Both using the sigmoid output layer MLP from
and if possible one of the linear output ones. For now start with one of them.
[ ] Note: when running with another MLP we still need to regenerate the MLPDesc H5 file!
We'll rerun the command from yesterday. Still, can we speed up the process somehow?
I think we should introduce caching also for the CutValueInterpolator.
1.43.
Finally fixed all the HDF5 data writing stuff. Will write more about that tomorrow.
Will test likelihood producing a sensible file now:
First a run to generate the file. Will take a short look, then read it
in a separate script to see what it contains. Then finally rerun the
same command to see if the fake event generation is then skipped.
likelihood \
  -f /home/basti/CastData/data/DataRuns2018_Reco.h5 \
  --h5out /t/blabla_broken.h5 \
  --region=crGold \
  --cdlYear=2018 \
  --scintiveto \
  --fadcveto \
  --septemveto \
  --lineveto \
  --mlp /home/basti/org/resources/nn_devel_mixing/23_04_23_sgd_gauss_diffusion_tanh300_mse_loss/trained_mlp_sgd_gauss_diffusion_tanh300_mse_loss.pt \
  --cdlFile=/home/basti/CastData/data/CDL_2019/calibration-cdl-2018.h5 \
  --readOnly \
  --calibFile=/home/basti/CastData/data/CalibrationRuns2018_Reco.h5 \
  --vetoPercentile=0.99 \
  --nnSignalEff=0.95
[X] segfault due to string
[ ] HDF5-DIAG: Error detected in HDF5 (1.10.5) thread 0:
  #000: H5Tcompound.c line 354 in H5Tinsert(): unable to insert member
    major: Datatype
    minor: Unable to insert object
  #001: H5Tcompound.c line 446 in H5T__insert(): member extends past end of compound type
    major: Datatype
    minor: Unable to insert object
-> partially fixed it, but:
-> We have alignment issues. The H5 library also seems to align data in some cases when necessary. Our code currently assumes there is no such thing.
So I guess we need to go back to an approach that does actually generate some helper type or what?
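The alignment problem can be illustrated with Python's struct module (purely illustrative; the real code deals with HDF5 compound types from Nim). On a typical 64-bit platform an (int32, float64) pair is padded to 16 bytes when naturally aligned but only 12 bytes when packed, so reading one layout as the other shifts every member after the padding:

```python
import struct

# '@' = native alignment (padding inserted), '=' = packed layout.
# For an int32 followed by a float64 the aligned layout inserts 4 pad
# bytes so the double starts on an 8-byte boundary.
aligned = struct.calcsize("@id")  # typically 16 on 64-bit platforms
packed  = struct.calcsize("=id")  # always 12
print(aligned, packed)
```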
-> I think I got it all working now. Merged the two approaches of
copying data to a Buffer
and
Ok, the caching seems to work I think. And the generated file looks good.
Let's read the data and see what it contains:
import nimhdf5, tables
const CacheTabFile = "/dev/shm/cacheTab_runLocalCutVals.h5"
type
  TabKey = (int, string, float)
    # ^-- run number
    # ^-- sha1 hash of the NN model `.pt` file
    # ^-- target signal efficiency
  TabVal = seq[(string, float)]
    # ^-- CDL target
    # ^-- MLP cut value
  CacheTabTyp = Table[TabKey, TabVal]
var tab = deserializeH5[CacheTabTyp](CacheTabFile)
for k, v in tab:
  echo "Key: ", k, " = ", v
Key: (295, "D7DBC196401F3CAC564FBE899306343BEB5022BA", 0.95) = @[("Ag-Ag-6kV", 0.974539938569069), ("Al-Al-4kV", 0.9415906369686127), ("C-EPIC-0.6kV", 0.7813522785902023), ("Cu-EPIC-0.9kV", 0.8288368076086045), ("Cu-EPIC-2kV", 0.8751996099948883), ("Cu-Ni-15kV", 0.9722824782133103), ("Mn-Cr-12kV", 0.9686738938093186), ("Ti-Ti-9kV", 0.9641254603862762)] Key: (270, "D7DBC196401F3CAC564FBE899306343BEB5022BA", 0.95) = @[("Ag-Ag-6kV", 0.9771067887544632), ("Al-Al-4kV", 0.9460293263196945), ("C-EPIC-0.6kV", 0.7850535094738007), ("Cu-EPIC-0.9kV", 0.8130916118621826), ("Cu-EPIC-2kV", 0.8937846541404724), ("Cu-Ni-15kV", 0.9715777307748794), ("Mn-Cr-12kV", 0.9703928083181381), ("Ti-Ti-9kV", 0.9622162997722625)] Key: (285, "D7DBC196401F3CAC564FBE899306343BEB5022BA", 0.95) = @[("Ag-Ag-6kV", 0.9717331320047379), ("Al-Al-4kV", 0.9252435654401779), ("C-EPIC-0.6kV", 0.7428321331739426), ("Cu-EPIC-0.9kV", 0.7788086831569672), ("Cu-EPIC-2kV", 0.8636017471551896), ("Cu-Ni-15kV", 0.9687924206256866), ("Mn-Cr-12kV", 0.9663917392492294), ("Ti-Ti-9kV", 0.9504937410354615)] Key: (267, "D7DBC196401F3CAC564FBE899306343BEB5022BA", 0.95) = @[("Ag-Ag-6kV", 0.975317457318306), ("Al-Al-4kV", 0.9462290376424789), ("C-EPIC-0.6kV", 0.776226544380188), ("Cu-EPIC-0.9kV", 0.834472405910492), ("Cu-EPIC-2kV", 0.8766408234834671), ("Cu-Ni-15kV", 0.9714102745056152), ("Mn-Cr-12kV", 0.9717013716697693), ("Ti-Ti-9kV", 0.9653892040252685)] Key: (240, "D7DBC196401F3CAC564FBE899306343BEB5022BA", 0.95) = @[("Ag-Ag-6kV", 0.9765782624483108), ("Al-Al-4kV", 0.9469905078411103), ("C-EPIC-0.6kV", 0.8053286731243133), ("Cu-EPIC-0.9kV", 0.8526969790458679), ("Cu-EPIC-2kV", 0.8962102770805359), ("Cu-Ni-15kV", 0.9759411454200745), ("Mn-Cr-12kV", 0.9700609385967255), ("Ti-Ti-9kV", 0.9632380992174149)] Key: (244, "D7DBC196401F3CAC564FBE899306343BEB5022BA", 0.95) = @[("Ag-Ag-6kV", 0.9737197607755661), ("Al-Al-4kV", 0.936311411857605), ("C-EPIC-0.6kV", 0.7668724238872529), ("Cu-EPIC-0.9kV", 0.8007214874029159), 
("Cu-EPIC-2kV", 0.8769152045249939), ("Cu-Ni-15kV", 0.9682861983776092), ("Mn-Cr-12kV", 0.9659816890954971), ("Ti-Ti-9kV", 0.9604099780321121)] Key: (287, "D7DBC196401F3CAC564FBE899306343BEB5022BA", 0.95) = @[("Ag-Ag-6kV", 0.9702932089567184), ("Al-Al-4kV", 0.9345487594604492), ("C-EPIC-0.6kV", 0.7561751186847686), ("Cu-EPIC-0.9kV", 0.7987294286489487), ("Cu-EPIC-2kV", 0.8735345602035522), ("Cu-Ni-15kV", 0.9701228439807892), ("Mn-Cr-12kV", 0.9675930917263031), ("Ti-Ti-9kV", 0.9575580269098282)] Key: (278, "D7DBC196401F3CAC564FBE899306343BEB5022BA", 0.95) = @[("Ag-Ag-6kV", 0.9720843970775604), ("Al-Al-4kV", 0.9305856913328171), ("C-EPIC-0.6kV", 0.741540789604187), ("Cu-EPIC-0.9kV", 0.7682088732719422), ("Cu-EPIC-2kV", 0.864213228225708), ("Cu-Ni-15kV", 0.9648653626441955), ("Mn-Cr-12kV", 0.9665578484535218), ("Ti-Ti-9kV", 0.9552424371242523)] Key: (250, "D7DBC196401F3CAC564FBE899306343BEB5022BA", 0.95) = @[("Ag-Ag-6kV", 0.9766805320978165), ("Al-Al-4kV", 0.9418972045183182), ("C-EPIC-0.6kV", 0.76584292948246), ("Cu-EPIC-0.9kV", 0.8238443195819855), ("Cu-EPIC-2kV", 0.8775055557489395), ("Cu-Ni-15kV", 0.9738869041204452), ("Mn-Cr-12kV", 0.9712955445051193), ("Ti-Ti-9kV", 0.9667880535125732)] Key: (283, "D7DBC196401F3CAC564FBE899306343BEB5022BA", 0.95) = @[("Ag-Ag-6kV", 0.9675714403390885), ("Al-Al-4kV", 0.9285876244306565), ("C-EPIC-0.6kV", 0.71473089158535), ("Cu-EPIC-0.9kV", 0.7696082562208175), ("Cu-EPIC-2kV", 0.8377785980701447), ("Cu-Ni-15kV", 0.9635965615510941), ("Mn-Cr-12kV", 0.9641305923461914), ("Ti-Ti-9kV", 0.9575453609228134)] Key: (301, "D7DBC196401F3CAC564FBE899306343BEB5022BA", 0.95) = @[("Ag-Ag-6kV", 0.9705011487007141), ("Al-Al-4kV", 0.9279218137264251), ("C-EPIC-0.6kV", 0.7229891419410706), ("Cu-EPIC-0.9kV", 0.793598598241806), ("Cu-EPIC-2kV", 0.8582320868968963), ("Cu-Ni-15kV", 0.9688272416591645), ("Mn-Cr-12kV", 0.9668716788291931), ("Ti-Ti-9kV", 0.9546888172626495)] Key: (274, "D7DBC196401F3CAC564FBE899306343BEB5022BA", 0.95) = @[("Ag-Ag-6kV", 
0.974557614326477), ("Al-Al-4kV", 0.9371200978755951), ("C-EPIC-0.6kV", 0.7444291532039642), ("Cu-EPIC-0.9kV", 0.7895265400409699), ("Cu-EPIC-2kV", 0.8598116517066956), ("Cu-Ni-15kV", 0.9712087035179138), ("Mn-Cr-12kV", 0.9688791006803512), ("Ti-Ti-9kV", 0.9589674681425094)] Key: (242, "D7DBC196401F3CAC564FBE899306343BEB5022BA", 0.95) = @[("Ag-Ag-6kV", 0.9761280864477158), ("Al-Al-4kV", 0.9446888864040375), ("C-EPIC-0.6kV", 0.766591414809227), ("Cu-EPIC-0.9kV", 0.8117899149656296), ("Cu-EPIC-2kV", 0.8900630325078964), ("Cu-Ni-15kV", 0.971353754401207), ("Mn-Cr-12kV", 0.9718088060617447), ("Ti-Ti-9kV", 0.9645727574825287)] Key: (306, "D7DBC196401F3CAC564FBE899306343BEB5022BA", 0.95) = @[("Ag-Ag-6kV", 0.9756848812103271), ("Al-Al-4kV", 0.9455605119466781), ("C-EPIC-0.6kV", 0.7938183635473252), ("Cu-EPIC-0.9kV", 0.8287457168102265), ("Cu-EPIC-2kV", 0.8792453199625015), ("Cu-Ni-15kV", 0.9696165889501571), ("Mn-Cr-12kV", 0.972235518693924), ("Ti-Ti-9kV", 0.9627663731575012)] Key: (303, "D7DBC196401F3CAC564FBE899306343BEB5022BA", 0.95) = @[("Ag-Ag-6kV", 0.9724529892206192), ("Al-Al-4kV", 0.9305596590042114), ("C-EPIC-0.6kV", 0.7230549484491349), ("Cu-EPIC-0.9kV", 0.7953008472919464), ("Cu-EPIC-2kV", 0.8613291561603547), ("Cu-Ni-15kV", 0.9646958172321319), ("Mn-Cr-12kV", 0.9663623839616775), ("Ti-Ti-9kV", 0.9565762877464294)] Key: (291, "D7DBC196401F3CAC564FBE899306343BEB5022BA", 0.95) = @[("Ag-Ag-6kV", 0.9746044754981995), ("Al-Al-4kV", 0.945956015586853), ("C-EPIC-0.6kV", 0.7661843031644822), ("Cu-EPIC-0.9kV", 0.8199316382408142), ("Cu-EPIC-2kV", 0.8820369154214859), ("Cu-Ni-15kV", 0.972326734662056), ("Mn-Cr-12kV", 0.9720319092273713), ("Ti-Ti-9kV", 0.9641989678144455)] Key: (281, "D7DBC196401F3CAC564FBE899306343BEB5022BA", 0.95) = @[("Ag-Ag-6kV", 0.9678589403629303), ("Al-Al-4kV", 0.9328784346580505), ("C-EPIC-0.6kV", 0.7547005414962769), ("Cu-EPIC-0.9kV", 0.7789339125156403), ("Cu-EPIC-2kV", 0.8745017945766449), ("Cu-Ni-15kV", 0.9656407535076141), ("Mn-Cr-12kV", 
0.9641033113002777), ("Ti-Ti-9kV", 0.9552679359912872)] Key: (276, "D7DBC196401F3CAC564FBE899306343BEB5022BA", 0.95) = @[("Ag-Ag-6kV", 0.9702300161123276), ("Al-Al-4kV", 0.9325366169214249), ("C-EPIC-0.6kV", 0.7419947564601898), ("Cu-EPIC-0.9kV", 0.7747547417879105), ("Cu-EPIC-2kV", 0.8381759762763977), ("Cu-Ni-15kV", 0.9623401463031769), ("Mn-Cr-12kV", 0.9613375902175904), ("Ti-Ti-9kV", 0.9547658741474152)] Key: (279, "D7DBC196401F3CAC564FBE899306343BEB5022BA", 0.95) = @[("Ag-Ag-6kV", 0.9683325439691544), ("Al-Al-4kV", 0.9248780459165573), ("C-EPIC-0.6kV", 0.7393383264541626), ("Cu-EPIC-0.9kV", 0.7804951041936874), ("Cu-EPIC-2kV", 0.8707629531621933), ("Cu-Ni-15kV", 0.9666939914226532), ("Mn-Cr-12kV", 0.9646541595458984), ("Ti-Ti-9kV", 0.9589365422725677)] Key: (268, "D7DBC196401F3CAC564FBE899306343BEB5022BA", 0.95) = @[("Ag-Ag-6kV", 0.9719461470842361), ("Al-Al-4kV", 0.9368112564086915), ("C-EPIC-0.6kV", 0.7573535948991775), ("Cu-EPIC-0.9kV", 0.8275747984647751), ("Cu-EPIC-2kV", 0.8748720288276672), ("Cu-Ni-15kV", 0.9710170537233352), ("Mn-Cr-12kV", 0.9695031344890594), ("Ti-Ti-9kV", 0.9605758100748062)] Key: (258, "D7DBC196401F3CAC564FBE899306343BEB5022BA", 0.95) = @[("Ag-Ag-6kV", 0.9745412111282349), ("Al-Al-4kV", 0.9339351028203964), ("C-EPIC-0.6kV", 0.7520094603300095), ("Cu-EPIC-0.9kV", 0.8004794657230377), ("Cu-EPIC-2kV", 0.8735634952783584), ("Cu-Ni-15kV", 0.9667491674423218), ("Mn-Cr-12kV", 0.9659819602966309), ("Ti-Ti-9kV", 0.9595000624656678)] Key: (248, "D7DBC196401F3CAC564FBE899306343BEB5022BA", 0.95) = @[("Ag-Ag-6kV", 0.977547001838684), ("Al-Al-4kV", 0.943942391872406), ("C-EPIC-0.6kV", 0.7833251833915711), ("Cu-EPIC-0.9kV", 0.8251269578933715), ("Cu-EPIC-2kV", 0.8876170873641968), ("Cu-Ni-15kV", 0.9738454699516297), ("Mn-Cr-12kV", 0.9718320488929748), ("Ti-Ti-9kV", 0.9669605851173401)] Key: (246, "D7DBC196401F3CAC564FBE899306343BEB5022BA", 0.95) = @[("Ag-Ag-6kV", 0.9743694633245468), ("Al-Al-4kV", 0.9456877648830414), ("C-EPIC-0.6kV", 
0.7678399056196212), ("Cu-EPIC-0.9kV", 0.8119639933109284), ("Cu-EPIC-2kV", 0.8899865686893463), ("Cu-Ni-15kV", 0.9739743441343307), ("Mn-Cr-12kV", 0.9730724036693573), ("Ti-Ti-9kV", 0.966443908214569)] Key: (254, "D7DBC196401F3CAC564FBE899306343BEB5022BA", 0.95) = @[("Ag-Ag-6kV", 0.9705976217985153), ("Al-Al-4kV", 0.9223994761705399), ("C-EPIC-0.6kV", 0.7517837852239608), ("Cu-EPIC-0.9kV", 0.7894137501716614), ("Cu-EPIC-2kV", 0.8740812391042709), ("Cu-Ni-15kV", 0.9649749875068665), ("Mn-Cr-12kV", 0.9622029483318328), ("Ti-Ti-9kV", 0.9560768663883209)] Key: (299, "D7DBC196401F3CAC564FBE899306343BEB5022BA", 0.95) = @[("Ag-Ag-6kV", 0.9735205113887787), ("Al-Al-4kV", 0.9309952080249786), ("C-EPIC-0.6kV", 0.7400766223669052), ("Cu-EPIC-0.9kV", 0.7843391090631485), ("Cu-EPIC-2kV", 0.8648605406284332), ("Cu-Ni-15kV", 0.967012819647789), ("Mn-Cr-12kV", 0.9662521809339524), ("Ti-Ti-9kV", 0.9518016576766968)] Key: (256, "D7DBC196401F3CAC564FBE899306343BEB5022BA", 0.95) = @[("Ag-Ag-6kV", 0.9687688469886779), ("Al-Al-4kV", 0.9329577207565307), ("C-EPIC-0.6kV", 0.7124274164438248), ("Cu-EPIC-0.9kV", 0.7790258109569549), ("Cu-EPIC-2kV", 0.8657959043979645), ("Cu-Ni-15kV", 0.9670290321111679), ("Mn-Cr-12kV", 0.9633038669824601), ("Ti-Ti-9kV", 0.9567578852176666)] Key: (293, "D7DBC196401F3CAC564FBE899306343BEB5022BA", 0.95) = @[("Ag-Ag-6kV", 0.9724262475967407), ("Al-Al-4kV", 0.9334424823522568), ("C-EPIC-0.6kV", 0.7237139195203781), ("Cu-EPIC-0.9kV", 0.8003135979175567), ("Cu-EPIC-2kV", 0.8604774057865143), ("Cu-Ni-15kV", 0.9647800117731095), ("Mn-Cr-12kV", 0.9671240687370301), ("Ti-Ti-9kV", 0.9593446969985961)] Key: (261, "D7DBC196401F3CAC564FBE899306343BEB5022BA", 0.95) = @[("Ag-Ag-6kV", 0.971569812297821), ("Al-Al-4kV", 0.9378552496433258), ("C-EPIC-0.6kV", 0.7479894399642945), ("Cu-EPIC-0.9kV", 0.8020979523658752), ("Cu-EPIC-2kV", 0.8605112731456757), ("Cu-Ni-15kV", 0.9698976039886474), ("Mn-Cr-12kV", 0.9648890495300293), ("Ti-Ti-9kV", 0.9587083220481872)] Key: (289, 
"D7DBC196401F3CAC564FBE899306343BEB5022BA", 0.95) = @[("Ag-Ag-6kV", 0.9720789223909378), ("Al-Al-4kV", 0.9372004061937332), ("C-EPIC-0.6kV", 0.7466429948806763), ("Cu-EPIC-0.9kV", 0.7778348356485367), ("Cu-EPIC-2kV", 0.8603413850069046), ("Cu-Ni-15kV", 0.967164334654808), ("Mn-Cr-12kV", 0.9689937591552734), ("Ti-Ti-9kV", 0.9565973937511444)] Key: (298, "D7DBC196401F3CAC564FBE899306343BEB5022BA", 0.95) = @[("Ag-Ag-6kV", 0.9661869823932647), ("Al-Al-4kV", 0.9258407413959503), ("C-EPIC-0.6kV", 0.7028941214084625), ("Cu-EPIC-0.9kV", 0.7626788705587387), ("Cu-EPIC-2kV", 0.8486940711736679), ("Cu-Ni-15kV", 0.9655997604131699), ("Mn-Cr-12kV", 0.9632629603147507), ("Ti-Ti-9kV", 0.9513149082660675)] Key: (265, "D7DBC196401F3CAC564FBE899306343BEB5022BA", 0.95) = @[("Ag-Ag-6kV", 0.9717926532030106), ("Al-Al-4kV", 0.9275272369384766), ("C-EPIC-0.6kV", 0.7334349215030671), ("Cu-EPIC-0.9kV", 0.7848327666521072), ("Cu-EPIC-2kV", 0.879998528957367), ("Cu-Ni-15kV", 0.9686383992433548), ("Mn-Cr-12kV", 0.9665170550346375), ("Ti-Ti-9kV", 0.9604606479406357)] Key: (272, "D7DBC196401F3CAC564FBE899306343BEB5022BA", 0.95) = @[("Ag-Ag-6kV", 0.9744620800018311), ("Al-Al-4kV", 0.9406907320022583), ("C-EPIC-0.6kV", 0.7583581209182739), ("Cu-EPIC-0.9kV", 0.803549861907959), ("Cu-EPIC-2kV", 0.8741284489631653), ("Cu-Ni-15kV", 0.9701647937297821), ("Mn-Cr-12kV", 0.9710357129573822), ("Ti-Ti-9kV", 0.9592038929462433)] Key: (297, "D7DBC196401F3CAC564FBE899306343BEB5022BA", 0.95) = @[("Ag-Ag-6kV", 0.970538979768753), ("Al-Al-4kV", 0.9283351331949234), ("C-EPIC-0.6kV", 0.7336910545825959), ("Cu-EPIC-0.9kV", 0.7939026236534119), ("Cu-EPIC-2kV", 0.8525126844644546), ("Cu-Ni-15kV", 0.9671412736177445), ("Mn-Cr-12kV", 0.9655984342098236), ("Ti-Ti-9kV", 0.9555086642503738)] Key: (263, "D7DBC196401F3CAC564FBE899306343BEB5022BA", 0.95) = @[("Ag-Ag-6kV", 0.9716018617153168), ("Al-Al-4kV", 0.9364291220903397), ("C-EPIC-0.6kV", 0.7137390047311782), ("Cu-EPIC-0.9kV", 0.7985885977745056), ("Cu-EPIC-2kV", 
0.8676451265811921), ("Cu-Ni-15kV", 0.9684803396463394), ("Mn-Cr-12kV", 0.9639465093612671), ("Ti-Ti-9kV", 0.9574855715036392)]
And finally rerun all likelihood combinations:
./createAllLikelihoodCombinations \
  --f2017 ~/CastData/data/DataRuns2017_Reco.h5 \
  --f2018 ~/CastData/data/DataRuns2018_Reco.h5 \
  --c2017 ~/CastData/data/CalibrationRuns2017_Reco.h5 \
  --c2018 ~/CastData/data/CalibrationRuns2018_Reco.h5 \
  --regions crAll --regions crGold \
  --signalEfficiency 0.8 --signalEfficiency 0.85 --signalEfficiency 0.9 --signalEfficiency 0.95 \
  --fadcVetoPercentile 0.99 \
  --vetoSets "{fkMLP, fkFadc, fkScinti, fkSeptem, fkLineVeto, fkExclusiveLineVeto}" \
  --mlpPath ~/org/resources/nn_devel_mixing/23_04_23_sgd_gauss_diffusion_tanh300_mse_loss/trained_mlp_sgd_gauss_diffusion_tanh300_mse_loss.pt \
  --out ~/org/resources/lhood_limits_23_04_23_mlp/ \
  --cdlFile ~/CastData/data/CDL_2019/calibration-cdl-2018.h5 \
  --multiprocessing
Running the command was somewhat of a failure:
- the default of 8 jobs per process is not enough. We use way more memory than expected. Even with only 4 jobs, there is a risk of one or more being killed!
- sometimes (at least in crAll combinations) we still get the "broken event" quit error. In this case however we can spy the following:

Cluster: 1 of chip 3 has val : 0.9999758005142212 copmare: 0.9467523097991943
Cluster: 0 of chip 5 has val : 0.02295641601085663 copmare: 0.9783756822347641
Cluster: 1 of chip 3 has val : 0.9999998807907104 copmare: 0.9467523097991943
Cluster: 0 of chip 4 has val : 5.220065759203862e-06 copmare: 0.99133480489254
Cluster: 1 of chip 3 has val : 0.9999876022338867 copmare: 0.9467523097991943
Cluster: 0 of chip 3 has val : nan copmare: 0.9545457273721695
Broken event!
DataFrame with 3 columns and 1 rows:
Idx   eventIndex   eventNumber   chipNumber
dtype:   int   int   float
0   10750   35811   3
-> Note the nan value for cluster 0 in the last line before the DF print! This means the MLP predicted a value of nan for a cluster!
-> We need to debug the likelihood call for one of the cases and isolate the events for that.
[X] get a combination that causes this:
    lhood_c18_R2_crAll_sEff_0.85_scinti_fadc_septem_line_mlp_trained_mlp_sgd_gauss_diffusion_tanh300_mse_loss_vQ_0.99.log
[X] extract which run is at fault -> Run 92 in this case
[X] Does nan appear more often? -> Yes! O(5) times in the same log file
[ ] Rerun the equivalent command for testing and try to debug the cause. -> Reconstruct the command:
likelihood \
  -f /home/basti/CastData/data/DataRuns2017_Reco.h5 \
  --h5out /t/debug_broken_event.h5 \
  --region=crAll \
  --cdlYear=2018 \
  --scintiveto \
  --fadcveto \
  --septemveto \
  --lineveto \
  --mlp /home/basti/org/resources/nn_devel_mixing/23_04_23_sgd_gauss_diffusion_tanh300_mse_loss/trained_mlp_sgd_gauss_diffusion_tanh300_mse_loss.pt \
  --cdlFile=/home/basti/CastData/data/CDL_2019/calibration-cdl-2018.h5 \
  --readOnly \
  --calibFile=/home/basti/CastData/data/CalibrationRuns2017_Reco.h5 \
  --vetoPercentile=0.99 \
  --nnSignalEff=0.85
[X] Modify createAllLikelihoodCombinations such that each job does not stop on a failure, but retries failed jobs -> Done
[ ] I'll restart the create likelihood combinations command now that we've implemented restart on failure & disabled failing on NaN septem events (i.e. "no cluster found") -> FIX ME
[X] We've also implemented a version of toH5 and deserializeH5 that retries the writing / reading if the file is locked. This should make it safer to run multiple instances in parallel which might try to access the same file.
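The retry logic amounts to a small loop around the H5 access; a Python sketch of the idea (with_retries is a hypothetical helper mirroring the Nim toH5 / deserializeH5 wrappers, not their actual API):

```python
import time

def with_retries(op, attempts: int = 5, delay: float = 0.1):
    # Retry an operation that may fail while another process holds
    # the HDF5 file lock; re-raise once the attempts are exhausted.
    for i in range(attempts):
        try:
            return op()
        except OSError:
            if i == attempts - 1:
                raise
            time.sleep(delay)
```

In the real implementation the caught exception would be whatever nimhdf5 raises when the file lock is held.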
1.44.
[X] Look into the origin of events with a NaN value! -> Likely they just pass by accident due to having NaN? That would be "good"
[X] look at what such clusters look like
Having inserted printing the event number for run 92 with the NaN event, the output is:
Broken event!
DataFrame with 3 columns and 1 rows:
Idx   eventIndex   eventNumber   chipNumber
dtype:   int   int   float
0   10750   35811   3
The event is: @[("eventNumber", (kind: VInt, num: 35811))]
Let's plot that event:
plotData \
  --h5file ~/CastData/data/DataRuns2017_Reco.h5 \
  --runType rtBackground \
  --config ~/CastData/ExternCode/TimepixAnalysis/Plotting/karaPlot/config.toml \
  --runs 92 \
  --eventDisplay \
  --septemboard \
  --events 35811
The event is:
Well, even some of the properties are NaN! No wonder it yields a NaN result for the NN prediction. We have to be careful not to accidentally consider them "passing" though! (Which seems to be the case currently)
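The reason a NaN prediction can slip through is standard IEEE 754 behavior: every ordered comparison involving NaN is false, so a NaN cluster is neither "below the cut" nor "above" it, and any branch that treats "not rejected" as "passed" lets it through. In Python terms (veto is a hypothetical sketch, not the Nim code):

```python
import math

nan = float("nan")
# All ordered comparisons with NaN are False:
assert not (nan < 0.95) and not (nan > 0.95) and nan != nan

# If the veto only rejects on `value < cut`, NaN is never rejected.
# An explicit NaN check is required to always veto such clusters:
def veto(value: float, cut: float) -> bool:
    return math.isnan(value) or value < cut
```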
We've changed the code to always veto NaN events. And as a result the broken event should never happen again, which is why we reintroduced the quit condition. Let's see if the command from yesterday now runs correctly.
Well, great. The process was killed due to memory usage of running 5 jobs concurrently. Guess I'll just wait for all the other jobs to finish now. We're ~half way done or so.
Small problem: in the limit calculation, given that our effective efficiency is Run-2/3 specific, we need to think about how to handle that. Can we use different efficiencies for different parts of the data? Certainly, but it makes everything more complicated. -> Better to compute an average of the two different run periods.
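Such an average could simply weight each period's effective efficiency by its live time; a sketch with made-up numbers (mean_eff and all values here are hypothetical, not results from the analysis):

```python
def mean_eff(effs, times):
    # Live-time weighted mean of per-period efficiencies.
    return sum(e * t for e, t in zip(effs, times)) / sum(times)

# hypothetical Run-2 / Run-3 efficiencies and live times (hours):
print(mean_eff([0.75, 0.78], [1040.0, 1260.0]))
```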
Sigh, it seems like even 2 jobs at the same time can use too much memory!
(oh, what a dummy: I didn't even add the 99% case to the createLogL call!)
Therefore:
For now let's try to see what we can get with exactly one setup:
- 99% MLP
- all vetoes except septem veto
./createAllLikelihoodCombinations \
  --f2017 ~/CastData/data/DataRuns2017_Reco.h5 \
  --f2018 ~/CastData/data/DataRuns2018_Reco.h5 \
  --c2017 ~/CastData/data/CalibrationRuns2017_Reco.h5 \
  --c2018 ~/CastData/data/CalibrationRuns2018_Reco.h5 \
  --regions crAll --regions crGold \
  --signalEfficiency 0.99 \
  --fadcVetoPercentile 0.99 \
  --vetoSets "{fkMLP, +fkFadc, +fkScinti, fkLineVeto}" \
  --mlpPath ~/org/resources/nn_devel_mixing/23_04_23_sgd_gauss_diffusion_tanh300_mse_loss/trained_mlp_sgd_gauss_diffusion_tanh300_mse_loss.pt \
  --out ~/org/resources/lhood_limits_23_04_23_mlp_sEff_0.99/ \
  --cdlFile ~/CastData/data/CDL_2019/calibration-cdl-2018.h5
  #--multiprocessing
We're running without multiprocessing now. :( At least for the crAll cases. We'll do crGold with 2 jobs first.
Gold is finished and now running crAll with a single job. :/
So: Things we want for the meeting:
- background rate in gold region
- background clusters of MLP@99
- background clusters of MLP@99 + vetoes
- expected limits for MLP@99 & MLP@99+vetoes
1.44.1. Background rate in gold region
We will compare:
- lnL@80
- lnL@80 + vetoes
- MLP@99
- MLP@99 + vetoes
plotBackgroundRate \
  ~/org/resources/lhood_limits_23_04_23_mlp_sEff_0.99/lhood_c18_R2_crGold_sEff_0.99_mlp_trained_mlp_sgd_gauss_diffusion_tanh300_mse_loss.h5 \
  ~/org/resources/lhood_limits_23_04_23_mlp_sEff_0.99/lhood_c18_R3_crGold_sEff_0.99_mlp_trained_mlp_sgd_gauss_diffusion_tanh300_mse_loss.h5 \
  ~/org/resources/lhood_limits_23_04_23_mlp_sEff_0.99/lhood_c18_R2_crGold_sEff_0.99_scinti_fadc_line_mlp_trained_mlp_sgd_gauss_diffusion_tanh300_mse_loss_vQ_0.99.h5 \
  ~/org/resources/lhood_limits_23_04_23_mlp_sEff_0.99/lhood_c18_R3_crGold_sEff_0.99_scinti_fadc_line_mlp_trained_mlp_sgd_gauss_diffusion_tanh300_mse_loss_vQ_0.99.h5 \
  ~/org/resources/lhood_limits_automation_correct_duration/likelihood_cdl2018_Run2_crGold.h5 \
  ~/org/resources/lhood_limits_automation_correct_duration/likelihood_cdl2018_Run3_crGold.h5 \
  ~/org/resources/lhood_limits_automation_correct_duration/likelihood_cdl2018_Run2_crGold_scinti_fadc_line_vetoPercentile_0.99.h5 \
  ~/org/resources/lhood_limits_automation_correct_duration/likelihood_cdl2018_Run3_crGold_scinti_fadc_line_vetoPercentile_0.99.h5 \
  --names "MLP@99" --names "MLP@99" \
  --names "MLP@99+V" --names "MLP@99+V" \
  --names "LnL@80" --names "LnL@80" \
  --names "LnL@80+V" --names "LnL@80+V" \
  --centerChip 3 \
  --title "Background rate CAST, LnL@80%, SGD tanh300 MLE MLP@99% + vetoes" \
  --showNumClusters \
  --showTotalTime \
  --topMargin 1.5 \
  --energyDset energyFromCharge \
  --outfile background_rate_run2_3_mlp_0.99_plus_vetoes.pdf \
  --outpath ~/Sync/ \
  --quiet
1.45.
UPDATE: Modified the train_ingrid code slightly today, so that our
previous hardcoded change replacing subsetPerRun by 6 * subsetPerRun
for background data was taken out. That's why I modified the command
below to accommodate that, by adding the --subsetPerRun 6000 argument!
- the fact that mcmc_limit gets stuck on toH5 is because of the KDtree, which is massive with O(350 k) entries!
[X] add filter option (energyMin and energyMax) -> shows that in Run-2 alone without any energy filtering & without vetoes there are almost 200k clusters! -> cutting 0.2 < E < 12 and using vetoes cuts it in half, O(40k) compared to O(75k) when 0 < E < 12.
[X] Check data selection for background training sample -> OUCH: We still filter to THE GOLD REGION in the background sample. This likely explains why the background is so good there, but horrible towards the edges!
[X] write a custom serializer for KDTree to avoid an extremely nested H5 data structure -> We now only serialize the actual data of the tree. The tree can be rebuilt from that after all.
[X] implement filtering on energy < 200 eV for mcmc_limit
[X] also check the impact of that on the limit! -> running right now -> very slight improvement to gae² = 6.28628480082639e-21 from gae² = 6.333984435685045e-21
[ ] analyze memory usage of likelihood when using NN veto
[X] Train a new MLP of same architecture as currently, but using background data over the whole chip!
[X] CenterX or Y is not part of inputs, right? -> nope, it's not
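The KDTree serializer change relies on the fact that a k-d tree is fully determined by its points: store only the flat point array in H5 and rebuild the nested nodes on load. A toy Python illustration of the rebuild step (the real tree lives in a Nim KDTree type; this median-split scheme is a generic sketch, not necessarily the library's exact construction):

```python
# Rebuild a k-d tree from a flat list of points, alternating the
# split axis with depth; only the point list ever needs to be stored.
def build_kdtree(points, depth=0):
    if not points:
        return None
    axis = depth % len(points[0])
    pts = sorted(points, key=lambda p: p[axis])
    mid = len(pts) // 2
    return {"point": pts[mid],
            "left":  build_kdtree(pts[:mid], depth + 1),
            "right": build_kdtree(pts[mid + 1:], depth + 1)}

tree = build_kdtree([(2, 3), (5, 4), (9, 6), (4, 7), (8, 1), (7, 2)])
```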
Here we go:
./train_ingrid \
  ~/CastData/data/CalibrationRuns2017_Reco.h5 \
  ~/CastData/data/CalibrationRuns2018_Reco.h5 \
  --back ~/CastData/data/DataRuns2017_Reco.h5 \
  --back ~/CastData/data/DataRuns2018_Reco.h5 \
  --modelOutpath ~/org/resources/nn_devel_mixing/10_05_23_sgd_gauss_diffusion_tanh300_mse_loss/mlp_tanh300_mse.pt \
  --plotPath ~/Sync/10_05_23_sgd_tanh300_mse/ \
  --datasets eccentricity \
  --datasets skewnessLongitudinal \
  --datasets skewnessTransverse \
  --datasets kurtosisLongitudinal \
  --datasets kurtosisTransverse \
  --datasets length \
  --datasets width \
  --datasets rmsLongitudinal \
  --datasets rmsTransverse \
  --datasets lengthDivRmsTrans \
  --datasets rotationAngle \
  --datasets fractionInTransverseRms \
  --datasets totalCharge \
  --datasets σT \
  --numHidden 300 \
  --numHidden 300 \
  --activation tanh \
  --outputActivation sigmoid \
  --lossFunction MSE \
  --optimizer SGD \
  --learningRate 7e-4 \
  --simulatedData \
  --backgroundRegion crAll \
  --nFake 250_000 \
  --subsetPerRun 6000
-> Trained up to 500k.
1.46.
Continue from yesterday:
[ ] analyze memory usage of likelihood when using NN veto -> try with nimprof -> useless
Regenerating the cache tables:
./determineDiffusion \
  ~/CastData/data/DataRuns2017_Reco.h5 \
  ~/CastData/data/CalibrationRuns2017_Reco.h5 \
  ~/CastData/data/DataRuns2018_Reco.h5 \
  ~/CastData/data/CalibrationRuns2018_Reco.h5
./createAllLikelihoodCombinations \
  --f2017 ~/CastData/data/DataRuns2017_Reco.h5 \
  --f2018 ~/CastData/data/DataRuns2018_Reco.h5 \
  --c2017 ~/CastData/data/CalibrationRuns2017_Reco.h5 \
  --c2018 ~/CastData/data/CalibrationRuns2018_Reco.h5 \
  --regions crAll --regions crGold \
  --signalEfficiency 0.99 \
  --fadcVetoPercentile 0.99 \
  --vetoSets "{fkMLP, +fkFadc, +fkScinti, fkSeptem, fkLineVeto, fkExclusiveLineVeto}" \
  --mlpPath ~/org/resources/nn_devel_mixing/10_05_23_sgd_gauss_diffusion_tanh300_mse_loss/mlp_tanh300_msecheckpoint_epoch_485000_loss_0.0055_acc_0.9933.pt \
  --out ~/org/resources/lhood_limits_10_05_23_mlp_sEff_0.99/ \
  --cdlFile ~/CastData/data/CDL_2019/calibration-cdl-2018.h5
With these plots generated and the corresponding sections written, we are almost done with that part of the thesis.
[ ]
Mini section about background rate with MLP
[ ]
Extend MLP w/ all vetoes to include MLP
1.47.
[X]
Let's see if running
likelihood
with lnL instead of the MLP also eats more and more memory!
likelihood -f /home/basti/CastData/data/DataRuns2017_Reco.h5 --h5out /t/dm_lnl_slowit.h5 --region=crAll --cdlYear=2018 --scintiveto --fadcveto --septemveto --lineveto --lnL --cdlFile=/home/basti/CastData/data/CDL_2019/calibration-cdl-2018.h5 --readOnly --calibFile=/home/basti/CastData/data/CalibrationRuns2017_Reco.h5 --vetoPercentile=0.99 --signalEff=0.8
-> Not really. It stays nice and low.
Running the following script:
import nimhdf5, os, seqmath, sequtils, datamancer
import ingrid / ingrid_types

proc readAllChipData(h5f: H5File, group: H5Group, numChips: int): AllChipData =
  ## Read all data for all chips of this run that we need for the septem veto
  let vlenXY = special_type(uint8)
  let vlenCh = special_type(float64)
  result = AllChipData(x: newSeq[seq[seq[uint8]]](numChips),
                       y: newSeq[seq[seq[uint8]]](numChips),
                       ToT: newSeq[seq[seq[uint16]]](numChips),
                       charge: newSeq[seq[seq[float]]](numChips))
  for i in 0 ..< numChips:
    result.x[i] = h5f[group.name / "chip_" & $i / "x", vlenXY, uint8]
    result.y[i] = h5f[group.name / "chip_" & $i / "y", vlenXY, uint8]
    result.ToT[i] = h5f[group.name / "chip_" & $i / "ToT", vlenCh, uint16]
    result.charge[i] = h5f[group.name / "chip_" & $i / "charge", vlenCh, float]

var h5f = H5open("~/CastData/data/DataRuns2017_Reco.h5", "r")
let grp = h5f["/reconstruction/run_186/".grp_str]
echo grp
var df = newDataFrame()
for i in 0 ..< 10:
  let data = readAllChipData(h5f, grp, 7)
  df = toDf({"x" : data.x.flatten.mapIt(it.float)})
  echo "Read: ", i, " = ", getOccupiedMem().float / 1e6, " MB", " df len: ", df.len
discard h5f.close()
under valgrind right now.
-> In this setup valgrind does not see any leaks.
Let's also run it under heaptrack to see where the 8 GB of memory come from!
-> Running without -d:useMalloc yields pretty much nothing (as heaptrack seems to intercept malloc calls)
-> With -d:useMalloc it doesn't look unusual at all
-> the 8 GB seen when running without -d:useMalloc seem to be the standard Nim allocator doing its thing
-> So this code snippet seems fine.
[ ]
let's try to trim down likelihood to still reproduce the memleak issue
[ ]
see the memory usage of a cpp MLP run without vetoes -> so far seems to run without growing endlessly. -> it grows slightly, approaching 9 GB. Maybe we can run this under
heaptrack
? -> Yeah, no problems without septem & line veto -> Now trying with only septem veto -> This already crashes heaptrack! -> Let's try likelihood with lnL:
heaptrack likelihood -f /home/basti/CastData/data/DataRuns2017_Reco.h5 --h5out /t/more_lnl_slowit.h5 --region=crAll --cdlYear=2018 --scintiveto --fadcveto --septemveto --lnL --cdlFile=/home/basti/CastData/data/CDL_2019/calibration-cdl-2018.h5 --readOnly --calibFile=/home/basti/CastData/data/CalibrationRuns2017_Reco.h5 --vetoPercentile=0.99 --signalEff=0.80
[X]
Even with lnL and septem veto heaptrack crashes! Let's see if we understand where.
[ ]
Debug the crashing of
heaptrack
-> cause is in septem veto -> in geometry -> in DBSCAN reconstruction -> in the nearestNeighbor call
[X]
serialized a bunch of pixels that cause the crash in /tmp/serial_pixels.h5
-> let's only run DBSCAN on these pixels!
import arraymancer

proc callDb(data: Tensor[float]) =
  echo dbscan(data, 65.0, 3)

import nimhdf5

type
  Foo = object
    shape: seq[int]
    data: seq[float]

let foo = deserializeH5[Foo]("/tmp/serial_pixels.h5")
var pT = foo.data.toTensor.reshape(foo.shape)
for _ in 0 ..< 1000:
  callDb(pT)
-> cannot reproduce the problem this way
[X]
Ok, given that heaptrack always seems to crash when using DBSCAN in that context, I just ran it using the default cluster algo for the septem veto logic. This ran correctly and showed no real memory leak. And a peak heap mem size of ~6 GB.
[X]
Running again now using the default clusterer but using the MLP! -> Reproduces the problem and heaptrack tracks the issue. We're leaking in the H5 library due to identifiers not being closed. I've changed the code now to automatically close identifiers by attaching them to =destroy calls. -> Also changed the logic to only read the MLP H5 file a single time for the septem veto, because this was the main origin (reading the file for each single cluster!)
UPDATE:
-> We've replaced the distinct hid_t logic in nimhdf5 by an approach that wraps the identifiers in a ref object, to make sure we destroy every single identifier when it goes out of scope. This fixed the memory leak, which could finally be tracked properly with the following snippet:
import nimhdf5
type
  MLPDesc* = object
    version*: int         # Version of this MLPDesc object
    path*: string         # model path to the checkpoint files including the default model name!
    modelDir*: string     # the parent directory of `path`
    plotPath*: string     # path in which plots are placed
    calibFiles*: seq[string] ## Path to the calibration files
    backFiles*: seq[string]  ## Path to the background data files
    simulatedData*: bool
    numInputs*: int
    numHidden*: seq[int]
    numLayers*: int
    learningRate*: float
    datasets*: seq[string] # Not `InGridDsetKind` to support arbitrary new columns
    subsetPerRun*: int
    rngSeed*: int
    backgroundRegion*: string
    nFake*: int           # number of fake events per run period to generate
    activationFunction*: string
    outputActivation*: string
    lossFunction*: string
    optimizer*: string
    # fields that store training information
    epochs*: seq[int]     ## epochs at which plots and checkpoints are generated
    accuracies*: seq[float]
    testAccuracies*: seq[float]
    losses*: seq[float]
    testLosses*: seq[float]

proc getNumber(file: string): int =
  let desc = deserializeH5[MLPDesc](file)
  result = desc.numInputs

proc main(fname: string) =
  for i in 0 ..< 50_000:
    echo "Number: ", i, " gives ", getNumber(fname)

when isMainModule:
  import cligen
  dispatch main
running via:
heaptrack ./testmem3 -f ~/org/resources/nn_devel_mixing/23_04_23_sgd_gauss_diffusion_tanh300_mse_loss/mlp_desc.h5
The issue really was the many deserializeH5
calls, which suddenly
highlighted the memory leak!
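The core of the fix can be illustrated with a minimal Python analogy (the names `close_id` and `IdWrapper` are hypothetical; the real code is the nimhdf5 ref object with a =destroy hook): wrap each raw identifier in an object whose finalizer closes it, so an identifier can no longer leak when it goes out of scope.

```python
# Minimal sketch of the "wrap identifiers, close on destruction" pattern.
# `close_id` stands in for the H5 library's close call; `IdWrapper.__del__`
# mirrors the Nim `=destroy` hook attached to the ref object.
closed_ids = []

def close_id(hid):
    # stand-in for e.g. closing a dataset/dataspace identifier
    closed_ids.append(hid)

class IdWrapper:
    def __init__(self, hid):
        self.hid = hid
    def __del__(self):
        close_id(self.hid)  # runs when the wrapper goes out of scope

def read_dataset():
    ident = IdWrapper(42)  # identifier acquired
    return "data"          # wrapper dropped on return -> identifier closed

data = read_dataset()
print(closed_ids)  # identifier 42 was closed automatically
```

With the previous `distinct hid_t` approach the close call had to be made manually on every code path, which is exactly what was missed in the per-cluster deserialization.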
In the meantime let's look at background clusters of the likelihood
outputs from last night:
1.48.
[ ]
Try again to rerun
heaptrack
on the DBSCAN code:
heaptrack likelihood -f /home/basti/CastData/data/DataRuns2017_Reco.h5 --h5out /t/mlp_isit_faster_noleak_dbscan.h5 --region=crAll --cdlYear=2018 --scintiveto --fadcveto --septemveto --mlp ~/org/resources/nn_devel_mixing/10_05_23_sgd_gauss_diffusion_tanh300_mse_loss/mlp_tanh300_msecheckpoint_epoch_485000_loss_0.0055_acc_0.9933.pt --cdlFile=/home/basti/CastData/data/CDL_2019/calibration-cdl-2018.h5 --readOnly --calibFile=/home/basti/CastData/data/CalibrationRuns2017_Reco.h5 --vetoPercentile=0.99 --nnSignalEff=0.99
-> Still the same problem! So either a bug in heaptrack or a bug in our code still. :/
[ ]
Run perf on the code using the default clustering algo. Then use hotspot to check the report:
perf record -o /t/log_noleaks_slow.data --call-graph dwarf -- likelihood -f /home/basti/CastData/data/DataRuns2017_Reco.h5 --h5out /t/log_noleaks_debug_slowit.h5 --region=crAll --cdlYear=2018 --scintiveto --fadcveto --septemveto --lineveto --mlp ~/org/resources/nn_devel_mixing/10_05_23_sgd_gauss_diffusion_tanh300_mse_loss/mlp_tanh300_msecheckpoint_epoch_485000_loss_0.0055_acc_0.9933.pt --cdlFile=/home/basti/CastData/data/CDL_2019/calibration-cdl-2018.h5 --readOnly --calibFile=/home/basti/CastData/data/CalibrationRuns2017_Reco.h5 --vetoPercentile=0.99 --nnSignalEff=0.99
Killed it after about 20 runs processed as the file already grew to over 15 GB. Should be enough statistics… :) -> Ok, the perf data shows that time is spent in a variety of different places. mergeChain of the cluster algo is a bigger one, so is forward of the MLP, and generally a whole lot of copying data. Nothing in particular jumps out though. I guess performance for this is acceptable for now.
Let's also look at DBSCAN (same command as above): -> Yeah, as expected. The queryImpl call is by far the dominating part of the performance report when using DBSCAN. The HeapQueues used make up >50% of the time, with the distance computation, index_select and toTensorTuple making up the rest. In toTensorTuple the dominating factor is pop.
-> We've replaced the HeapQueue by a SortedSeq now for testing. With it the pop procedure is much faster (but insert is a bit slower). We've finally added a --run option to likelihood to now time the performance of the heap queue vs. the sorted seq. We'll run on run 168 as it is one of the longest runs. Sorted seq:
likelihood -f /home/basti/CastData/data/DataRuns2017_Reco.h5 --h5out 314.52s user 3.16s system 100%
Heap queue:
likelihood -f /home/basti/CastData/data/DataRuns2017_Reco.h5 --h5out 324.78s user 3.12s system 100%
-> Well. Guess it is a bit faster than the heap queue approach, but it's a small improvement. I guess we mostly traded the cost of popping for the cost of building the sorted seq.
NOTE: There was a segfault after the run was done?
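The tradeoff can be sketched in a few lines of Python (illustrative only, not the arraymancer code): a heap pays O(log n) on both push and pop, while a sorted sequence pays O(n) shifting on insert but pops the smallest element for free from the end if kept in descending order of priority.

```python
import heapq
from bisect import insort

values = [5, 1, 4, 2, 3]

# Heap queue: O(log n) push, O(log n) pop of the smallest element.
heap = []
for v in values:
    heapq.heappush(heap, v)
heap_order = [heapq.heappop(heap) for _ in range(len(values))]

# Sorted sequence: O(n) insert (element shifting), but popping the current
# minimum is an O(1) removal from the end. We store negated values so the
# smallest original value sits at the end of the ascending list.
sseq = []
for v in values:
    insort(sseq, -v)
sorted_order = [-sseq.pop() for _ in range(len(values))]

print(heap_order, sorted_order)  # both yield the values in ascending order
```

Which variant wins depends on the push/pop ratio of the k-d tree query, which matches the observation above that the gain was small.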
[X]
just ran heaptrack again on the default clustering case for the full case, because I didn't remember if we did that with the ref object IDs. All looking good now. Peak memory at 9.5 GB. High, but explained by the 3.7 GB overhead of CUDA (that is "leaked"). Oh, I just googled for the CUDA "leak": https://discuss.pytorch.org/t/memory-leak-in-libtorch-extremely-simple-code/38149/3 -> It's because we don't use "no grad" mode! Hmm, but we *are* using our no_grad_mode template in nn_types and effectively in nn_predict. -> I introduced the "NoGradGuard" into Flambeau and used it in places in addition to the no_grad_mode and ran heaptrack again. Let's see. -> Didn't change anything!
So I guess that means we continue with our actual work. Performance is deemed acceptable now.
Let's go with a test run of 5 different jobs in
createAllLikelihoodCombinations
:
./createAllLikelihoodCombinations \ --f2017 ~/CastData/data/DataRuns2017_Reco.h5 \ --f2018 ~/CastData/data/DataRuns2018_Reco.h5 \ --c2017 ~/CastData/data/CalibrationRuns2017_Reco.h5 \ --c2018 ~/CastData/data/CalibrationRuns2018_Reco.h5 \ --regions crAll --regions crGold \ --signalEfficiency 0.95 --signalEfficiency 0.90 \ --fadcVetoPercentile 0.99 \ --vetoSets "{fkMLP, +fkFadc, +fkScinti, fkSeptem, fkLineVeto, fkExclusiveLineVeto}" \ --mlpPath ~/org/resources/nn_devel_mixing/10_05_23_sgd_gauss_diffusion_tanh300_mse_loss/mlp_tanh300_msecheckpoint_epoch_485000_loss_0.0055_acc_0.9933.pt \ --out ~/org/resources/lhood_limits_10_05_23_mlp_sEff_0.99/ \ --cdlFile ~/CastData/data/CDL_2019/calibration-cdl-2018.h5 \ --multiprocessing \ --jobs 5
[X]
We get errors for dataspaces. Apparently our nimhdf5 code is problematic in some cases. Let's try to fix that up and run all its tests. -> tread_write reproduces it -> fixed all the issues by another rewrite of the ID logic
1.49.
NOTE: I'm stopping the following limit calculation at
:shellCmd: mcmc_limit_calculation limit -f /home/basti/org/resources/lhood_limits_10_05_23_mlp_sEff_0.99/lhood_c18_R2_crAll_sEff_0.99_mlp_mlp_tanh300_msecheckpoint_epoch_485000_loss_0.0055_acc_0.9933.h5 -f /home/basti/org/resources/lhood_limits_10_05_23_mlp_sEff_0.99/lhood_c18_R3_crAll_sEff_0.99_mlp_mlp_tanh300_msecheckpoint_epoch_485000_loss_0.0055_acc_0.9933.h5 --years 2017 --years 2018 --σ_p 0.05 --limitKind lkMCMC --nmc 1000 --suffix=_sEff_0.99_mlp_mlp_tanh300_msecheckpoint_epoch_485000_loss_0.0055_acc_0.9933 --path "" --outpath /home/basti/org/resources/lhood_limits_10_05_23_mlp_sEff_0.99/limits/
shell 5127> files @["/home/basti/org/resources/lhood_limits_10_05_23_mlp_sEff_0.99/lhood_c18_R2_crAll_sEff_0.99_mlp_mlp_tanh300_msecheckpoint_epoch_485000_loss_0.0055_acc_0.9933.h5", "/home/basti/org/resources/lhood_limits_10_05_23_mlp_sEff_0.99/lhood_c18_R3_crAll_sEff_0.99_mlp_mlp_tanh300_msecheckpoint_epoch_485000_loss_0.0055_acc_0.9933.h5"]
shell 5127>
shell 5127> @["/home/basti/org/resources/lhood_limits_10_05_23_mlp_sEff_0.99/lhood_c18_R2_crAll_sEff_0.99_mlp_mlp_tanh300_msecheckpoint_epoch_485000_loss_0.0055_acc_0.9933.h5", "/home/basti/org/resources/lhood_limits_10_05_23_mlp_sEff_0.99/lhood_c18_R3_crAll_sEff_0.99_mlp_mlp_tanh300_msecheckpoint_epoch_485000_loss_0.0055_acc_0.9933.h5"]
shell 5127> [INFO]: Read a total of 340911 input clusters.
because each MCMC takes too long to build:
shell 5127> MC index 12
shell 5127> Building chain of 150000 elements took 126.9490270614624 s
shell 5127> Acceptance rate: 0.2949866666666667 with last two states of chain: @[@[1.250804325635904e-21, 0.02908566421900792, -0.004614450435544698, 0.03164360408307887, 0.03619049049344585], @[1.250804325635904e-21, 0.02908566421900792, -0.004614450435544698, 0.03164360408307887, 0.03619049049344585]]
shell 5127> Limit at 7.666033305250432e-21
shell 5127> Number of candidates: 16133
shell 5127> INFO: The integer column `Hist` has been automatically determined to be continuous. To overwrite this behavior add a `+ scale_x/y_discrete()` call to the plotting chain. Choose `x` or `y` depending on which axis this column refers to. Or apply a `factor` to the column name in the `aes` call, i.e. `aes(..., factor("Hist"), ...)`.
shell 5127> MC index 12
shell 5127> Building chain of 150000 elements took 133.913836479187 s
shell 5127> Acceptance rate: 0.3029 with last two states of chain: @[@[5.394302558724233e-21, 0.005117702932866319, -0.003854151251413666, 0.1140589851066757, -0.01650063836525805], @[5.394302558724233e-21, 0.005117702932866319, -0.003854151251413666, 0.1140589851066757, -0.01650063836525805]]
shell 5127> Limit at 8.815294304890919e-21
shell 5127> Number of candidates: 16518
shell 5127> INFO: The integer column `Hist` has been automatically determined to be continuous. To overwrite this behavior add a `+ scale_x/y_discrete()` call to the plotting chain. Choose `x` or `y` depending on which axis this column refers to. Or apply a `factor` to the column name in the `aes` call, i.e. `aes(..., factor("Hist"), ...)`.
shell 5127> MC index 12
(due to the ~16k candidates)
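For scale, the log above implies roughly 1200 MCMC steps per second, and each step presumably evaluates the likelihood over all ~16k candidates, so the runtime is plausible rather than a bug:

```python
# Throughput implied by the log output above.
steps = 150_000          # chain length per MC toy
seconds = 126.949        # time to build one chain (from the log)
print(round(steps / seconds), "steps/s")
```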
I'll add the file to processed.txt
and start the rest now.
shell 5127> Initial chain state: @[3.325031213438127e-21, -0.005975150812670194, 0.2566543411242529, 0.1308918272537833, 0.3838098582402962]
^C Interrupted while running processing of file: /home/basti/org/resources/lhood_limits_10_05_23_mlp_sEff_0.99/lhood_c18_R2_crAll_sEff_0.99_mlp_mlp_tanh300_msecheckpoint_epoch_485000_loss_0.0055_acc_0.9933.h5
Check the `processed.txt` file in the output path to see which files were processed successfully!
Added that file manually to the processed.txt
file now. Restarting:
./runLimits --path ~/org/resources/lhood_limits_10_05_23_mlp_sEff_0.99/ --outpath ~/org/resources/lhood_limits_10_05_23_mlp_sEff_0.99/limits/ --prefix lhood_c18_R2_crAll --nmc 1000
Doing the same with 90% MLP:
^C Interrupted while running processing of file: /home/basti/org/resources/lhood_limits_10_05_23_mlp_sEff_0.99/lhood_c18_R2_crAll_sEff_0.9_mlp_mlp_tanh300_msecheckpoint_epoch_485000_loss_0.0055_acc_0.9933.h5
Check the `processed.txt` file in the output path to see which files were processed successfully!
Restarted with the veto based ones for 90% left.
Running the expected limits table generator:
[X]
I updated it to include the MLP efficiencies. Still have to change it so that it prints whether MLP or LnL was in use!
./generateExpectedLimitsTable \ -p ~/org/resources/lhood_limits_10_05_23_mlp_sEff_0.99/limits/ \ --prefix mc_limit_lkMCMC_skInterpBackground_nmc_1000
yields:
εlnL | MLP | MLPeff | Scinti | FADC | εFADC | Septem | Line | eccLineCut | εSeptem | εLine | εSeptemLine | Total eff. | Limit no signal | Expected Limit |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
0.8 | 0.95 | 0.9107 | true | true | 0.98 | false | true | 1 | 0.7841 | 0.8602 | 0.7325 | 0.7677 | 3.7078e-21 | 7.7409e-23 |
0.8 | 0.99 | 0.9718 | true | true | 0.98 | false | true | 1 | 0.7841 | 0.8602 | 0.7325 | 0.8192 | 3.509e-21 | 7.871e-23 |
0.8 | 0.9 | 0.8474 | true | true | 0.98 | false | true | 1 | 0.7841 | 0.8602 | 0.7325 | 0.7143 | 3.8986e-21 | 7.9114e-23 |
0.8 | 0.95 | 0.9107 | false | false | 0.98 | false | false | 1 | 0.7841 | 0.8602 | 0.7325 | 0.9107 | 3.1115e-21 | 8.1099e-23 |
0.8 | 0.85 | 0.7926 | true | true | 0.98 | false | true | 1 | 0.7841 | 0.8602 | 0.7325 | 0.6681 | 4.2397e-21 | 8.1234e-23 |
0.8 | 0.8 | 0.7398 | true | true | 0.98 | false | true | 1 | 0.7841 | 0.8602 | 0.7325 | 0.6237 | 4.5115e-21 | 8.2423e-23 |
0.8 | 0.85 | 0.7926 | false | false | 0.98 | false | false | 1 | 0.7841 | 0.8602 | 0.7325 | 0.7926 | 3.6449e-21 | 8.3336e-23 |
0.8 | 0.99 | 0.9718 | true | true | 0.98 | true | true | 1 | 0.7841 | 0.8602 | 0.7325 | 0.6976 | 4.0701e-21 | 8.3474e-23 |
0.8 | 0.99 | 0.9718 | true | true | 0.98 | true | false | 1 | 0.7841 | 0.8602 | 0.7325 | 0.7468 | 3.8991e-21 | 8.3492e-23 |
0.8 | 0.8 | 0.7398 | false | false | 0.98 | false | false | 1 | 0.7841 | 0.8602 | 0.7325 | 0.7398 | 3.9209e-21 | 8.4438e-23 |
0.8 | 0.95 | 0.9107 | true | true | 0.98 | true | true | 1 | 0.7841 | 0.8602 | 0.7325 | 0.6538 | 4.2749e-21 | 8.4451e-23 |
0.8 | 0.9 | 0.8474 | true | true | 0.98 | true | true | 1 | 0.7841 | 0.8602 | 0.7325 | 0.6083 | 4.6237e-21 | 8.4821e-23 |
0.8 | 0.95 | 0.9107 | true | true | 0.98 | true | false | 1 | 0.7841 | 0.8602 | 0.7325 | 0.6998 | 4.049e-21 | 8.5324e-23 |
0.8 | 0.9 | 0.8474 | true | true | 0.98 | true | false | 1 | 0.7841 | 0.8602 | 0.7325 | 0.6511 | 4.2498e-21 | 8.5486e-23 |
0.8 | 0.85 | 0.7926 | true | true | 0.98 | true | true | 1 | 0.7841 | 0.8602 | 0.7325 | 0.569 | 4.9101e-21 | 8.7655e-23 |
0.8 | 0.85 | 0.7926 | true | true | 0.98 | true | false | 1 | 0.7841 | 0.8602 | 0.7325 | 0.609 | 4.6382e-21 | 8.7954e-23 |
0.8 | 0.8 | 0.7398 | true | true | 0.98 | true | true | 1 | 0.7841 | 0.8602 | 0.7325 | 0.5311 | 5.241e-21 | 8.8823e-23 |
0.8 | 0.8 | 0.7398 | true | true | 0.98 | true | false | 1 | 0.7841 | 0.8602 | 0.7325 | 0.5685 | 4.7938e-21 | 8.8924e-23 |
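As a cross-check of the table, the "Total eff." column appears to be the product of the effective MLP efficiency, the FADC veto efficiency and the active veto efficiency (εLine, or εSeptemLine when the septem veto is active as well). A small Python sketch (the `total_eff` helper is hypothetical) reproduces two of the rows:

```python
def total_eff(mlp_eff, fadc=None, septem_line=None, line=None):
    """Product of the active efficiencies (assumed composition of the table).

    mlp_eff:     effective MLP efficiency (MLPeff column)
    fadc:        FADC veto efficiency, if the FADC veto is active
    septem_line: combined septem+line efficiency, if both vetoes are active
    line:        line veto efficiency, if only the line veto is active
    """
    eff = mlp_eff
    for e in (fadc, septem_line, line):
        if e is not None:
            eff *= e
    return eff

# First row: MLP@0.9107, FADC veto (0.98), line veto only (0.8602)
print(round(total_eff(0.9107, fadc=0.98, line=0.8602), 4))         # 0.7677
# MLP@0.9718 with FADC and septem+line veto (0.7325)
print(round(total_eff(0.9718, fadc=0.98, septem_line=0.7325), 4))  # 0.6976
```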
1.50.
Continue on from meeting with Klaus on:
[ ]
Start by fixing the systematics -> start by extracting the systematic values used to compute the current number. The default values are:
σ_sig = 0.04692492913207222, σ_back = 0.002821014576353691, σ_p = 0.05,
computed from section sec:systematics:combined_uncertainties in StatusAndProgress via the following code (stripped down to signal):
import math, sequtils
let ss = [3.3456, 0.5807, 1.0, 0.2159, 2.32558, 0.18521] #2.0] #1.727]
## ^-- This is the effective efficiency for 55Fe apparently.
proc total(vals: openArray[float]): float =
  for x in vals:
    result += x * x
  result = sqrt(result)
let ss0 = total(ss)
let ss17 = total(concat(@ss, @[1.727]))
let ss2 = total(concat(@ss, @[2.0]))
echo "Combined uncertainty signal (Δ software eff = 0%): ", ss0 / 100.0
echo "Combined uncertainty signal (Δ software eff = 1.727%): ", ss17 / 100.0
echo "Combined uncertainty signal (Δ software eff = 2%): ", ss2 / 100.0
[X]
There is one mystery here: The value that comes out of that calculation is ~0.0458 instead of the ~0.0469 used in the code. I didn't know why that is exactly. -> *SOLVED*: The ~0.0469 comes from assuming a 2% software efficiency uncertainty, whereas the ~0.0458 comes from using the value 1.727%!
So now all we need to do is combine the value without the software efficiency numbers with the effective efficiency uncertainty from the MLP (or the 2% for LnL).
We do this by:
import math
let ss = [3.3456, 0.5807, 1.0, 0.2159, 2.32558, 0.18521]
proc total(vals: openArray[float]): float =
  for xf in vals:
    let x = xf / 100.0
    result += x * x
  result = sqrt(result)
let sStart = total(ss) # / 100.0
doAssert abs(sStart - 0.04244936953654317) < 1e-5
# add new value:
let seff = 1.727 / 100.0
let s17 = sqrt( (sStart)^2 + seff^2 )
echo "Uncertainty including Δseff = 1.727% after the fact: ", s17
[X]
Implemented
1.50.1. look into the cluster centers
…that are affected by the more noisy behavior of the LnL
[X]
create background cluster plot for MLP@95% for Run-2 and Run-3 separately
plotBackgroundClusters \ ~/org/resources/lhood_limits_10_05_23_mlp_sEff_0.99/lhood_c18_R2_crAll_sEff_0.95_mlp_mlp_tanh300_msecheckpoint_epoch_485000_loss_0.0055_acc_0.9933.h5 \ --zMax 30 --title "X-ray like clusters CAST MLP@95+no vetoes" \ --outpath ~/Sync/mlp_10_05_23_eff_95_find_noisy_pixels/ \ --suffix "10_05_23_mlp_0.95_only_mlp" \ --energyMax 12.0 --energyMin 0.2
yields file:///home/basti/Sync/mlp_10_05_23_eff_95_find_noisy_pixels/background_cluster_centers10_05_23_mlp_0.95_only_mlp.pdf and running with the existing noise filter:
plotBackgroundClusters \ ~/org/resources/lhood_limits_10_05_23_mlp_sEff_0.99/lhood_c18_R2_crAll_sEff_0.95_mlp_mlp_tanh300_msecheckpoint_epoch_485000_loss_0.0055_acc_0.9933.h5 \ --zMax 30 --title "X-ray like clusters CAST MLP@95+no vetoes" \ --outpath ~/Sync/mlp_10_05_23_eff_95_find_noisy_pixels/ \ --suffix "10_05_23_mlp_0.95_only_mlp_filter_noisy" \ --energyMax 12.0 --energyMin 0.2 \ --filterNoisyPixels
~/Sync/mlp_10_05_23_eff_95_find_noisy_pixels/background_cluster_centers10_05_23_mlp_0.95_only_mlp_filter_noisy.pdf
[X]
Need to add pixel at bottom -> added (66, 107)
[ ]
add noisy thing in Run-2 smaller above
[ ]
add all "Deiche"
[X]
look at plot with vetoes (including septem)
plotBackgroundClusters \ ~/org/resources/lhood_limits_10_05_23_mlp_sEff_0.99/lhood_c18_R2_crAll_sEff_0.95_scinti_fadc_septem_line_mlp_mlp_tanh300_msecheckpoint_epoch_485000_loss_0.0055_acc_0.9933_vQ_0.99.h5 \ --zMax 30 --title "X-ray like clusters CAST MLP@95+allv" \ --outpath ~/Sync/mlp_10_05_23_eff_95_find_noisy_pixels/ \ --suffix "10_05_23_mlp_0.95_mlp_with_vetoes" \ --energyMax 12.0 --energyMin 0.2
plotBackgroundClusters \ ~/org/resources/lhood_limits_10_05_23_mlp_sEff_0.99/lhood_c18_R2_crAll_sEff_0.95_scinti_fadc_septem_line_mlp_mlp_tanh300_msecheckpoint_epoch_485000_loss_0.0055_acc_0.9933_vQ_0.99.h5 \ --zMax 30 --title "X-ray like clusters CAST MLP@95+allv" \ --outpath ~/Sync/mlp_10_05_23_eff_95_find_noisy_pixels/ \ --suffix "10_05_23_mlp_0.95_mlp_with_vetoes_filter_noisy" \ --energyMax 12.0 --energyMin 0.2 \ --filterNoisyPixels
-> removes most things!
[X]
without septem but with line:
plotBackgroundClusters \ ~/org/resources/lhood_limits_10_05_23_mlp_sEff_0.99/lhood_c18_R2_crAll_sEff_0.95_scinti_fadc_line_mlp_mlp_tanh300_msecheckpoint_epoch_485000_loss_0.0055_acc_0.9933_vQ_0.99.h5 \ --zMax 30 --title "X-ray like clusters CAST MLP@95+noseptem" \ --outpath ~/Sync/mlp_10_05_23_eff_95_find_noisy_pixels/ \ --suffix "10_05_23_mlp_0.95_mlp_vetoes_noseptem" \ --energyMax 12.0 --energyMin 0.2
plotBackgroundClusters \ ~/org/resources/lhood_limits_10_05_23_mlp_sEff_0.99/lhood_c18_R2_crAll_sEff_0.95_scinti_fadc_line_mlp_mlp_tanh300_msecheckpoint_epoch_485000_loss_0.0055_acc_0.9933_vQ_0.99.h5 \ --zMax 30 --title "X-ray like clusters CAST MLP@95+noseptem" \ --outpath ~/Sync/mlp_10_05_23_eff_95_find_noisy_pixels/ \ --suffix "10_05_23_mlp_0.95_mlp_vetoes_noseptem_filter_noisy" \ --energyMax 12.0 --energyMin 0.2 \ --filterNoisyPixels
[X]
same plots for Run-3 (without filter noisy pixels, as there's no difference)
plotBackgroundClusters \ ~/org/resources/lhood_limits_10_05_23_mlp_sEff_0.99/lhood_c18_R3_crAll_sEff_0.95_mlp_mlp_tanh300_msecheckpoint_epoch_485000_loss_0.0055_acc_0.9933.h5 \ --zMax 30 --title "X-ray like clusters CAST MLP@95+no vetoes" \ --outpath ~/Sync/mlp_10_05_23_eff_95_find_noisy_pixels/ \ --suffix "10_05_23_mlp_R3_0.95_only_mlp" \ --energyMax 12.0 --energyMin 0.2
plotBackgroundClusters \ ~/org/resources/lhood_limits_10_05_23_mlp_sEff_0.99/lhood_c18_R3_crAll_sEff_0.95_scinti_fadc_septem_line_mlp_mlp_tanh300_msecheckpoint_epoch_485000_loss_0.0055_acc_0.9933_vQ_0.99.h5 \ --zMax 30 --title "X-ray like clusters CAST MLP@95+allv" \ --outpath ~/Sync/mlp_10_05_23_eff_95_find_noisy_pixels/ \ --suffix "10_05_23_mlp_R3_0.95_mlp_with_vetoes" \ --energyMax 12.0 --energyMin 0.2
plotBackgroundClusters \ ~/org/resources/lhood_limits_10_05_23_mlp_sEff_0.99/lhood_c18_R3_crAll_sEff_0.95_scinti_fadc_line_mlp_mlp_tanh300_msecheckpoint_epoch_485000_loss_0.0055_acc_0.9933_vQ_0.99.h5 \ --zMax 30 --title "X-ray like clusters CAST MLP@95+noseptem" \ --outpath ~/Sync/mlp_10_05_23_eff_95_find_noisy_pixels/ \ --suffix "10_05_23_mlp_R3_0.95_mlp_vetoes_noseptem" \ --energyMax 12.0 --energyMin 0.2
-> Looks SO MUCH better! Why the heck is that?
We can gather:
- No need for filtering of noisy pixels in the Run-3 dataset!
- just add "Deiche" and the individual noisy thing in the Run-2 dataset still visible above the "known" point.
For the secondary cluster we'll redo the Run-2 noisy pixel filter plot without vetoes:
plotBackgroundClusters \ ~/org/resources/lhood_limits_10_05_23_mlp_sEff_0.99/lhood_c18_R2_crAll_sEff_0.95_mlp_mlp_tanh300_msecheckpoint_epoch_485000_loss_0.0055_acc_0.9933.h5 \ --zMax 30 --title "X-ray like clusters CAST MLP@95+no vetoes" \ --outpath ~/Sync/mlp_10_05_23_eff_95_find_noisy_pixels/ \ --suffix "10_05_23_mlp_0.95_only_mlp_filter_noisy_find_clusters" \ --energyMax 12.0 --energyMin 0.2 \ --filterNoisyPixels
Ok, I think I've eliminated all important pixels. We could think
about taking out "one more radius" essentially. If plotting without
--zMax
there is still a "ring" left in each of the primary clusters.
1.50.2. Recomputing the limits
To recompute the limits we run
./runLimits \ --path ~/org/resources/lhood_limits_10_05_23_mlp_sEff_0.99/ \ --outpath ~/org/resources/lhood_limits_10_05_23_mlp_sEff_0.99/limits_fix_sigEff_noiseFilter \ --prefix lhood_c18_R2_crAll \ --nmc 1000
Note the adjusted output directory.
We start with the following processed.txt
file already in the output
directory. That way we skip the "no vetoes" cases completely.
/home/basti/org/resources/lhood_limits_10_05_23_mlp_sEff_0.99/lhood_c18_R2_crAll_sEff_0.85_mlp_mlp_tanh300_msecheckpoint_epoch_485000_loss_0.0055_acc_0.9933.h5
/home/basti/org/resources/lhood_limits_10_05_23_mlp_sEff_0.99/lhood_c18_R2_crAll_sEff_0.8_mlp_mlp_tanh300_msecheckpoint_epoch_485000_loss_0.0055_acc_0.9933.h5
/home/basti/org/resources/lhood_limits_10_05_23_mlp_sEff_0.99/lhood_c18_R2_crAll_sEff_0.95_mlp_mlp_tanh300_msecheckpoint_epoch_485000_loss_0.0055_acc_0.9933.h5
/home/basti/org/resources/lhood_limits_10_05_23_mlp_sEff_0.99/lhood_c18_R2_crAll_sEff_0.99_mlp_mlp_tanh300_msecheckpoint_epoch_485000_loss_0.0055_acc_0.9933.h5
/home/basti/org/resources/lhood_limits_10_05_23_mlp_sEff_0.99/lhood_c18_R2_crAll_sEff_0.9_mlp_mlp_tanh300_msecheckpoint_epoch_485000_loss_0.0055_acc_0.9933.h5
Generate the expected limits table:
./generateExpectedLimitsTable \ -p ~/org/resources/lhood_limits_10_05_23_mlp_sEff_0.99/limits_fix_sigEff_noiseFilter/ \ --prefix mc_limit_lkMCMC_skInterpBackground_nmc_1000
εlnL | MLP | MLPeff | Scinti | FADC | εFADC | Septem | Line | eccLineCut | εSeptem | εLine | εSeptemLine | Total eff. | Limit no signal | Expected Limit |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
0.8 | 0.95 | 0.9107 | true | true | 0.98 | false | true | 1 | 0.7841 | 0.8602 | 0.7325 | 0.7677 | 3.638e-21 | 7.7467e-23 |
0.8 | 0.99 | 0.9718 | true | true | 0.98 | false | true | 1 | 0.7841 | 0.8602 | 0.7325 | 0.8192 | 3.3403e-21 | 7.8596e-23 |
0.8 | 0.9 | 0.8474 | true | true | 0.98 | false | true | 1 | 0.7841 | 0.8602 | 0.7325 | 0.7143 | 3.8192e-21 | 7.8876e-23 |
0.8 | 0.85 | 0.7926 | true | true | 0.98 | false | true | 1 | 0.7841 | 0.8602 | 0.7325 | 0.6681 | 4.3209e-21 | 8.1569e-23 |
0.8 | 0.8 | 0.7398 | true | true | 0.98 | false | true | 1 | 0.7841 | 0.8602 | 0.7325 | 0.6237 | 4.6466e-21 | 8.1907e-23 |
0.8 | 0.95 | 0.9107 | true | true | 0.98 | true | true | 1 | 0.7841 | 0.8602 | 0.7325 | 0.6538 | 4.464e-21 | 8.2924e-23 |
0.8 | 0.9 | 0.8474 | true | true | 0.98 | true | true | 1 | 0.7841 | 0.8602 | 0.7325 | 0.6083 | 4.7344e-21 | 8.4169e-23 |
0.8 | 0.99 | 0.9718 | true | true | 0.98 | true | true | 1 | 0.7841 | 0.8602 | 0.7325 | 0.6976 | 4.0383e-21 | 8.4416e-23 |
0.8 | 0.99 | 0.9718 | true | true | 0.98 | true | false | 1 | 0.7841 | 0.8602 | 0.7325 | 0.7468 | 3.867e-21 | 8.4691e-23 |
0.8 | 0.95 | 0.9107 | true | true | 0.98 | true | false | 1 | 0.7841 | 0.8602 | 0.7325 | 0.6998 | 3.9627e-21 | 8.5747e-23 |
0.8 | 0.9 | 0.8474 | true | true | 0.98 | true | false | 1 | 0.7841 | 0.8602 | 0.7325 | 0.6511 | 4.3262e-21 | 8.6508e-23 |
0.8 | 0.85 | 0.7926 | true | true | 0.98 | true | true | 1 | 0.7841 | 0.8602 | 0.7325 | 0.569 | 4.8645e-21 | 8.7205e-23 |
0.8 | 0.85 | 0.7926 | true | true | 0.98 | true | false | 1 | 0.7841 | 0.8602 | 0.7325 | 0.609 | 4.7441e-21 | 8.8143e-23 |
0.8 | 0.8 | 0.7398 | true | true | 0.98 | true | false | 1 | 0.7841 | 0.8602 | 0.7325 | 0.5685 | 5.0982e-21 | 8.8271e-23 |
0.8 | 0.8 | 0.7398 | true | true | 0.98 | true | true | 1 | 0.7841 | 0.8602 | 0.7325 | 0.5311 | 5.1353e-21 | 8.8536e-23 |
1.51.
[ ]
IMPORTANT: When using MLP in classification, how do we deal with inputs that have NaN values??? Do we filter them out, i.e. reject them?
let data = h5f.readValidDsets(grp, filterNan = false)
-> How do we deal with them? Ah, from likelihood:
(classify(nnPred[ind]) == fcNaN or # some clusters are NaN due to certain bad geometry, kick those out!
                                   # -> clusters of sparks on edge of chip
However, is this actually enough? Maybe there are cases where the result is not NaN for some bizarre reason? Shouldn't be allowed to happen, because NaN "infects" everything.
Still: check the mapping of input NaN values to output NaN values! Any weirdness? What do outputs of all those many clusters in the corners look like? Maybe there is something to learn there? Same with very low energy behavior.
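The "NaN infects everything" argument can be checked directly. A small Python illustration (hypothetical toy weights, not the actual MLP) of why a NaN input should surface as a NaN prediction, plus the one caveat where it might not:

```python
import math

# Toy stand-in for a forward pass: a chain of arithmetic and tanh ops,
# all of which propagate a NaN input to the output.
def tiny_mlp(x):
    h = math.tanh(0.5 * x + 1.0)  # "hidden layer" (hypothetical weights)
    return 2.0 * h - 0.3          # "output layer"

print(tiny_mlp(1.0))           # a regular finite prediction
print(tiny_mlp(float("nan")))  # nan: the input NaN propagates through

# Caveat: comparison-based ops can swallow a NaN. Python's max() is
# order-dependent, e.g. max(0.0, float("nan")) returns 0.0, so a network
# containing max/min-style pooling could in principle map a NaN input to
# a finite output. Plain matmul + activation layers cannot.
```

This supports checking the input-NaN to output-NaN mapping explicitly rather than relying on propagation alone.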
1.52.
I noticed that there seems to be a bug in the sorted_seq-based k-d tree implementation of arraymancer! Some test cases do not pass!
[ ]
INVESTIGATE
This might be essential for the limit calculation!
Also: Yesterday evening in discussion with Cris she told me that their group thinks the focal spot of the telescope is actually in the center of the detector chamber and not the focal plane as we always assumed!
[ ]
FIND OUT -> I wrote a message to Johanna to ask if she knows anything more up to date
1.53.
[ ]
recompile MCMC limit program with different leaf sizes of the k-d tree (try 64)
[ ]
Redo limit calculations for the best 3 cases of MLP & LnL with 10000 toys
[ ]
compute the axion image again in the actual focal spot, then use that input to generate the limit for the best case of the above!
[X]
I moved the data from ./../CastData/data/ to ./../../../mnt/1TB/Uni/DataFromHomeCastData/ to make more space on the home partition!
1.53.1. Expected limits with more toys
Let's get the table from the limit method talk ./Talks/LimitMethod/limit_method.html
Method | \(ε_S\) | FADC | \(ε_{\text{FADC}}\) | Septem | Line | Total eff. | Limit no signal | Expected Limit |
---|---|---|---|---|---|---|---|---|
MLP | 0.9107 | true | 0.98 | false | true | 0.7677 | 6.0315e-23 | 7.7467e-23 |
MLP | 0.9718 | true | 0.98 | false | true | 0.8192 | 5.7795e-23 | 7.8596e-23 |
MLP | 0.8474 | true | 0.98 | false | true | 0.7143 | 6.1799e-23 | 7.8876e-23 |
LnL | 0.9 | true | 0.98 | false | true | 0.7587 | 6.1524e-23 | 7.9443e-23 |
LnL | 0.9 | false | 0.98 | false | true | 0.7742 | 6.0733e-23 | 8.0335e-23 |
MLP | 0.7926 | true | 0.98 | false | true | 0.6681 | 6.5733e-23 | 8.1569e-23 |
MLP | 0.7398 | true | 0.98 | false | true | 0.6237 | 6.8165e-23 | 8.1907e-23 |
We'll do the first 3 MLP rows and the two LnL rows.
Instead of using the tool that runs limit calcs for all input files, we'll manually call the limit calc.
We'll put the output into ./resources/lhood_limits_12_06_23_10k_toys
mcmc_limit_calculation \ limit \ -f /home/basti/org/resources/lhood_limits_10_05_23_mlp_sEff_0.99/lhood_c18_R2_crAll_sEff_0.95_mlp_mlp_tanh300_msecheckpoint_epoch_485000_loss_0.0055_acc_0.9933.h5 \ -f /home/basti/org/resources/lhood_limits_10_05_23_mlp_sEff_0.99/lhood_c18_R3_crAll_sEff_0.95_mlp_mlp_tanh300_msecheckpoint_epoch_485000_loss_0.0055_acc_0.9933.h5 \ --years 2017 --years 2018 \ --σ_p 0.05 \ --limitKind lkMCMC \ --nmc 1000 \ --suffix=_sEff_0.95_mlp_mlp_tanh300_mse_epoch_485000_loss_0.0055_acc_0.9933 \ --path "" \ --outpath /home/basti/org/resources/lhood_limits_12_06_23_10k_toys
NOTE: The file used here was the wrong one. It didn't include any vetoes.

1.54.
Yesterday I sent a mail to multiple people from CAST who might know about the beamline design behind the LLNL telescope to find out the intended idea for the focal spot. The question is where in the detector the focal spot was intended to be.
I was always under the impression the focal spot was on the readout plane, but in a discussion with Cristina she mentioned that she thinks it's in the center of the gas volume.
See the mail titled "CAST detectors behind LLNL telescope" (sent from uni bonn address via fastmail).
Turns out from Igor's and Juan's answer the idea was indeed to place the focal spot in the center of the volume!
This is massive and means we need to recompute the raytracing image!
- [X] recompute the axion image! -> DONE!
- [ ] Rerun the correct expected limit calculations incl. vetoes! What we ran yesterday didn't include vetoes, hence it was also so slow!
- [ ] Rerun the same case (best case) with the correct axion image!
- [ ] Think about whether we ideally really should have a systematic for the z position of the detector, i.e. varying it changes the size of the axion image.
1.54.1. DONE Updating the axion image
Regenerate the DF for the solar flux:
cd ~/CastData/ExternCode/AxionElectronLimit/src
./readOpacityFile
Note that the config.toml file contains the output path (out directory) and output file name solar_model_dataframe.csv. That file should then be moved to resources and set in the config file as the DF to use.
In the raytracer we now set the distance correctly using the config file:
[DetectorInstallation]
useConfig = true # sets whether to read these values here. Can be overridden using flag `--detectorInstall`
# Note: 1500mm is the LLNL focal length. That corresponds to the center of the chamber!
distanceDetectorXRT = 1497.2 # mm
distanceWindowFocalPlane = 0.0 # mm
lateralShift = 0.0 # mm lateral offset of the detector with respect to the beamline
transversalShift = 0.0 # mm transversal offset of the detector with respect to the beamline
\(\SI{1497.2}{mm}\) comes from the mean conversion point being at \(\SI{12.2}{mm}\) behind the detector window. If the focal point at \(\SI{1500}{mm}\) is in the center of the chamber (half the \(\SI{30}{mm}\) depth, i.e. \(\SI{15}{mm}\)), the point to compute the image for is at \(1500 - (15 - 12.2) = 1500 - 2.8 = 1497.2\).
We will also compare it for sanity with the old axion image we've been using in the limit calculation, namely at \(1470 + 12.2 = 1482.2\).
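These two distances can be sanity-checked in a few lines; a minimal sketch in Python, using the 30 mm chamber depth, 12.2 mm mean conversion point and old 1470 mm value from the text above:

```python
# Sanity check of the two raytracing distances discussed above.
chamber_depth = 30.0   # mm, depth of the detector gas volume (2 x 15 mm)
focal_length = 1500.0  # mm, LLNL focal length, placed at the chamber center
mean_conv = 12.2       # mm, mean conversion point behind the detector window

# New distance: image plane at the mean conversion point, which sits
# (15 - 12.2) mm in front of the focal point in the chamber center.
new_distance = focal_length - (chamber_depth / 2 - mean_conv)

# Old distance: previously used value of 1470 mm plus the conversion depth.
old_distance = 1470.0 + mean_conv

print(new_distance, old_distance)  # ≈ 1497.2, ≈ 1482.2
```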
We generated the following files:
- ./resources/axion_images/axion_image_2018_1000.csv
- ./resources/axion_images/axion_image_2018_1470_12.2mm.csv
- ./resources/axion_images/axion_image_2018_1497.2mm.csv
- ./resources/axion_images/axion_image_2018_1500.csv
First of all the 1000mm case shows us that reading from the config file actually works. Then we can compare to the actual center, the old used value and the new. The difference between the old and new is quite profound!
1.54.2. Expected limit calculations
The calculation we started yesterday didn't use the correct input files…
We used the version without any vetoes instead of the MLP + FADC + Line veto case! Hence it was also so slow!
- [X] Case 1:
MLP | 0.9107 | true | 0.98 | false | true | 0.7677 | 6.0315e-23 | 7.7467e-23 |
mcmc_limit_calculation \
    limit \
    -f /home/basti/org/resources/lhood_limits_10_05_23_mlp_sEff_0.99/lhood_c18_R2_crAll_sEff_0.95_scinti_fadc_line_mlp_mlp_tanh300_msecheckpoint_epoch_485000_loss_0.0055_acc_0.9933_vQ_0.99.h5 \
    -f /home/basti/org/resources/lhood_limits_10_05_23_mlp_sEff_0.99/lhood_c18_R3_crAll_sEff_0.95_scinti_fadc_line_mlp_mlp_tanh300_msecheckpoint_epoch_485000_loss_0.0055_acc_0.9933_vQ_0.99.h5 \
    --years 2017 --years 2018 \
    --σ_p 0.05 \
    --limitKind lkMCMC \
    --nmc 10000 \
    --suffix=_sEff_0.95_scinti_fadc_line_mlp_tanh300_mse_epoch_485000_loss_0.0055_acc_0.9933 \
    --path "" \
    --outpath /home/basti/org/resources/lhood_limits_12_06_23_10k_toys
-> This seems to take about 20 s per chain. 10000 toys might be too many to finish before the meeting at 4pm if we want to update the plots. NOTE: for now I restarted it with only 1000 toys! Yielded:
Expected limit: 6.001089083451825e-21
which is \(g_{ae} g_{aγ} = \SI{7.746e-23}{GeV^{-1}}\).
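The conversion from the reported limit (a limit on \(g_{ae}^2\), computed at fixed \(g_{aγ}\)) to the quoted coupling product is just a square root; a minimal sketch:

```python
import math

def coupling_product(limit_g_ae_sq: float, g_agamma: float = 1e-12) -> float:
    """Convert a limit on g_ae² (computed at fixed g_aγ, in GeV⁻¹)
    into the coupling product g_ae·g_aγ in GeV⁻¹."""
    return math.sqrt(limit_g_ae_sq) * g_agamma

print(coupling_product(6.001089083451825e-21))  # ≈ 7.7467e-23
```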
- [ ] Case 2:
MLP | 0.9718 | true | 0.98 | false | true | 0.8192 | 5.7795e-23 | 7.8596e-23 |
mcmc_limit_calculation \
    limit \
    -f /home/basti/org/resources/lhood_limits_10_05_23_mlp_sEff_0.99/lhood_c18_R2_crAll_sEff_0.99_scinti_fadc_line_mlp_mlp_tanh300_msecheckpoint_epoch_485000_loss_0.0055_acc_0.9933_vQ_0.99.h5 \
    -f /home/basti/org/resources/lhood_limits_10_05_23_mlp_sEff_0.99/lhood_c18_R3_crAll_sEff_0.99_scinti_fadc_line_mlp_mlp_tanh300_msecheckpoint_epoch_485000_loss_0.0055_acc_0.9933_vQ_0.99.h5 \
    --years 2017 --years 2018 \
    --σ_p 0.05 \
    --limitKind lkMCMC \
    --nmc 1000 \
    --suffix=_sEff_0.99_scinti_fadc_line_mlp_tanh300_mse_epoch_485000_loss_0.0055_acc_0.9933 \
    --path "" \
    --outpath /home/basti/org/resources/lhood_limits_12_06_23_10k_toys
- Rerunning with new axion image
- Recompile the limit code with the new axion image, namely: ./resources/axion_images/axion_image_2018_1497.2mm.csv
Now run the calculation again with a new suffix! (sigh, I had started with the full 10k samples :( )
mcmc_limit_calculation \
    limit \
    -f /home/basti/org/resources/lhood_limits_10_05_23_mlp_sEff_0.99/lhood_c18_R2_crAll_sEff_0.95_scinti_fadc_line_mlp_mlp_tanh300_msecheckpoint_epoch_485000_loss_0.0055_acc_0.9933_vQ_0.99.h5 \
    -f /home/basti/org/resources/lhood_limits_10_05_23_mlp_sEff_0.99/lhood_c18_R3_crAll_sEff_0.95_scinti_fadc_line_mlp_mlp_tanh300_msecheckpoint_epoch_485000_loss_0.0055_acc_0.9933_vQ_0.99.h5 \
    --years 2017 --years 2018 \
    --σ_p 0.05 \
    --limitKind lkMCMC \
    --nmc 10000 \
    --suffix=_sEff_0.95_scinti_fadc_line_mlp_tanh300_mse_epoch_485000_loss_0.0055_acc_0.9933_fixed_axion_image_1497.2 \
    --path "" \
    --outpath /home/basti/org/resources/lhood_limits_12_06_23_10k_toys
The result:
Expected limit: 5.789608342190943e-21
implying \(g_{ae} g_{aγ} = \SI{7.609e-23}{GeV^{-1}}\).
In order to redo one of the plots we can follow the following from when we ran the 100k toy sets:
./mcmc_limit_calculation \
    limit --plotFile \
    ~/org/resources/mc_limit_lkMCMC_skInterpBackground_nmc_100000_uncertainty_ukUncertain_σs_0.0469_σb_0.0028_posUncertain_puUncertain_σp_0.0500.csv \
    --xLow 2.5e-21 \
    --xHigh 1.5e-20 \
    --limitKind lkMCMC \
    --yHigh 3000 \
    --bins 100 \
    --linesTo 2000 \
    --xLabel "Limit [g_ae² @ g_aγ = 1e-12 GeV⁻¹]" \
    --yLabel "MC toy count" \
    --nmc 100000
1.54.3. Limit talk for my group
Today at 4pm I gave the talk about the limit method to my colleagues.
The talk as of right now: ./Talks/LimitMethod/limit_method.html (see the commit from today adding the sneak preview to see the talk as given today).
Some takeaways:
- it would have been good to also have some background rate plots & improvements of the vetoes etc
- show the background cluster plot when talking about background rate being position dependent. Unclear otherwise why needed
- better explain how the position nuisance parameter, and the nuisance parameters in general, work
- [ ] Show the likelihood function with nuisance parameters without modification
- [ ] Finish the slide about background interpolation w/ normal distr. weighting
- [ ] fix numbers for scintillator veto
- [ ] remove "in practice" section of candidate sampling for expected limits. Move that to actual candidate sampling thing
- [ ] better explain what the likelihood function looks like when talking about the limit at 95% CDF (seeing the histogram there made it a bit confusing!)
- [ ] Update plot of expected limit w/ many candidates!
- [ ] better explain no candidates in sensitive region?
- [ ] Klaus said I should add information about the time varying uncertainty stuff into the talk
Discussions:
- discussion about the estimate of the efficiencies of septem & line veto. Tobi thinks it should be improved by sampling not from all outer chip data, because events with center chip X-ray like cluster typically are shorter than 2.2 s and therefore they should see slightly less background! -> Discussed with Klaus, it could be improved, but it's not trivial
- Johanna asked about axion image & where the position comes from etc. -> Ideally make energy dependent and even compute the axion flux by sampling from the absorption position distribution
- Jochen wondered about the "feature" in 3 keV interpolated background in the top right (but not quite corner) -> Already there in the data, likely statistics
- Klaus thinks we shouldn't include systematics for the uncertainty of the solar model!
- We discussed improving systematics by taking into account the average position of the Sun over the data taking period!
- Markus asked about simulated events for MLP training
- [X] Send Johanna my notes about how I compute the mean absorption position!
1.55.
- [X] Send Johanna my notes about how I compute the mean absorption position!
- [ ] Potentially take out systematic uncertainty for solar model
- [ ] Think about systematic of Sun ⇔ Earth, better estimate number by using real value as mean.
- [ ] SEE TODOs FROM YESTERDAY
UPDATE: In the end I spent the majority of the day working on the notes for Johanna about the mean conversion point of solar X-rays from axions in the gas. Turns out my previous assumption was wrong after all: not 1.22 cm, but rather about 0.55 cm for the mean and 0.3 cm are realistic numbers. :)

1.56.
We started writing notes on the LLNL telescope for the REST raytracer yesterday and finished them today, here: ./Doc/LLNL_def_REST_format/llnl_def_rest_format.html
In addition, let's now quickly try to generate the binary files required by REST in nio format.
Our LLNL file is generated from ./../CastData/ExternCode/AxionElectronLimit/tools/llnl_layer_reflectivity.nim and lives here: ./../CastData/ExternCode/AxionElectronLimit/resources/llnl_layer_reflectivities.h5
Let's look into it:
import nimhdf5
const path = "/home/basti/CastData/ExternCode/AxionElectronLimit/resources/llnl_layer_reflectivities.h5"
let h5f = H5open(path, "r")
h5f.visit_file()
let grp = h5f["/".grp_str]
for dset in grp:
  echo dset.name, " of shape ", dset.shape
discard h5f.close()
From the raytracing code or the code generating the file we can remind ourselves of the layout of the reflectivity datasets:
let energies = h5f["/Energy", float]
let angles = h5f["/Angles", float]
var reflectivities = newSeq[Interpolator2DType[float]]()
for i in 0 ..< numCoatings:
  let reflDset = h5f[("Reflectivity" & $i).dset_str]
  let data = reflDset[float].toTensor.reshape(reflDset.shape)
  reflectivities.add newBilinearSpline(
    data,
    (angles.min, angles.max),
    (energies.min, energies.max)
  )
newBilinearSpline takes the x limits first and then the y limits, meaning the first dimension is the angles and the second the energies.
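To make that axis convention concrete, here is a tiny pure-Python bilinear lookup using the same ordering (first index = angle, second index = energy); `bilinear` is an illustrative helper, not the Nim API:

```python
def bilinear(data, xlim, ylim, x, y):
    """Bilinear interpolation on data[x_index][y_index], where the first
    index spans xlim (angles here) and the second spans ylim (energies)."""
    nx, ny = len(data), len(data[0])
    fx = (x - xlim[0]) / (xlim[1] - xlim[0]) * (nx - 1)
    fy = (y - ylim[0]) / (ylim[1] - ylim[0]) * (ny - 1)
    i, j = min(int(fx), nx - 2), min(int(fy), ny - 2)
    tx, ty = fx - i, fy - j
    return ((1 - tx) * (1 - ty) * data[i][j] + tx * (1 - ty) * data[i + 1][j]
            + (1 - tx) * ty * data[i][j + 1] + tx * ty * data[i + 1][j + 1])

# data[angle][energy]: 2 angle rows, 3 energy columns
data = [[0.0, 1.0, 2.0],
        [10.0, 11.0, 12.0]]
# Halfway between the two angle rows, at the lowest energy:
print(bilinear(data, (0.0, 1.5), (0.03, 15.0), 0.75, 0.03))  # 5.0
```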
Given that we have 4 different coatings:
let m1 = drp.Multilayer(MultilayerType="DepthGraded", SubstrateMaterial="SiO2", D_min = 11.5, D_max = 22.5, Gamma = 0.45, C = 1.0, LayerMaterial=["Pt", "C"], Repetition=2, SigmaValues=[1.0])
let m2 = drp.Multilayer(MultilayerType="DepthGraded", SubstrateMaterial="SiO2", D_min = 7.0, D_max = 19.0, Gamma = 0.45, C = 1.0, LayerMaterial=["Pt", "C"], Repetition=3, SigmaValues=[1.0])
let m3 = drp.Multilayer(MultilayerType="DepthGraded", SubstrateMaterial="SiO2", D_min = 5.5, D_max = 16.0, Gamma = 0.4, C = 1.0, LayerMaterial=["Pt", "C"], Repetition=4, SigmaValues=[1.0])
let m4 = drp.Multilayer(MultilayerType="DepthGraded", SubstrateMaterial="SiO2", D_min = 5.0, D_max = 14.0, Gamma = 0.4, C = 1.0, LayerMaterial=["Pt", "C"], Repetition=5, SigmaValues=[1.0])
we need to generate 4 different binary files for REST.
Our data has 1000x1000 elements, meaning our filename will end in .N1000f.
First let's download a file from REST though and open it with nio.
cd /t
wget https://github.com/rest-for-physics/axionlib-data/raw/master/opticsMirror/Reflectivity_Single_Au_250_Ni_0.4.N901f
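Independently of nio, these files are just flat row-major float32 tables (the `.N<ncols>f` suffix encodes the row width), so they can be inspected with a few lines of Python. A sketch that round-trips a tiny synthetic table (hypothetical file name, not the downloaded one):

```python
import array, os, tempfile

def load_rows(path, ncols):
    """Read a flat row-major float32 binary file (REST .N<ncols>f layout)
    into a list of rows of ncols values each."""
    buf = array.array("f")
    with open(path, "rb") as f:
        buf.frombytes(f.read())
    assert len(buf) % ncols == 0, "file size must be a multiple of the row width"
    return [buf[i * ncols:(i + 1) * ncols] for i in range(len(buf) // ncols)]

# Round-trip demo: 3 rows x 4 columns, values exactly representable in float32.
rows_in = [[float(r * 10 + c) for c in range(4)] for r in range(3)]
path = os.path.join(tempfile.gettempdir(), "demo.N4f")
with open(path, "wb") as f:
    f.write(array.array("f", [v for row in rows_in for v in row]).tobytes())

rows_out = load_rows(path, 4)
print(len(rows_out), list(rows_out[1]))  # 3 [10.0, 11.0, 12.0, 13.0]
```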
import nio
const Size = 1000 #901
#const path = "/tmp/Reflectivity_Single_Au_250_Ni_0.4.N901f"
const path = "/tmp/R1.N1000f"
#let fa = initFileArray[float32](path)
#echo fa
#let mm = mOpen(path)
#echo mm
let fa = load[array[Size, float32]](path)
#echo fa[0]
echo fa
import ggplotnim, sequtils
block Angle:
  let df = toDf({"x" : toSeq(0 ..< Size), "y" : fa[0].mapIt(it.float)})
  ggplot(df, aes("x", "y")) +
    geom_line() +
    ggsave("/tmp/test_angle.pdf")
block Refl:
  var refl = newSeq[float]()
  var i = 0
  for row in fa:
    refl.add row[400]
    echo "I = ", i
    inc i
  echo fa.len
  echo refl.len
  let df = toDf({"x" : toSeq(0 ..< fa.len), "y" : refl})
  echo df
  ggplot(df, aes("x", "y")) +
    geom_line() +
    ggsave("/tmp/test_refl.pdf")
for j, row in fa:
  echo "I = ", j
  echo row[0 ..< 100]
for x in countup(0, 10, 1):
  echo x
What we saw from the above:
- it's the transpose of our data: each row is all energies for one angle, whereas ours is all angles for one energy
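That observation is exactly a transpose; a toy illustration in Python of swapping the [energy][angle] layout to [angle][energy]:

```python
# Our HDF5 layout indexes as data[energy][angle]; REST wants data[angle][energy].
# Tag each element with its (energy, angle) indices to make the swap visible:
ours = [[(e, a) for a in range(3)] for e in range(2)]  # 2 energies x 3 angles
rest = [list(row) for row in zip(*ours)]               # 3 angles x 2 energies

print(rest[2][1])  # element for angle index 2, energy index 1 -> (1, 2)
```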
Now let's save each reflectivity file using nio. We read it, transform it into arrays of the right size, and write.
import nimhdf5, strutils, sequtils, arraymancer
import nio
const path = "/home/basti/CastData/ExternCode/AxionElectronLimit/resources/llnl_layer_reflectivities.h5"
let h5f = H5open(path, "r")
h5f.visit_file()
let grp = h5f["/".grp_str]
for dset in grp:
  echo dset.name, " of shape ", dset.shape
  if "Reflectivity" in dset.name:
    let data = toTensor(h5f[dset.name, float])
      .asType(float32) # convert to float32
      .reshape(dset.shape)
      .transpose # transpose our data
    echo data.shape
    let dataS = data.toSeq2D
    echo dataS.shape
    # convert to seq[array]
    var dataA = newSeq[array[1000, float32]](1000)
    for i in 0 ..< dataS.len:
      copyMem(dataA[i][0].addr, dataS[i][0].addr, 1000 * sizeof(float32))
    dataA.save "/tmp/R1"
discard h5f.close()
The above is enough to correctly generate the data files. However, the range of the files used by REST does not match what our data needs.
The README for REST about the data files says:
The following directory contains data files with X-ray reflectivity pre-downloaded data from the https://henke.lbl.gov/ database. These files will be generated by the TRestAxionOpticsMirror metadata class. The files will be used by that class to quickly load reflectivity data in memory, in case the requested optics properties are already available at this database.
See TRestAxionOpticsMirror documentation for further details on how to generate or load these datasets.
The file is basically a table with 501 rows, each row corresponding to an energy, starting at 30eV in increments of 30eV. The last row corresponds with 15keV. The number of columns is 901, describing the data as a function of the angle of incidence in the range between 0 and 9 degrees with 0.01 degree precision
which tells us in what range we need to generate the data.
Our data:
- θ = (0, 1.5)°
- E = (0.03, 15) keV
- 1000x1000
REST:
- θ = (0, 9)°
- E = (0.03, 15) keV
- 901x500
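Before regenerating, the grid spacing and expected file size per coating can be cross-checked directly; a sketch (note the REST README quotes 501 rows, while our generation uses 500 energy points from 0.03 to 15 keV):

```python
# Grid parameters for the REST-compatible reflectivity files.
n_energy, n_angle = 500, 901

e_step = 15.0 / n_energy             # keV per step -> 0.03 keV = 30 eV
a_step = 9.0 / (n_angle - 1)         # degrees per step -> 0.01°
size_bytes = n_energy * n_angle * 4  # float32 entries -> bytes per file

print(e_step, a_step, size_bytes)
```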
So let's adjust our data generation script and rerun it with the correct ranges and number of points:
I've changed the generator code such that it accepts arguments:
Usage:
  main [optional-params]
Options:
  -h, --help                                           print this cligen-erated help
  --help-syntax                                        advanced: prepend,plurals,..
  -e=, --energyMin=  keV     0.03 keV                  set energyMin
  --energyMax=       keV     15 keV                    set energyMax
  -n=, --numEnergy=  int     1000                      set numEnergy
  -a=, --angleMin=   °       0 °                       set angleMin
  --angleMax=        °       1.5 °                     set angleMax
  --numAngle=        int     1000                      set numAngle
  -o=, --outfile=    string  "llnl_layer_reflectivities.h5"  set outfile
  --outpath=         string  "../resources/"           set outpath
./llnl_layer_reflectivity \
    --numEnergy 500 \
    --angleMax 9.0.° \
    --numAngle 901 \
    --outfile llnl_layer_reflectivities_rest.h5 \
    --outpath /home/basti/org/resources/
Having generated the correct file we can now use the above snippet to construct the binary files using nio.
import nimhdf5, strutils, sequtils, arraymancer
import nio
const path = "/home/basti/org/resources/llnl_layer_reflectivities_rest.h5"
let h5f = H5open(path, "r")
h5f.visit_file()
let grp = h5f["/".grp_str]
for dset in grp:
  echo dset.name, " of shape ", dset.shape
  if "Reflectivity" in dset.name:
    let data = toTensor(h5f[dset.name, float])
      .asType(float32) # convert to float32
      .reshape(dset.shape)
      .transpose # transpose our data
    echo data.shape
    let dataS = data.toSeq2D
    echo dataS.shape
    # convert to seq[array]
    var dataA = newSeq[array[901, float32]](500)
    for i in 0 ..< dataS.len:
      copyMem(dataA[i][0].addr, dataS[i][0].addr, 901 * sizeof(float32))
    let name = dset.name
    dataA.save "/tmp/" & name
discard h5f.close()
Let's read one of the files using nio again to check:
import nio
const Size = 901
const path = "/tmp/Reflectivity0.N901f"
let fa = load[array[Size, float32]](path)
echo fa
import ggplotnim, sequtils
block Angle:
  let df = toDf({"x" : toSeq(0 ..< Size), "y" : fa[0].mapIt(it.float)})
  ggplot(df, aes("x", "y")) +
    geom_line() +
    ggsave("/tmp/test_angle.pdf")
block Refl:
  var refl = newSeq[float]()
  var i = 0
  for row in fa:
    refl.add row[50]
    inc i
  let df = toDf({"x" : toSeq(0 ..< fa.len), "y" : refl})
  ggplot(df, aes("x", "y")) +
    geom_line() +
    ggsave("/tmp/test_refl.pdf")
1.57.
Let's finally pick up where we left off, namely finishing up the limit talk for the CAST collaboration.
UPDATE: While explaining things to Cristina I noticed that our assumption about the LLNL multilayer was flawed. The thesis and paper always talk about a Pt/C coating, but this actually means the carbon is at the top and not at the bottom (see fig. 4.11 in the thesis). So now I'll regenerate all the files for REST as well as the one we use and update our code.
Updated the layer in code and time to rerun:
./llnl_layer_reflectivity \
    --numEnergy 500 \
    --angleMax 9.0.° \
    --numAngle 901 \
    --outfile llnl_layer_reflectivities_rest.h5 \
    --outpath /tmp/
and now to regenerate the files:
import nimhdf5, strutils, sequtils, arraymancer
import nio
const path = "/home/basti/org/resources/llnl_layer_reflectivities_rest.h5"
let h5f = H5open(path, "r")
h5f.visit_file()
let grp = h5f["/".grp_str]
for dset in grp:
  echo dset.name, " of shape ", dset.shape
  if "Reflectivity" in dset.name:
    let data = toTensor(h5f[dset.name, float])
      .asType(float32) # convert to float32
      .reshape(dset.shape)
      .transpose # transpose our data
    echo data.shape
    let dataS = data.toSeq2D
    echo dataS.shape
    # convert to seq[array]
    var dataA = newSeq[array[901, float32]](500)
    for i in 0 ..< dataS.len:
      copyMem(dataA[i][0].addr, dataS[i][0].addr, 901 * sizeof(float32))
    let name = dset.name
    dataA.save "/tmp/" & name
discard h5f.close()
I renamed them
basti at void in /t λ mv Reflectivity0.N901f Reflectivity_Multilayer_Pt_C_LLNL_layers_1,2,3.N901f
basti at void in /t λ mv Reflectivity1.N901f Reflectivity_Multilayer_Pt_C_LLNL_layers_4,5,6.N901f
basti at void in /t λ mv Reflectivity2.N901f Reflectivity_Multilayer_Pt_C_LLNL_layers_7,8,9,10.N901f
basti at void in /t λ mv Reflectivity3.N901f Reflectivity_Multilayer_Pt_C_LLNL_layers_11,12.N901f
basti at void in /t λ cp Reflectivity_Multilayer_Pt_C_LLNL_layers_* ~/src/axionlib-data/opticsMirror/
and time to update the PR.
1.58.
For the talk about the limit calculation method I need two new plots:
- background rate with only LnL & MLP
- background rate with LnL + each different veto
and the latest numbers for the background rate achieved.
For that we need the latest MLP, from ./resources/lhood_limits_10_05_23_mlp_sEff_0.99/ (on desktop!)
First the background rate of MLP @ 99% (97% on real data) and LnL without any vetoes:
NOTE: In the naming below we use the efficiencies more closely matching the real efficiencies based on the CDL data instead of the target based on simulated events!
plotBackgroundRate \
    ~/org/resources/lhood_limits_10_05_23_mlp_sEff_0.99/lhood_c18_R2_crGold_sEff_0.99_mlp_mlp_tanh300_msecheckpoint_epoch_485000_loss_0.0055_acc_0.9933.h5 \
    ~/org/resources/lhood_limits_10_05_23_mlp_sEff_0.99/lhood_c18_R3_crGold_sEff_0.99_mlp_mlp_tanh300_msecheckpoint_epoch_485000_loss_0.0055_acc_0.9933.h5 \
    ~/org/resources/lhood_limits_automation_correct_duration/likelihood_cdl2018_Run2_crGold.h5 \
    ~/org/resources/lhood_limits_automation_correct_duration/likelihood_cdl2018_Run3_crGold.h5 \
    --names "MLP@97" --names "MLP@97" --names "LnL@80" --names "LnL@80" \
    --centerChip 3 \
    --title "Background rate CAST, LnL@80%, SGD tanh300 MLE MLP@97% + vetoes" \
    --showNumClusters \
    --showTotalTime \
    --topMargin 1.5 \
    --energyDset energyFromCharge \
    --energyMin 0.2 \
    --outfile background_rate_run2_3_mlp_0.99_no_vetoes.pdf \
    --outpath ~/Sync/limitMethod/ \
    --quiet
[INFO]:Dataset: LnL@80
[INFO]: Integrated background rate in range: 0.0 .. 12.0: 2.2795e-04 cm⁻²·s⁻¹
[INFO]: Integrated background rate/keV in range: 0.0 .. 12.0: 1.8996e-05 keV⁻¹·cm⁻²·s⁻¹
[INFO]:Dataset: MLP@97
[INFO]: Integrated background rate in range: 0.0 .. 12.0: 2.6735e-04 cm⁻²·s⁻¹
[INFO]: Integrated background rate/keV in range: 0.0 .. 12.0: 2.2279e-05 keV⁻¹·cm⁻²·s⁻¹
[INFO]:Dataset: LnL@80
[INFO]: Integrated background rate in range: 0.5 .. 2.5: 6.2088e-05 cm⁻²·s⁻¹
[INFO]: Integrated background rate/keV in range: 0.5 .. 2.5: 3.1044e-05 keV⁻¹·cm⁻²·s⁻¹
[INFO]:Dataset: MLP@97
[INFO]: Integrated background rate in range: 0.5 .. 2.5: 4.9952e-05 cm⁻²·s⁻¹
[INFO]: Integrated background rate/keV in range: 0.5 .. 2.5: 2.4976e-05 keV⁻¹·cm⁻²·s⁻¹
[INFO]:Dataset: LnL@80
[INFO]: Integrated background rate in range: 0.5 .. 5.0: 1.1626e-04 cm⁻²·s⁻¹
[INFO]: Integrated background rate/keV in range: 0.5 .. 5.0: 2.5836e-05 keV⁻¹·cm⁻²·s⁻¹
[INFO]:Dataset: MLP@97
[INFO]: Integrated background rate in range: 0.5 .. 5.0: 1.1661e-04 cm⁻²·s⁻¹
[INFO]: Integrated background rate/keV in range: 0.5 .. 5.0: 2.5914e-05 keV⁻¹·cm⁻²·s⁻¹
[INFO]:Dataset: LnL@80
[INFO]: Integrated background rate in range: 0.0 .. 2.5: 8.4954e-05 cm⁻²·s⁻¹
[INFO]: Integrated background rate/keV in range: 0.0 .. 2.5: 3.3982e-05 keV⁻¹·cm⁻²·s⁻¹
[INFO]:Dataset: MLP@97
[INFO]: Integrated background rate in range: 0.0 .. 2.5: 9.1110e-05 cm⁻²·s⁻¹
[INFO]: Integrated background rate/keV in range: 0.0 .. 2.5: 3.6444e-05 keV⁻¹·cm⁻²·s⁻¹
[INFO]:Dataset: LnL@80
[INFO]: Integrated background rate in range: 4.0 .. 8.0: 2.6383e-05 cm⁻²·s⁻¹
[INFO]: Integrated background rate/keV in range: 4.0 .. 8.0: 6.5958e-06 keV⁻¹·cm⁻²·s⁻¹
[INFO]:Dataset: MLP@97
[INFO]: Integrated background rate in range: 4.0 .. 8.0: 3.2012e-05 cm⁻²·s⁻¹
[INFO]: Integrated background rate/keV in range: 4.0 .. 8.0: 8.0029e-06 keV⁻¹·cm⁻²·s⁻¹
[INFO]:Dataset: LnL@80
[INFO]: Integrated background rate in range: 0.0 .. 8.0: 1.6076e-04 cm⁻²·s⁻¹
[INFO]: Integrated background rate/keV in range: 0.0 .. 8.0: 2.0095e-05 keV⁻¹·cm⁻²·s⁻¹
[INFO]:Dataset: MLP@97
[INFO]: Integrated background rate in range: 0.0 .. 8.0: 1.8310e-04 cm⁻²·s⁻¹
[INFO]: Integrated background rate/keV in range: 0.0 .. 8.0: 2.2887e-05 keV⁻¹·cm⁻²·s⁻¹
[INFO]:Dataset: LnL@80
[INFO]: Integrated background rate in range: 2.0 .. 8.0: 8.1788e-05 cm⁻²·s⁻¹
[INFO]: Integrated background rate/keV in range: 2.0 .. 8.0: 1.3631e-05 keV⁻¹·cm⁻²·s⁻¹
[INFO]:Dataset: MLP@97
[INFO]: Integrated background rate in range: 2.0 .. 8.0: 9.6563e-05 cm⁻²·s⁻¹
[INFO]: Integrated background rate/keV in range: 2.0 .. 8.0: 1.6094e-05 keV⁻¹·cm⁻²·s⁻¹
results in the plot:
and including all vetoes for both, as a reference:
plotBackgroundRate \
    ~/org/resources/lhood_limits_10_05_23_mlp_sEff_0.99/lhood_c18_R2_crGold_sEff_0.99_mlp_mlp_tanh300_msecheckpoint_epoch_485000_loss_0.0055_acc_0.9933.h5 \
    ~/org/resources/lhood_limits_10_05_23_mlp_sEff_0.99/lhood_c18_R3_crGold_sEff_0.99_mlp_mlp_tanh300_msecheckpoint_epoch_485000_loss_0.0055_acc_0.9933.h5 \
    ~/org/resources/lhood_limits_10_05_23_mlp_sEff_0.99/lhood_c18_R2_crGold_sEff_0.99_scinti_fadc_septem_line_mlp_mlp_tanh300_msecheckpoint_epoch_485000_loss_0.0055_acc_0.9933_vQ_0.99.h5 \
    ~/org/resources/lhood_limits_10_05_23_mlp_sEff_0.99/lhood_c18_R3_crGold_sEff_0.99_scinti_fadc_septem_line_mlp_mlp_tanh300_msecheckpoint_epoch_485000_loss_0.0055_acc_0.9933_vQ_0.99.h5 \
    ~/org/resources/lhood_limits_automation_correct_duration/likelihood_cdl2018_Run2_crGold.h5 \
    ~/org/resources/lhood_limits_automation_correct_duration/likelihood_cdl2018_Run3_crGold.h5 \
    ~/org/resources/lhood_limits_automation_correct_duration/likelihood_cdl2018_Run2_crGold_scinti_fadc_line_vetoPercentile_0.99.h5 \
    ~/org/resources/lhood_limits_automation_correct_duration/likelihood_cdl2018_Run3_crGold_scinti_fadc_line_vetoPercentile_0.99.h5 \
    --names "MLP@97" --names "MLP@97" --names "MLP@97+V" --names "MLP@97+V" --names "LnL@80" --names "LnL@80" --names "LnL@80+V" --names "LnL@80+V" \
    --centerChip 3 \
    --title "Background rate CAST, LnL@80%, SGD tanh300 MLE MLP@97% + vetoes" \
    --showNumClusters \
    --showTotalTime \
    --topMargin 1.5 \
    --energyDset energyFromCharge \
    --energyMin 0.2 \
    --outfile background_rate_run2_3_mlp_0.99_plus_vetoes.pdf \
    --outpath ~/Sync/limitMethod/ \
    --quiet
[INFO]:Dataset: LnL@80
[INFO]: Integrated background rate in range: 0.2 .. 12.0: 2.2566e-04 cm⁻²·s⁻¹
[INFO]: Integrated background rate/keV in range: 0.2 .. 12.0: 1.9124e-05 keV⁻¹·cm⁻²·s⁻¹
[INFO]:Dataset: LnL@80+V
[INFO]: Integrated background rate in range: 0.2 .. 12.0: 1.1239e-04 cm⁻²·s⁻¹
[INFO]: Integrated background rate/keV in range: 0.2 .. 12.0: 9.5248e-06 keV⁻¹·cm⁻²·s⁻¹
[INFO]:Dataset: MLP@97
[INFO]: Integrated background rate in range: 0.2 .. 12.0: 2.5838e-04 cm⁻²·s⁻¹
[INFO]: Integrated background rate/keV in range: 0.2 .. 12.0: 2.1897e-05 keV⁻¹·cm⁻²·s⁻¹
[INFO]:Dataset: MLP@97+V
[INFO]: Integrated background rate in range: 0.2 .. 12.0: 1.5302e-04 cm⁻²·s⁻¹
[INFO]: Integrated background rate/keV in range: 0.2 .. 12.0: 1.2968e-05 keV⁻¹·cm⁻²·s⁻¹
[INFO]:Dataset: LnL@80
[INFO]: Integrated background rate in range: 0.5 .. 2.5: 6.2088e-05 cm⁻²·s⁻¹
[INFO]: Integrated background rate/keV in range: 0.5 .. 2.5: 3.1044e-05 keV⁻¹·cm⁻²·s⁻¹
[INFO]:Dataset: LnL@80+V
[INFO]: Integrated background rate in range: 0.5 .. 2.5: 1.4071e-05 cm⁻²·s⁻¹
[INFO]: Integrated background rate/keV in range: 0.5 .. 2.5: 7.0355e-06 keV⁻¹·cm⁻²·s⁻¹
[INFO]:Dataset: MLP@97
[INFO]: Integrated background rate in range: 0.5 .. 2.5: 4.9952e-05 cm⁻²·s⁻¹
[INFO]: Integrated background rate/keV in range: 0.5 .. 2.5: 2.4976e-05 keV⁻¹·cm⁻²·s⁻¹
[INFO]:Dataset: MLP@97+V
[INFO]: Integrated background rate in range: 0.5 .. 2.5: 1.6182e-05 cm⁻²·s⁻¹
[INFO]: Integrated background rate/keV in range: 0.5 .. 2.5: 8.0909e-06 keV⁻¹·cm⁻²·s⁻¹
[INFO]:Dataset: LnL@80
[INFO]: Integrated background rate in range: 0.5 .. 5.0: 1.1626e-04 cm⁻²·s⁻¹
[INFO]: Integrated background rate/keV in range: 0.5 .. 5.0: 2.5836e-05 keV⁻¹·cm⁻²·s⁻¹
[INFO]:Dataset: LnL@80+V
[INFO]: Integrated background rate in range: 0.5 .. 5.0: 4.4500e-05 cm⁻²·s⁻¹
[INFO]: Integrated background rate/keV in range: 0.5 .. 5.0: 9.8888e-06 keV⁻¹·cm⁻²·s⁻¹
[INFO]:Dataset: MLP@97
[INFO]: Integrated background rate in range: 0.5 .. 5.0: 1.1661e-04 cm⁻²·s⁻¹
[INFO]: Integrated background rate/keV in range: 0.5 .. 5.0: 2.5914e-05 keV⁻¹·cm⁻²·s⁻¹
[INFO]:Dataset: MLP@97+V
[INFO]: Integrated background rate in range: 0.5 .. 5.0: 6.0154e-05 cm⁻²·s⁻¹
[INFO]: Integrated background rate/keV in range: 0.5 .. 5.0: 1.3367e-05 keV⁻¹·cm⁻²·s⁻¹
[INFO]:Dataset: LnL@80
[INFO]: Integrated background rate in range: 0.2 .. 2.5: 8.2667e-05 cm⁻²·s⁻¹
[INFO]: Integrated background rate/keV in range: 0.2 .. 2.5: 3.5942e-05 keV⁻¹·cm⁻²·s⁻¹
[INFO]:Dataset: LnL@80+V
[INFO]: Integrated background rate in range: 0.2 .. 2.5: 2.0051e-05 cm⁻²·s⁻¹
[INFO]: Integrated background rate/keV in range: 0.2 .. 2.5: 8.7179e-06 keV⁻¹·cm⁻²·s⁻¹
[INFO]:Dataset: MLP@97
[INFO]: Integrated background rate in range: 0.2 .. 2.5: 8.2140e-05 cm⁻²·s⁻¹
[INFO]: Integrated background rate/keV in range: 0.2 .. 2.5: 3.5713e-05 keV⁻¹·cm⁻²·s⁻¹
[INFO]:Dataset: MLP@97+V
[INFO]: Integrated background rate in range: 0.2 .. 2.5: 2.9725e-05 cm⁻²·s⁻¹
[INFO]: Integrated background rate/keV in range: 0.2 .. 2.5: 1.2924e-05 keV⁻¹·cm⁻²·s⁻¹
[INFO]:Dataset: LnL@80
[INFO]: Integrated background rate in range: 4.0 .. 8.0: 2.6383e-05 cm⁻²·s⁻¹
[INFO]: Integrated background rate/keV in range: 4.0 .. 8.0: 6.5958e-06 keV⁻¹·cm⁻²·s⁻¹
[INFO]:Dataset: LnL@80+V
[INFO]: Integrated background rate in range: 4.0 .. 8.0: 1.3543e-05 cm⁻²·s⁻¹
[INFO]: Integrated background rate/keV in range: 4.0 .. 8.0: 3.3858e-06 keV⁻¹·cm⁻²·s⁻¹
[INFO]:Dataset: MLP@97
[INFO]: Integrated background rate in range: 4.0 .. 8.0: 3.2012e-05 cm⁻²·s⁻¹
[INFO]: Integrated background rate/keV in range: 4.0 .. 8.0: 8.0029e-06 keV⁻¹·cm⁻²·s⁻¹
[INFO]:Dataset: MLP@97+V
[INFO]: Integrated background rate in range: 4.0 .. 8.0: 1.9524e-05 cm⁻²·s⁻¹
[INFO]: Integrated background rate/keV in range: 4.0 .. 8.0: 4.8809e-06 keV⁻¹·cm⁻²·s⁻¹
[INFO]:Dataset: LnL@80
[INFO]: Integrated background rate in range: 0.2 .. 8.0: 1.5848e-04 cm⁻²·s⁻¹
[INFO]: Integrated background rate/keV in range: 0.2 .. 8.0: 2.0317e-05 keV⁻¹·cm⁻²·s⁻¹
[INFO]:Dataset: LnL@80+V
[INFO]: Integrated background rate in range: 0.2 .. 8.0: 6.2264e-05 cm⁻²·s⁻¹
[INFO]: Integrated background rate/keV in range: 0.2 .. 8.0: 7.9826e-06 keV⁻¹·cm⁻²·s⁻¹
[INFO]:Dataset: MLP@97
[INFO]: Integrated background rate in range: 0.2 .. 8.0: 1.7413e-04 cm⁻²·s⁻¹
[INFO]: Integrated background rate/keV in range: 0.2 .. 8.0: 2.2324e-05 keV⁻¹·cm⁻²·s⁻¹
[INFO]:Dataset: MLP@97+V
[INFO]: Integrated background rate in range: 0.2 .. 8.0: 8.8823e-05 cm⁻²·s⁻¹
[INFO]: Integrated background rate/keV in range: 0.2 .. 8.0: 1.1388e-05 keV⁻¹·cm⁻²·s⁻¹
[INFO]:Dataset: LnL@80
[INFO]: Integrated background rate in range: 2.0 .. 8.0: 8.1788e-05 cm⁻²·s⁻¹
[INFO]: Integrated background rate/keV in range: 2.0 .. 8.0: 1.3631e-05 keV⁻¹·cm⁻²·s⁻¹
[INFO]:Dataset: LnL@80+V
[INFO]: Integrated background rate in range: 2.0 .. 8.0: 4.4324e-05 cm⁻²·s⁻¹
[INFO]: Integrated background rate/keV in range: 2.0 .. 8.0: 7.3873e-06 keV⁻¹·cm⁻²·s⁻¹
[INFO]:Dataset: MLP@97
[INFO]: Integrated background rate in range: 2.0 .. 8.0: 9.6563e-05 cm⁻²·s⁻¹
[INFO]: Integrated background rate/keV in range: 2.0 .. 8.0: 1.6094e-05 keV⁻¹·cm⁻²·s⁻¹
[INFO]:Dataset: MLP@97+V
[INFO]: Integrated background rate in range: 2.0 .. 8.0: 6.1385e-05 cm⁻²·s⁻¹
[INFO]: Integrated background rate/keV in range: 2.0 .. 8.0: 1.0231e-05 keV⁻¹·cm⁻²·s⁻¹
And further for reference the MLP at 85%, which corresponds roughly to the 80% efficiency of the LnL:
plotBackgroundRate \
    ~/org/resources/lhood_limits_10_05_23_mlp_sEff_0.99/lhood_c18_R2_crGold_sEff_0.85_mlp_mlp_tanh300_msecheckpoint_epoch_485000_loss_0.0055_acc_0.9933.h5 \
    ~/org/resources/lhood_limits_10_05_23_mlp_sEff_0.99/lhood_c18_R3_crGold_sEff_0.85_mlp_mlp_tanh300_msecheckpoint_epoch_485000_loss_0.0055_acc_0.9933.h5 \
    ~/org/resources/lhood_limits_10_05_23_mlp_sEff_0.99/lhood_c18_R2_crGold_sEff_0.85_scinti_fadc_septem_line_mlp_mlp_tanh300_msecheckpoint_epoch_485000_loss_0.0055_acc_0.9933_vQ_0.99.h5 \
    ~/org/resources/lhood_limits_10_05_23_mlp_sEff_0.99/lhood_c18_R3_crGold_sEff_0.85_scinti_fadc_septem_line_mlp_mlp_tanh300_msecheckpoint_epoch_485000_loss_0.0055_acc_0.9933_vQ_0.99.h5 \
    ~/org/resources/lhood_limits_automation_correct_duration/likelihood_cdl2018_Run2_crGold.h5 \
    ~/org/resources/lhood_limits_automation_correct_duration/likelihood_cdl2018_Run3_crGold.h5 \
    ~/org/resources/lhood_limits_automation_correct_duration/likelihood_cdl2018_Run2_crGold_scinti_fadc_line_vetoPercentile_0.99.h5 \
    ~/org/resources/lhood_limits_automation_correct_duration/likelihood_cdl2018_Run3_crGold_scinti_fadc_line_vetoPercentile_0.99.h5 \
    --names "MLP@80" --names "MLP@80" --names "MLP@80+V" --names "MLP@80+V" --names "LnL@80" --names "LnL@80" --names "LnL@80+V" --names "LnL@80+V" \
    --centerChip 3 \
    --title "Background rate CAST, LnL@80%, SGD tanh300 MLE MLP@80% + vetoes" \
    --showNumClusters \
    --showTotalTime \
    --topMargin 1.5 \
    --energyDset energyFromCharge \
    --energyMin 0.2 \
    --outfile background_rate_run2_3_mlp_0.85_plus_vetoes.pdf \
    --outpath ~/Sync/limitMethod/ \
    --quiet
[INFO]:Dataset: LnL@80
[INFO]: Integrated background rate in range: 0.2 .. 12.0: 2.2566e-04 cm⁻²·s⁻¹
[INFO]: Integrated background rate/keV in range: 0.2 .. 12.0: 1.9124e-05 keV⁻¹·cm⁻²·s⁻¹
[INFO]:Dataset: LnL@80+V
[INFO]: Integrated background rate in range: 0.2 .. 12.0: 1.1239e-04 cm⁻²·s⁻¹
[INFO]: Integrated background rate/keV in range: 0.2 .. 12.0: 9.5248e-06 keV⁻¹·cm⁻²·s⁻¹
[INFO]:Dataset: MLP@80
[INFO]: Integrated background rate in range: 0.2 .. 12.0: 1.5408e-04 cm⁻²·s⁻¹
[INFO]: Integrated background rate/keV in range: 0.2 .. 12.0: 1.3057e-05 keV⁻¹·cm⁻²·s⁻¹
[INFO]:Dataset: MLP@80+V
[INFO]: Integrated background rate in range: 0.2 .. 12.0: 1.0747e-04 cm⁻²·s⁻¹
[INFO]: Integrated background rate/keV in range: 0.2 .. 12.0: 9.1074e-06 keV⁻¹·cm⁻²·s⁻¹
[INFO]:Dataset: LnL@80
[INFO]: Integrated background rate in range: 0.5 .. 2.5: 6.2088e-05 cm⁻²·s⁻¹
[INFO]: Integrated background rate/keV in range: 0.5 .. 2.5: 3.1044e-05 keV⁻¹·cm⁻²·s⁻¹
[INFO]:Dataset: LnL@80+V
[INFO]: Integrated background rate in range: 0.5 .. 2.5: 1.4071e-05 cm⁻²·s⁻¹
[INFO]: Integrated background rate/keV in range: 0.5 .. 2.5: 7.0355e-06 keV⁻¹·cm⁻²·s⁻¹
[INFO]:Dataset: MLP@80
[INFO]: Integrated background rate in range: 0.5 .. 2.5: 1.9348e-05 cm⁻²·s⁻¹
[INFO]: Integrated background rate/keV in range: 0.5 .. 2.5: 9.6738e-06 keV⁻¹·cm⁻²·s⁻¹
[INFO]:Dataset: MLP@80+V
[INFO]: Integrated background rate in range: 0.5 .. 2.5: 1.0026e-05 cm⁻²·s⁻¹
[INFO]: Integrated background rate/keV in range: 0.5 .. 2.5: 5.0128e-06 keV⁻¹·cm⁻²·s⁻¹
[INFO]:Dataset: LnL@80
[INFO]: Integrated background rate in range: 0.5 .. 5.0: 1.1626e-04 cm⁻²·s⁻¹
[INFO]: Integrated background rate/keV in range: 0.5 .. 5.0: 2.5836e-05 keV⁻¹·cm⁻²·s⁻¹
[INFO]:Dataset: LnL@80+V
[INFO]: Integrated background rate in range: 0.5 .. 5.0: 4.4500e-05 cm⁻²·s⁻¹
[INFO]: Integrated background rate/keV in range: 0.5 .. 5.0: 9.8888e-06 keV⁻¹·cm⁻²·s⁻¹
[INFO]:Dataset: MLP@80
[INFO]: Integrated background rate in range: 0.5 .. 5.0: 6.2968e-05 cm⁻²·s⁻¹
[INFO]: Integrated background rate/keV in range: 0.5 .. 5.0: 1.3993e-05 keV⁻¹·cm⁻²·s⁻¹
[INFO]:Dataset: MLP@80+V
[INFO]: Integrated background rate in range: 0.5 .. 5.0: 4.0454e-05 cm⁻²·s⁻¹
[INFO]: Integrated background rate/keV in range: 0.5 .. 5.0: 8.9898e-06 keV⁻¹·cm⁻²·s⁻¹
[INFO]:Dataset: LnL@80
[INFO]: Integrated background rate in range: 0.2 .. 2.5: 8.2667e-05 cm⁻²·s⁻¹
[INFO]: Integrated background rate/keV in range: 0.2 .. 2.5: 3.5942e-05 keV⁻¹·cm⁻²·s⁻¹
[INFO]:Dataset: LnL@80+V
[INFO]: Integrated background rate in range: 0.2 .. 2.5: 2.0051e-05 cm⁻²·s⁻¹
[INFO]: Integrated background rate/keV in range: 0.2 .. 2.5: 8.7179e-06 keV⁻¹·cm⁻²·s⁻¹
[INFO]:Dataset: MLP@80
[INFO]: Integrated background rate in range: 0.2 .. 2.5: 3.1132e-05 cm⁻²·s⁻¹
[INFO]: Integrated background rate/keV in range: 0.2 .. 2.5: 1.3536e-05 keV⁻¹·cm⁻²·s⁻¹
[INFO]:Dataset: MLP@80+V
[INFO]: Integrated background rate in range: 0.2 .. 2.5: 1.5478e-05 cm⁻²·s⁻¹
[INFO]: Integrated background rate/keV in range: 0.2 .. 2.5: 6.7296e-06 keV⁻¹·cm⁻²·s⁻¹
[INFO]:Dataset: LnL@80
[INFO]: Integrated background rate in range: 4.0 .. 8.0: 2.6383e-05 cm⁻²·s⁻¹
[INFO]: Integrated background rate/keV in range: 4.0 .. 8.0: 6.5958e-06 keV⁻¹·cm⁻²·s⁻¹
[INFO]:Dataset: LnL@80+V
[INFO]: Integrated background rate in range: 4.0 .. 8.0: 1.3543e-05 cm⁻²·s⁻¹
[INFO]: Integrated background rate/keV in range: 4.0 .. 8.0: 3.3858e-06 keV⁻¹·cm⁻²·s⁻¹
[INFO]:Dataset: MLP@80
[INFO]: Integrated background rate in range: 4.0 .. 8.0: 1.8292e-05 cm⁻²·s⁻¹
[INFO]: Integrated background rate/keV in range: 4.0 .. 8.0: 4.5731e-06 keV⁻¹·cm⁻²·s⁻¹
[INFO]:Dataset: MLP@80+V
[INFO]: Integrated background rate in range: 4.0 .. 8.0: 1.1960e-05 cm⁻²·s⁻¹
[INFO]: Integrated background rate/keV in range: 4.0 .. 8.0: 2.9901e-06 keV⁻¹·cm⁻²·s⁻¹
[INFO]:Dataset: LnL@80
[INFO]: Integrated background rate in range: 0.2 .. 8.0: 1.5848e-04 cm⁻²·s⁻¹
[INFO]: Integrated background rate/keV in range: 0.2 .. 8.0: 2.0317e-05 keV⁻¹·cm⁻²·s⁻¹
[INFO]:Dataset: LnL@80+V
[INFO]: Integrated background rate in range: 0.2 .. 8.0: 6.2264e-05 cm⁻²·s⁻¹
[INFO]: Integrated background rate/keV in range: 0.2 .. 8.0: 7.9826e-06 keV⁻¹·cm⁻²·s⁻¹
[INFO]:Dataset: MLP@80
[INFO]: Integrated background rate in range: 0.2 .. 8.0: 9.0055e-05 cm⁻²·s⁻¹
[INFO]: Integrated background rate/keV in range: 0.2 .. 8.0: 1.1545e-05 keV⁻¹·cm⁻²·s⁻¹
[INFO]:Dataset: MLP@80+V
[INFO]: Integrated background rate in range: 0.2 .. 8.0: 5.5932e-05 cm⁻²·s⁻¹
[INFO]: Integrated background rate/keV in range: 0.2 .. 8.0: 7.1708e-06 keV⁻¹·cm⁻²·s⁻¹
[INFO]:Dataset: LnL@80
[INFO]: Integrated background rate in range: 2.0 .. 8.0: 8.1788e-05 cm⁻²·s⁻¹
[INFO]: Integrated background rate/keV in range: 2.0 .. 8.0: 1.3631e-05 keV⁻¹·cm⁻²·s⁻¹
[INFO]:Dataset: LnL@80+V
[INFO]: Integrated background rate in range: 2.0 .. 8.0: 4.4324e-05 cm⁻²·s⁻¹
[INFO]: Integrated background rate/keV in range: 2.0 .. 8.0: 7.3873e-06 keV⁻¹·cm⁻²·s⁻¹
[INFO]:Dataset: MLP@80
[INFO]: Integrated background rate in range: 2.0 .. 8.0: 6.1913e-05 cm⁻²·s⁻¹
[INFO]: Integrated background rate/keV in range: 2.0 .. 8.0: 1.0319e-05 keV⁻¹·cm⁻²·s⁻¹
[INFO]:Dataset: MLP@80+V
[INFO]: Integrated background rate in range: 2.0 .. 8.0: 4.2037e-05 cm⁻²·s⁻¹
[INFO]: Integrated background rate/keV in range: 2.0 .. 8.0: 7.0062e-06 keV⁻¹·cm⁻²·s⁻¹
so the MLP with vetoes is then actually slightly better than the LnL method.
The plots are also found here: ./Figs/statusAndProgress/backgroundRates/limitMethodTalk/
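For reference, the normalization behind these numbers is just cluster counts over time, area and energy range. A minimal sketch with made-up numbers (the real cluster counts, total duration and region area come from the H5 files handed to plotBackgroundRate):

```nim
# All input numbers here are hypothetical; the real values are read
# from the likelihood output H5 files.
let nClusters = 1000.0               # clusters in the energy range
let totalTime = 3158.0 * 3600.0      # total background time in s (made up)
let area      = 0.25                 # cm², e.g. a 5x5 mm² region
let eLow      = 0.2                  # keV
let eHigh     = 12.0                 # keV

let rate       = nClusters / (totalTime * area)  # cm⁻²·s⁻¹
let ratePerKeV = rate / (eHigh - eLow)           # keV⁻¹·cm⁻²·s⁻¹
echo rate, " ", ratePerKeV
```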
1.59.
Things left TODO for the limit talk:
- [ ] update raytracing image with correct new location based on median of simulation result
- [ ] update systematic uncertainties?
- [ ] rerun new limits with raytracing & systematics updated
- [X] remove "sneak preview"
- [ ] clarify likelihood space & relation to plot of likelihood histogram
1.60.
Left over todos from yesterday:
- [ ] update raytracing image with correct new location based on median of simulation result
- [ ] update systematic uncertainties?
- [ ] rerun new limits with raytracing & systematics updated
- [ ] clarify likelihood space & relation to plot of likelihood histogram
To get started on the systematic uncertainties: The biggest one for sure is the Sun ⇔ Earth distance one at 3.3%.
The idea is to get the distance during each solar tracking, then compute the weighted mean of the distances. From that we can compute a new uncertainty based on the variation visible in the data. That should reduce the uncertainty to ~1% or so.
So first we need to get information about Sun ⇔ Earth distance at different dates. Maybe we can get a CSV file with distances for each date in the past?
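The weighted-mean idea can be sketched as follows (all numbers hypothetical; the real distances come from the Horizons data and the weights, e.g. the tracking durations, from the tracking log):

```nim
import std / [math, sequtils]

# Hypothetical per-tracking mean distances (AU) and weights
# (tracking durations in minutes).
let dists   = @[0.9833, 0.9851, 0.9912, 0.9987, 1.0032]
let weights = @[90.0, 95.0, 90.0, 85.0, 90.0]

let wSum  = weights.sum
let meanD = zip(dists, weights).mapIt(it[0] * it[1]).sum / wSum
# weighted variance around the weighted mean
let varD  = zip(dists, weights).mapIt(it[1] * (it[0] - meanD) ^ 2).sum / wSum
echo "Weighted mean = ", meanD, ", std = ", sqrt(varD)
```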
NASA's Horizon system: https://ssd.jpl.nasa.gov/horizons/
The correct query is the following:
except we want it for 1 minute intervals instead of 1 hour (to have multiple data points per tracking).
Selecting all desired settings in the web interface, the generated request contains the following:

w: 1
format: json
input: !$$SOF
MAKE_EPHEM=YES
COMMAND=10
EPHEM_TYPE=OBSERVER
CENTER='coord@399'
COORD_TYPE=GEODETIC
SITE_COORD='+6.06670,+46.23330,0'
START_TIME='2017-01-01'
STOP_TIME='2019-12-31'
STEP_SIZE='1 HOURS'
QUANTITIES='3,6,10,11,13,16,20,27,30,35'
REF_SYSTEM='ICRF'
CAL_FORMAT='CAL'
CAL_TYPE='M'
TIME_DIGITS='SECONDS'
ANG_FORMAT='HMS'
APPARENT='AIRLESS'
RANGE_UNITS='AU'
SUPPRESS_RANGE_RATE='NO'
SKIP_DAYLT='NO'
SOLAR_ELONG='0,180'
EXTRA_PREC='NO'
R_T_S_ONLY='NO'
CSV_FORMAT='NO'
OBJ_DATA='YES'
The full API documentation can be found at: https://ssd-api.jpl.nasa.gov/doc/horizons.html
The example from the documentation:
https://ssd.jpl.nasa.gov/api/horizons.api?format=text&COMMAND='499'&OBJ_DATA='YES'&MAKE_EPHEM='YES'&EPHEM_TYPE='OBSERVER'&CENTER='500@399'&START_TIME='2006-01-01'&STOP_TIME='2006-01-20'&STEP_SIZE='1%20d'&QUANTITIES='1,9,20,23,24,29'
i.e. we simply make a GET request to https://ssd.jpl.nasa.gov/api/horizons.api with all the parameters added in key-value format.
Let's write a simple API library:
-> Now lives here: ./../CastData/ExternCode/horizonsAPI/horizonsapi.nim
import std / [strutils, strformat, httpclient, asyncdispatch, sequtils, parseutils, os, json, tables, uri]

const basePath = "https://ssd.jpl.nasa.gov/api/horizons.api?"
const outPath = currentSourcePath().parentDir.parentDir / "resources/"

when not defined(ssl):
  {.error: "This module must be compiled with `-d:ssl`.".}

## See the Horizons manual for a deeper understanding of all parameters:
## https://ssd.jpl.nasa.gov/horizons/manual.html
## And the API reference:
## https://ssd-api.jpl.nasa.gov/doc/horizons.html
type
  CommonOptionsKind = enum
    coFormat = "format"         ## 'json', 'text'
    coCommand = "COMMAND"       ## defines the target body! '10' = Sun, 'MB' to get a list of available targets
    coObjData = "OBJ_DATA"      ## 'YES', 'NO'
    coMakeEphem = "MAKE_EPHEM"  ## 'YES', 'NO'
    coEphemType = "EPHEM_TYPE"  ## 'OBSERVER', 'VECTORS', 'ELEMENTS', 'SPK', 'APPROACH'
    coEmailAddr = "EMAIL_ADDR"

  ## Available for 'O' = 'OBSERVER', 'V' = 'VECTOR', 'E' = 'ELEMENTS'
  EphemerisOptionsKind = enum                   ##  O V E
    eoCenter = "CENTER"                         ## x x x 'coord@399' = coordinate from `SiteCoord` on earth (399)
    eoRefPlane = "REF_PLANE"                    ##   x x
    eoCoordType = "COORD_TYPE"                  ## x x x 'GEODETIC', 'CYLINDRICAL'
    eoSiteCoord = "SITE_COORD"                  ## x x x if GEODETIC: 'E-long, lat, h': e.g. Geneva: '+6.06670,+46.23330,0'
    eoStartTime = "START_TIME"                  ## x x x Date as 'YYYY-MM-dd'
    eoStopTime = "STOP_TIME"                    ## x x x
    eoStepSize = "STEP_SIZE"                    ## x x x '60 min', '1 HOURS', ...
    eoTList = "TLIST"                           ## x x x
    eoTListType = "TLIST_TYPE"                  ## x x x
    eoQuantities = "QUANTITIES"                 ## x     !!! These are the data fields you want to get !!!
    eoRefSystem = "REF_SYSTEM"                  ## x x x
    eoOutUnits = "OUT_UNITS"                    ##   x x 'KM-S', 'AU-D', 'KM-D' (length & time, D = days)
    eoVecTable = "VEC_TABLE"                    ##   x
    eoVecCorr = "VEC_CORR"                      ##   x
    eoCalFormat = "CAL_FORMAT"                  ## x
    eoCalType = "CAL_TYPE"                      ## x x x
    eoAngFormat = "ANG_FORMAT"                  ## x
    eoApparent = "APPARENT"                     ## x
    eoTimeDigits = "TIME_DIGITS"                ## x x x
    eoTimeZone = "TIME_ZONE"                    ## x
    eoRangeUnits = "RANGE_UNITS"                ## x 'AU', 'KM'
    eoSuppressRangeRate = "SUPPRESS_RANGE_RATE" ## x
    eoElevCut = "ELEV_CUT"                      ## x
    eoSkipDayLT = "SKIP_DAYLT"                  ## x
    eoSolarELong = "SOLAR_ELONG"                ## x
    eoAirmass = "AIRMASS"                       ## x
    eoLHACutoff = "LHA_CUTOFF"                  ## x
    eoAngRateCutoff = "ANG_RATE_CUTOFF"         ## x
    eoExtraPrec = "EXTRA_PREC"                  ## x
    eoCSVFormat = "CSV_FORMAT"                  ## x x x
    eoVecLabels = "VEC_LABELS"                  ##   x
    eoVecDeltaT = "VEC_DELTA_T"                 ##   x
    eoELMLabels = "ELM_LABELS"                  ##     x
    eoTPType = "TP_TYPE"                        ##     x
    eoRTSOnly = "R_T_S_ONLY"                    ## x

  Quantities = set[1 .. 48]
    ##  1. Astrometric RA & DEC
    ## * 2. Apparent RA & DEC
    ##  3. Rates; RA & DEC
    ## ,* 4. Apparent AZ & EL
    ##  5. Rates; AZ & EL
    ##  6. Satellite X & Y, position angle
    ##  7. Local apparent sidereal time
    ##  8. Airmass and Visual Magnitude Extinction
    ##  9. Visual magnitude & surface Brightness
    ## 10. Illuminated fraction
    ## 11. Defect of illumination
    ## 12. Satellite angle of separation/visibility code
    ## 13. Target angular diameter
    ## 14. Observer sub-longitude & sub-latitude
    ## 15. Sun sub-longitude & sub-latitude
    ## 16. Sub-Sun position angle & distance from disc center
    ## 17. North pole position angle & distance from disc center
    ## 18. Heliocentric ecliptic longitude & latitude
    ## 19. Heliocentric range & range-rate
    ## 20. Observer range & range-rate
    ## 21. One-way down-leg light-time
    ## 22. Speed of target with respect to Sun & observer
    ## 23. Sun-Observer-Targ ELONGATION angle
    ## 24. Sun-Target-Observer ~PHASE angle
    ## 25. Target-Observer-Moon/Illumination%
    ## 26. Observer-Primary-Target angle
    ## 27. Position Angles; radius & -velocity
    ## 28. Orbit plane angle
    ## 29. Constellation Name
    ## 30. Delta-T (TDB - UT)
    ## ,* 31. Observer-centered Earth ecliptic longitude & latitude
    ## 32. North pole RA & DEC
    ## 33. Galactic longitude and latitude
    ## 34. Local apparent SOLAR time
    ## 35. Earth->Site light-time
    ## > 36. RA & DEC uncertainty
    ## > 37. Plane-of-sky (POS) error ellipse
    ## > 38. Plane-of-sky (POS) uncertainty (RSS)
    ## > 39. Range & range-rate sigma
    ## > 40. Doppler/delay sigmas
    ## 41. True anomaly angle
    ## ,* 42. Local apparent hour angle
    ## 43. PHASE angle & bisector
    ## 44. Apparent target-centered longitude of Sun (L_s)
    ## ,* 45. Inertial frame apparent RA & DEC
    ## 46. Rates: Inertial RA & DEC
    ## ,* 47. Sky motion: angular rate & angles
    ## 48. Lunar sky brightness & target visual SNR

  CommonOptions* = Table[CommonOptionsKind, string]
  EphemerisOptions* = Table[EphemerisOptionsKind, string]

## Example URL:
## https://ssd.jpl.nasa.gov/api/horizons.api?format=text&COMMAND='499'&OBJ_DATA='YES'&MAKE_EPHEM='YES'&EPHEM_TYPE='OBSERVER'&CENTER='500@399'&START_TIME='2006-01-01'&STOP_TIME='2006-01-20'&STEP_SIZE='1%20d'&QUANTITIES='1,9,20,23,24,29'

proc serialize*[T: CommonOptions | EphemerisOptions](opts: T): string =
  # turn into seq[(string, string)] and encase values in `'`
  let opts = toSeq(opts.pairs).mapIt(($it[0], &"'{it[1]}'"))
  result = opts.encodeQuery()

proc serialize*(q: Quantities): string =
  result = "QUANTITIES='"
  var i = 0
  for x in q:
    result.add &"{x}"
    if i < q.card - 1:
      result.add ","
    inc i
  result.add "'"

proc request*(cOpt: CommonOptions, eOpt: EphemerisOptions, q: Quantities): Future[string] {.async.} =
  var req = basePath
  req.add serialize(cOpt) & "&"
  req.add serialize(eOpt) & "&"
  req.add serialize(q)
  echo "Performing request to: ", req
  var client = newAsyncHttpClient()
  return await client.getContent(req)

# let's try a simple request
let comOpt = {
  #coFormat : "text",
  coMakeEphem : "YES",
  coCommand : "10",
  coEphemType : "OBSERVER"
}.toTable
let ephOpt = {
  eoCenter : "coord@399",
  eoStartTime : "2017-01-01",
  eoStopTime : "2019-12-31",
  eoStepSize : "1 HOURS",
  eoCoordType : "GEODETIC",
  eoSiteCoord : "+6.06670,+46.23330,0",
  eoCSVFormat : "YES"
}.toTable
var q: Quantities
q.incl 20 ## Observer range!
let fut = request(comOpt, ephOpt, q) ## If multiple we would `poll`!
let res = fut.waitFor()
echo res.parseJson.pretty()

## TODO: construct time ranges such that 1 min yields less than 90k elements,
## then cover whole range
# 1. iterate all elements and download files
when false:
  var futs = newSeq[Future[string]]()
  for element in 1 ..< 92:
    futs.add downloadFile(element)
  echo "INFO: Downloading all files..."
  while futs.anyIt(not it.finished()):
    poll()
  echo "INFO: Downloading done! Writing to ", outPath
  var files = newSeq[string]()
  for fut in futs:
    files.add waitFor(fut)
  for f in files:
    f.extractData.writeData()
The common parameters:
| Parameter | Default | Allowable Values/Format | Description | Manual |
|---|---|---|---|---|
| format | json | json, text | specify output format: json for JSON or text for plain-text | |
| COMMAND | none | see details below | target search, selection, or enter user-input object mode | link |
| OBJ_DATA | YES | NO, YES | toggles return of object summary data | |
| MAKE_EPHEM | YES | NO, YES | toggles generation of ephemeris, if possible | |
| EPHEM_TYPE | OBSERVER | OBSERVER, VECTORS, ELEMENTS, SPK, APPROACH | selects type of ephemeris to generate (see details below) | |
| EMAIL_ADDR | none | any valid email address | optional; used only in the event of highly unlikely problems needing follow-up | |
The ephemeris parameters:
| Parameter | O | V | E | Default | Allowable Values/Format | Description | Manual |
|---|---|---|---|---|---|---|---|
| CENTER | x | x | x | Geocentric | see details below | selects coordinate origin (observing site) | link |
| REF_PLANE | | x | x | ECLIPTIC | ECLIPTIC, FRAME, BODY EQUATOR | ephemeris reference plane (can be abbreviated E, F, B, respectively) | |
| COORD_TYPE | x | x | x | GEODETIC | GEODETIC, CYLINDRICAL | selects type of user coordinates | link |
| SITE_COORD | x | x | x | '0,0,0' | | set coordinate triplets for COORD_TYPE | link |
| START_TIME | x | x | x | none | | specifies ephemeris start time | link |
| STOP_TIME | x | x | x | none | | specifies ephemeris stop time | link |
| STEP_SIZE | x | x | x | '60 min' | see details below | ephemeris output print step. Can be fixed time, uniform interval (unitless), calendar steps, or plane-of-sky angular change steps. See also TLIST alternative. | link |
| TLIST | x | x | x | none | see details below | list of up to 10,000 discrete output times. Either Julian Day numbers (JD), Modified JD (MJD), or calendar dates | |
| TLIST_TYPE | x | x | x | none | JD, MJD, CAL | optional specification of type of time in TLIST | |
| QUANTITIES | x | | | 'A' | | list of desired output quantity option codes | link, link |
| REF_SYSTEM | x | x | x | ICRF | ICRF, B1950 | specifies reference frame for any geometric and astrometric quantities | link |
| OUT_UNITS | | x | x | KM-S | KM-S, AU-D, KM-D | selects output units for distance and time; for example, AU-D selects astronomical units (au) and days (d) | |
| VEC_TABLE | | x | | 3 | see details below | selects vector table format | link |
| VEC_CORR | | x | | NONE | NONE, LT, LT+S | selects level of correction to output vectors; NONE (geometric states), LT (astrometric light-time corrected states) or LT+S (astrometric states corrected for stellar aberration) | |
| CAL_FORMAT | x | | | CAL | CAL, JD, BOTH | selects type of date output; CAL for calendar date/time, JD for Julian Day numbers, or BOTH for both CAL and JD | |
| CAL_TYPE | x | x | x | MIXED | MIXED, GREGORIAN | selects Gregorian-only calendar input/output, or mixed Julian/Gregorian, switching on 1582-Oct-5. Recognized for close-approach tables also. | |
| ANG_FORMAT | x | | | HMS | HMS, DEG | selects RA/DEC output format | |
| APPARENT | x | | | AIRLESS | AIRLESS, REFRACTED | toggles refraction correction of apparent coordinates (Earth topocentric only) | |
| TIME_DIGITS | x | x | x | MINUTES | MINUTES, SECONDS, FRACSEC | controls output time precision | |
| TIME_ZONE | x | | | '+00:00' | | specifies local civil time offset relative to UT | |
| RANGE_UNITS | x | | | AU | AU, KM | sets the units on range quantities output | |
| SUPPRESS_RANGE_RATE | x | | | NO | NO, YES | turns off output of delta-dot and rdot (range-rate) | |
| ELEV_CUT | x | | | '-90' | integer [-90:90] | skip output when object elevation is less than specified | |
| SKIP_DAYLT | x | | | NO | NO, YES | toggles skipping of print-out when daylight at CENTER | |
| SOLAR_ELONG | x | | | '0,180' | | sets bounds on output based on solar elongation angle | |
| AIRMASS | x | | | 38.0 | | select airmass cutoff; output is skipped if relative optical airmass is greater than the single decimal value specified. Note that 1.0=zenith, 38.0 ~= local-horizon. If value is set >= 38.0, this turns OFF the filtering effect. | |
| LHA_CUTOFF | x | | | 0.0 | | skip output when local hour angle exceeds a specified value in the domain 0.0 < X < 12.0. To restore output (turn OFF the cut-off behavior), set X to 0.0 or 12.0. For example, a cut-off value of 1.5 will output table data only when the LHA is within +/- 1.5 angular hours of zenith meridian. | |
| ANG_RATE_CUTOFF | x | | | 0.0 | | skip output when the total plane-of-sky angular rate exceeds a specified value | |
| EXTRA_PREC | x | | | NO | NO, YES | toggles additional output digits on some angles such as RA/DEC | |
| CSV_FORMAT | x | x | x | NO | NO, YES | toggles output of table in comma-separated value format | |
| VEC_LABELS | | x | | YES | NO, YES | toggles labeling of each vector component | |
| VEC_DELTA_T | | x | | NO | NO, YES | toggles output of the time-varying delta-T difference TDB-UT | |
| ELM_LABELS | | | x | YES | NO, YES | toggles labeling of each osculating element | |
| TP_TYPE | | | x | ABSOLUTE | ABSOLUTE, RELATIVE | determines what type of periapsis time (Tp) is returned | |
| R_T_S_ONLY | x | | | NO | NO, YES | toggles output only at target rise/transit/set | |
The quantities documentation: https://ssd.jpl.nasa.gov/horizons/manual.html#output
UPDATE: https://github.com/SciNim/horizonsAPI
We've now turned this into a small nimble package. We can now use it easily to construct the requests we need for the CAST trackings:
import horizonsapi, datamancer, times

let startDate = initDateTime(01, mJan, 2017, 00, 00, 00, 00, local())
let stopDate = initDateTime(31, mDec, 2019, 23, 59, 59, 00, local())
let nMins = (stopDate - startDate).inMinutes()
const blockSize = 85_000 # max line number somewhere above 90k. Do less to have some buffer
let numBlocks = ceil(nMins.float / blockSize.float).int # we end up at a later date than `stopDate`, but that's fine
echo numBlocks
let blockDur = initDuration(minutes = blockSize)

let comOpt = {
  #coFormat : "json",       # data returned as "fake" JSON
  coMakeEphem : "YES",
  coCommand : "10",         # our target is the Sun, index 10
  coEphemType : "OBSERVER"  # observational parameters
}.toTable
var ephOpt = {
  eoCenter : "coord@399",   # observational point is a coordinate on Earth (Earth idx 399)
  eoStartTime : startDate.format("yyyy-MM-dd"),
  eoStopTime : (startDate + blockDur).format("yyyy-MM-dd"),
  eoStepSize : "1 MIN",     # in 1 minute steps
  eoCoordType : "GEODETIC",
  eoSiteCoord : "+6.06670,+46.23330,0", # Geneva
  eoCSVFormat : "YES"       # data as CSV within the JSON (yes, really)
}.toTable
var q: Quantities
q.incl 20 ## Observer range! In this case range between our coordinates on Earth and target

var reqs = newSeq[HorizonsRequest]()
for i in 0 ..< numBlocks:
  # modify the start and end dates
  ephOpt[eoStartTime] = (startDate + i * blockDur).format("yyyy-MM-dd")
  ephOpt[eoStopTime] = (startDate + (i+1) * blockDur).format("yyyy-MM-dd")
  echo "From : ", ephOpt[eoStartTime], " to ", ephOpt[eoStopTime]
  reqs.add initHorizonsRequest(comOpt, ephOpt, q)
let res = getResponsesSync(reqs)

proc convertToDf(res: seq[HorizonsResponse]): DataFrame =
  result = newDataFrame()
  for r in res:
    result.add parseCsvString(r.csvData)

let df = res.convertToDf().unique("Date__(UT)__HR:MN")
  .select(["Date__(UT)__HR:MN", "delta", "deldot"])
echo df
df.writeCsv("/home/basti/org/resources/sun_earth_distance_cast_datataking.csv", precision = 16)
import ggplotnim, sequtils
# 2017-Jan-01 00:00
const Format = "yyyy-MMM-dd HH:mm"
var df = readCsv("/home/basti/org/resources/sun_earth_distance_cast_datataking.csv")
df["min since 2017"] = toSeq(0 ..< df.len)
ggplot(df, aes("min since 2017", "delta")) +
  geom_line() +
  ggtitle("Distance in AU Sun ⇔ Earth") +
  ggsave("/tmp/distance_sun_earth_cast_datataking.pdf")
1.61.
Continuing from yesterday (Horizons API).
With the API constructed and data available for the Sun ⇔ Earth distance for each minute during the CAST data taking period, we now need the actual start and end times of the CAST data taking campaign.
- [X] modify cast_log_reader to output CSV / Org file of table with start / stop times & their runs
Running
./cast_log_reader \
    tracking \
    -p ../resources/LogFiles/tracking-logs \
    --startTime 2017/01/01 \
    --endTime 2018/05/01 \
    --h5out ~/CastData/data/DataRuns2017_Reco.h5
and
./cast_log_reader \
    tracking \
    -p ../resources/LogFiles/tracking-logs \
    --startTime 2018/05/01 \
    --endTime 2018/12/31 \
    --h5out ~/CastData/data/DataRuns2018_Reco.h5
(on voidRipper) now produces the following two files for each H5 file:
- ./../CastData/ExternCode/TimepixAnalysis/resources/DataRuns2017_Reco_tracking_times.csv
- ./../CastData/ExternCode/TimepixAnalysis/resources/DataRuns2017_Reco_tracking_times.html
and
- ./../CastData/ExternCode/TimepixAnalysis/resources/DataRuns2018_Reco_tracking_times.csv
- ./../CastData/ExternCode/TimepixAnalysis/resources/DataRuns2018_Reco_tracking_times.html
which, combined, correspond to the following Org table:
| Tracking start | Tracking stop | Run |
|---|---|---|
| | | 76 |
| | | 77 |
| | | 78 |
| | | 79 |
| | | 80 |
| | | 81 |
| | | 82 |
| | | 82 |
| | | 84 |
| | | 86 |
| | | 87 |
| | | 87 |
| | | 89 |
| | | 90 |
| | | 91 |
| | | 92 |
| | | 94 |
| | | 95 |
| | | 97 |
| | | 98 |
| | | 99 |
| | | 100 |
| | | 101 |
| | | 103 |
| | | 104 |
| | | 106 |
| | | 105 |
| | | 107 |
| | | 109 |
| | | 112 |
| | | 112 |
| | | 114 |
| | | 113 |
| | | 115 |
| | | 117 |
| | | 119 |
| | | 121 |
| | | 123 |
| | | 124 |
| | | 124 |
| | | 125 |
| | | 127 |
| | | 146 |
| | | 150 |
| | | 148 |
| | | 152 |
| | | 154 |
| | | 156 |
| | | 158 |
| | | 160 |
| | | 162 |
| | | 162 |
| | | 162 |
| | | 164 |
| | | 164 |
| | | 166 |
| | | 170 |
| | | 172 |
| | | 174 |
| | | 176 |
| | | 178 |
| | | 178 |
| | | 178 |
| | | 178 |
| | | 178 |
| | | 180 |
| | | 182 |
| | | 182 |
| | | -1 |
| | | -1 |
| | | 240 |
| | | 242 |
| | | 244 |
| | | 246 |
| | | 248 |
| | | 250 |
| | | 254 |
| | | 256 |
| | | 258 |
| | | 261 |
| | | 261 |
| | | 261 |
| | | 263 |
| | | 265 |
| | | 268 |
| | | 270 |
| | | 270 |
| | | 272 |
| | | 272 |
| | | 272 |
| | | 274 |
| | | 274 |
| | | 274 |
| | | 276 |
| | | 276 |
| | | 279 |
| | | 279 |
| | | 281 |
| | | 283 |
| | | 283 |
| | | 283 |
| | | 285 |
| | | 285 |
| | | 287 |
| | | 289 |
| | | 291 |
| | | 291 |
| | | 293 |
| | | 295 |
| | | 297 |
| | | 297 |
| | | 298 |
| | | 299 |
| | | 301 |
| | | 301 |
| | | 303 |
| | | 306 |
| | | -1 |
| | | -1 |
| | | -1 |
1.61.1. TODO
- [ ] Update the systematics code in the limit calculation!
1.62.
Let's combine the tracking start/stop information with the Horizons API data about the Sun's location to compute:
- [ ] a plot showing the trackings within the distance plot
- [ ] the mean value of the positions during trackings and their variance / std
import ggplotnim, sequtils, times, strutils
# 2017-Jan-01 00:00
const Format = "yyyy-MMM-dd HH:mm"
const OrgFormat = "'<'yyyy-MM-dd ddd H:mm'>'"
const p2017 = "~/CastData/ExternCode/TimepixAnalysis/resources/DataRuns2017_Reco_tracking_times.csv"
const p2018 = "~/CastData/ExternCode/TimepixAnalysis/resources/DataRuns2018_Reco_tracking_times.csv"
var df = readCsv("/home/basti/org/resources/sun_earth_distance_cast_datataking.csv")
  .mutate(f{string -> int: "Timestamp" ~ parseTime(idx("Date__(UT)__HR:MN").strip, Format, local()).toUnix.int})

proc readRuns(f: string): DataFrame =
  result = readCsv(f)
  echo result.pretty(-1)
  result = result
    .gather(["Tracking start", "Tracking stop"], "Type", "Time")
  echo result.pretty(-1)
  result = result
    .mutate(f{Value -> int: "Timestamp" ~ parseTime(idx("Time").toStr, OrgFormat, local()).toUnix.int})
  result["delta"] = 0.0

var dfR = readRuns(p2017)
dfR.add readRuns(p2018)
echo dfR
ggplot(df, aes("Timestamp", "delta")) +
  geom_line() +
  geom_linerange(data = dfR, aes = aes("Timestamp", y = "", yMin = 0.98, yMax = 1.02)) +
  ggtitle("Distance in AU Sun ⇔ Earth") +
  ggsave("/tmp/distance_sun_earth_with_cast_datataking.pdf")
import ggplotnim, sequtils, times, strutils, strformat
# 2017-Jan-01 00:00
const Format = "yyyy-MMM-dd HH:mm"
const OrgFormat = "'<'yyyy-MM-dd ddd H:mm'>'"
const p2017 = "~/CastData/ExternCode/TimepixAnalysis/resources/DataRuns2017_Reco_tracking_times.csv"
const p2018 = "~/CastData/ExternCode/TimepixAnalysis/resources/DataRuns2018_Reco_tracking_times.csv"
var df = readCsv("/home/basti/org/resources/sun_earth_distance_cast_datataking.csv")
  .mutate(f{string -> int: "Timestamp" ~ parseTime(idx("Date__(UT)__HR:MN").strip, Format, local()).toUnix.int})

proc readRuns(f: string): DataFrame =
  result = readCsv(f)
    .mutate(f{string -> int: "TimestampStart" ~ parseTime(idx("Tracking start"), OrgFormat, local()).toUnix.int})
    .mutate(f{string -> int: "TimestampStop" ~ parseTime(idx("Tracking stop"), OrgFormat, local()).toUnix.int})

var dfR = readRuns(p2017)
dfR.add readRuns(p2018)

var dfHT = newDataFrame()
for tracking in dfR:
  let start = tracking["TimestampStart"].toInt
  let stop = tracking["TimestampStop"].toInt
  dfHT.add df.filter(f{int: `Timestamp` >= start and `Timestamp` <= stop})
dfHT["Type"] = "Trackings"
df["Type"] = "HorizonsAPI"
df.add dfHT

let deltas = dfHT["delta", float]
let meanD = deltas.mean
let varD = deltas.variance
let stdD = deltas.std
echo "Mean distance during trackings = ", meanD
echo "Variance of distance during trackings = ", varD
echo "Std of distance during trackings = ", stdD
# and write back the DF of the tracking positions
dfHT.writeCsv("/home/basti/org/resources/sun_earth_distance_cast_solar_trackings.csv")
ggplot(df, aes("Timestamp", "delta", color = "Type")) +
  geom_line(data = df.filter(f{`Type` == "HorizonsAPI"})) +
  geom_point(data = df.filter(f{`Type` == "Trackings"}), size = 1.0) +
  scale_x_date(isTimestamp = true, formatString = "yyyy-MM", dateSpacing = initDuration(days = 60)) +
  xlab("Date", rotate = -45.0, alignTo = "right", margin = 1.5) +
  annotate(text = &"Mean distance during trackings = {meanD:.4f}", x = 1.52e9, y = 1.0175) +
  annotate(text = &"Variance distance during trackings = {varD:.4g}", x = 1.52e9, y = 1.015) +
  annotate(text = &"Std distance during trackings = {stdD:.4f}", x = 1.52e9, y = 1.0125) +
  margin(bottom = 2.0) +
  ggtitle("Distance in AU Sun ⇔ Earth") +
  ggsave("/home/basti/org/Figs/statusAndProgress/systematics/sun_earth_distance_cast_solar_tracking.pdf")
Which produces the plot and yields the output:
Mean distance during trackings = 0.9891144450781392
Variance of distance during trackings = 1.399449924353128e-05
Std of distance during trackings = 0.003740922245052853
so the real mean distance is about 1.1% smaller than 1 AU! The standard deviation in particular is much smaller, at 0.37%.
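A quick cross check of these percentages from the printed numbers:

```nim
let meanD = 0.9891144450781392   # AU, mean distance during trackings
let stdD  = 0.003740922245052853 # AU, std of the distance during trackings
echo (1.0 - meanD) * 100.0       # ~1.09 %, i.e. about 1.1 % closer than 1 AU
echo stdD / meanD * 100.0        # ~0.378 % relative standard deviation
```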
The relevant section about the systematic calculation of the distance is in sec. [BROKEN LINK: statusAndProgress.org#sec:uncertain:distance_earth_sun] in statusAndProgress.org.
The file ./resources/sun_earth_distance_cast_solar_trackings.csv contains the subset of the input CSV file for the actual solar trackings.
See the new subsection of the linked section in statusAndProgress
for the final numbers we now need to use for our systematics.
1.63.
- [X] push new unchained units & tag new version
- [X] merge nimhdf5 PR & tag new version
- [X] Need to incorporate the new systematics
- [X] update table of systematics in statusAndProgress
- [X] change default systematic σ_s value in mcmc_limit
  Old line:
  σ_sig = 0.04244936953654317, ## <- is the value *without* uncertainty on signal efficiency! # 0.04692492913207222 <- incl 2%
  New line:
  σ_sig = 0.02724743263827172, ## <- is the value *without* uncertainty on signal efficiency!
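The relation between the two values quoted on the old line is a quadrature sum: adding the 2% signal efficiency uncertainty on top of σ_sig reproduces the 0.04692 from the comment:

```nim
import std / math

# Quadrature combination of systematic uncertainties, consistent with
# the numbers quoted in the old σ_sig line.
let σ_sig = 0.04244936953654317  # without signal efficiency uncertainty
let σ_eff = 0.02                 # 2 % signal efficiency uncertainty
let σ_tot = sqrt(σ_sig^2 + σ_eff^2)
echo σ_tot                       # matches the quoted 0.04692492913207222
```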
- [X] change usage of 2% for LnL software efficiency to 1.71% in mcmc_limit
- [-] Need to adjust the flux according to the new absolute distance! -> Should we add a command line argument to mcmc_limit that gives the distance to use in AU? NO. Done using CSV files, as there would be changes to the axion image too, which flux scaling does not reproduce! -> use the 0.989 AU differential solar flux CSV file!
- [X] Make the differential axion flux input CSV file a command line option
- [X] calculate a new differential flux with readOpacityFile and compare it to our direct approach of scaling by 1/r²
- [X] Implement AU as CL argument in readOpacityFile
- [X] run with 1 AU as reference (storing files in /org/resources/differential_flux_sun_earth_distance and /org/Figs/statusAndProgress/differential_flux_sun_earth_distance):

  ./readOpacityFile --suffix "_1AU" --distanceSunEarth 1.0.AU

- [X] run with the correct mean distance ~0.989 AU:

  ./readOpacityFile --suffix "_0.989AU" --distanceSunEarth 0.9891144450781392.AU

- [X] update solar radius to correct value in readOpacityFile
- [X] Compare the result to the 1/r² expectation. We'll read the CSV files generated by readOpacityFile and compare the maxima:

  import ggplotnim
  let df1 = readCsv("~/org/resources/differential_flux_sun_earth_distance/solar_axion_flux_differential_g_ae_1e-13_g_ag_1e-12_g_aN_1e-15_1AU.csv")
    .filter(f{`type` == "Total flux"})
  let df2 = readCsv("~/org/resources/differential_flux_sun_earth_distance/solar_axion_flux_differential_g_ae_1e-13_g_ag_1e-12_g_aN_1e-15_0.989AU.csv")
    .filter(f{`type` == "Total flux"})
  let max1AU = df1["diffFlux", float].max
  let max0989AU = df2["diffFlux", float].max
  echo "Ratio of 1 AU to 0.989 AU = ", max0989AU / max1AU
Bang on reproduction of our 2.2% increase!
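That number is exactly what inverse-square scaling predicts:

```nim
import std / math

# Inverse-square flux scaling: at the mean tracking distance the flux
# increases by 1/d² relative to 1 AU.
let d = 0.9891144450781392       # AU
let ratio = 1.0 / d^2
echo ratio                       # ~1.022, i.e. the ~2.2 % increase
```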
- [-] Implement X AU as command line argument into mcmc_limit.
  For the limit calculation we will use the 1 AU "reference flux". From there a CL argument can be used to adjust the distance. The raytraced image should not change, as only the amount of flux changes. -> Well, we do expect a small change. Because if the Sun is closer, its angular size is larger too! In that sense maybe it is better after all to just handle this by the axion image + differential flux file? -> YES, we won't implement AU scaling into mcmc_limit.
- [X] add differential solar fluxes to the TimepixAnalysis/resources directory
- update axion image using
  - [X] correct solar radius
  - [X] correct Sun ⇔ Earth distance at 0.989 AU
  - [X] correct median conversion point as computed numerically, namely

    Mean conversion position = 0.556813 cm
    Median conversion position = 0.292802 cm
    Variance of conversion position = 0.424726 cm

    from ./Doc/SolarAxionConversionPoint/axion_conversion_point.html. This corresponds to a position of:
import unchained
let f = 1500.mm
let xp = 0.292802.cm
let d = 3.cm # detector volume height
let fromFocal = (d / 2.0) - xp
let imageAt = f - fromFocal
echo "Image at = ", imageAt.to(mm)
echo "From focal = ", fromFocal.to(mm)
which is ~1.23 cm in front of the actual focal spot.
We could use the mean, but that would be disingenuous.
Note though: compared to our original calculation of being 1.22 cm behind the window, but believing the focal spot to be in the readout plane, we now still gain about 5 mm towards the focal spot! That old number was 1482.2 mm.

Before running raytracer, make sure the config file contains:

distanceDetectorXRT = 1487.93 # mm

Then run:

./raytracer \
    --ignoreDetWindow \
    --ignoreGasAbs \
    --suffix "_1487_93_0.989AU" \
    --distanceSunEarth 0.9891144450781392.AU
The produced file is found in ./resources/axion_image_2018_1487_93_0.989AU.csv; comparing the corresponding plot with our "old" input shows that it is indeed quite a bit smaller!
With all of the above done, we can finally compute some expected limits for the new input files, i.e. axion image and solar flux corresponding to:
- correct conversion point based on numerical median
- correct solar radius based on SOHO
- correct mean distance to Sun
So let's run the limit calculation for the best case scenario:
mcmc_limit_calculation \
    limit \
    -f /home/basti/org/resources/lhood_limits_10_05_23_mlp_sEff_0.99/lhood_c18_R2_crAll_sEff_0.95_scinti_fadc_line_mlp_mlp_tanh300_msecheckpoint_epoch_485000_loss_0.0055_acc_0.9933_vQ_0.99.h5 \
    -f /home/basti/org/resources/lhood_limits_10_05_23_mlp_sEff_0.99/lhood_c18_R3_crAll_sEff_0.95_scinti_fadc_line_mlp_mlp_tanh300_msecheckpoint_epoch_485000_loss_0.0055_acc_0.9933_vQ_0.99.h5 \
    --years 2017 --years 2018 \
    --σ_p 0.05 \
    --limitKind lkMCMC \
    --nmc 1000 \
    --suffix=_sEff_0.95_scinti_fadc_line_mlp_tanh300_mse_epoch_485000_loss_0.0055_acc_0.9933_axion_image_1487.9_0.989AU \
    --path "" \
    --outpath /home/basti/org/resources/lhood_limits_03_07_23/ \
    --energyMin 0.2 --energyMax 12.0 \
    --axionModel /home/basti/org/resources/differential_flux_sun_earth_distance/solar_axion_flux_differential_g_ae_1e-13_g_ag_1e-12_g_aN_1e-15_0.989AU.csv
For notes on the meeting with Klaus, see the next point.
1.63.1. Understanding slowness of mcmc_limit
For some reason the mcmc_limit code is much slower now than it was in the past. I don't understand why. In my notes further up I mention that a command using the same files as in the above snippet only takes about 20 s to build the chains. Now it takes 200-260 s.
- [X] check if sorted_seq difference -> no
- [X] check if some arraymancer difference -> no
- [X] run with septem veto in addition -> also as slow
- [X] is it the noisy pixels? -> it seems to be using the up to date list, incl. the "Deich"
- [X] Numbers of noisy pixel removal logic:
Number of elements before noise filter: 25731
Number of elements after noise filter: 24418
Number of elements before noise filter: 10549
Number of elements after noise filter: 10305
[INFO]: Read a total of 34723 input clusters.
And after energy filter: 20231
-> From running with septem+line veto MLP case.
Further: the HDF5 files for the Septem + Line veto case are essentially the same size as the pure Line veto case. How does that make any sense?
Let's look at the background cluster plot of the septem & line and only line veto case for MLP@95%.
plotBackgroundClusters \
    ~/org/resources/lhood_limits_10_05_23_mlp_sEff_0.99/lhood_c18_R3_crAll_sEff_0.95_scinti_fadc_line_mlp_mlp_tanh300_msecheckpoint_epoch_485000_loss_0.0055_acc_0.9933_vQ_0.99.h5 \
    --zMax 30 --title "X-ray like clusters CAST MLP@95+scinti+fadc+line" \
    --outpath ~/Sync/mcmc_limit_very_slow/ \
    --suffix "03_07_23_mlp_0.95_scinti_fadc_line_mlp" \
    --energyMax 12.0 --energyMin 0.2 \
    --filterNoisyPixels
UPDATE: Could it be related to our new axion model? Does it have more energies or something, which slows down the interpolation? Seems stupid, but who knows? At least try with old axion model and see. -> No, they seem to have the same binning. Just our new one goes to 15 keV.
I let the code run over night and it yielded:
Acceptance rate: 0.2324666666666667 with last two states of chain: @[@[6.492996449371051e-22, -0.01196348404464549, -0.002164366481936349, -0.02809605322316696, -0.007979752246365442], @[1.046326715495785e-21, -0.02434126786591704, 0.0008550422211706134, -0.04539491720412565, -0.003574795727520216]]
Limit at 3.453757576271354e-21
Number of candidates: 0
INFO: The integer column `Hist` has been automatically determined to be continuous. To overwrite this behavior add a `+ scale_x/y_discrete()` call to the plotting chain. Choose `x` or `y` depending on which axis this column refers to. Or apply a `factor` to the column name in the `aes` call, i.e. `aes(..., factor("Hist"), ...)`.
Expected limit: 1.378909932139855e-20
85728
Generating group /ctx/axionModel
datasets.nim(849) write
Error: unhandled exception: Wrong input shape of data to write in `[]=` while accessing `/ctx/axionModel/type`. Given shape `@[1500]`, dataset has shape `@[1000]` [ValueError]
So outside of the fact that the code didn't even manage to save the freaking file, the limit is also completely bonkers. 1.37e-20 corresponds to something like 1e-10·1e-12, so absolutely horrible.
Something is fucked, which also will explain the slowness.
1.64.
Main priority today: understand and fix the slowness of the limit calculation.
- [X] Check how fast it is if we use the old differential solar flux:
mcmc_limit_calculation \
    limit \
    -f /home/basti/org/resources/lhood_limits_10_05_23_mlp_sEff_0.99/lhood_c18_R2_crAll_sEff_0.95_scinti_fadc_line_mlp_mlp_tanh300_msecheckpoint_epoch_485000_loss_0.0055_acc_0.9933_vQ_0.99.h5 \
    -f /home/basti/org/resources/lhood_limits_10_05_23_mlp_sEff_0.99/lhood_c18_R3_crAll_sEff_0.95_scinti_fadc_line_mlp_mlp_tanh300_msecheckpoint_epoch_485000_loss_0.0055_acc_0.9933_vQ_0.99.h5 \
    --years 2017 --years 2018 \
    --σ_p 0.05 \
    --limitKind lkMCMC \
    --nmc 1000 \
    --suffix=_sEff_0.95_scinti_fadc_line_mlp_tanh300_mse_epoch_485000_loss_0.0055_acc_0.9933_axion_image_1487.9_0.989AU \
    --path "" \
    --outpath /home/basti/org/resources/lhood_limits_03_07_23/ \
    --energyMin 0.2 --energyMax 12.0
i.e. the same command as yesterday without the axionModel argument (i.e. using the default, which is the old file). -> It was so ridiculously slow that I stopped after 10 minutes. What the fuck is going on.
NOTE: Just discovered something. I wanted to reboot the computer, because maybe something is messed up. I found a candidates.pdf in /tmp/ that I smartly produce before starting the limit calculation. The plot is:
As we can see the number of candidates is HUMONGOUS!!!
Is something broken with the tracking / background times?
First reboot though.
- [X] Do another start after reboot of the same command as yesterday. -> I expect this to be the same slowness. Could the issue be a regression introduced in the background / tracking time logic we refactored? -> Still seems to be as slow.
- [X] Checking the candidates.pdf for the new data run: -> Looks the same. So something is broken in that logic.
- [X] Checking the background and tracking time that is assigned to Context:
  Background time = 3158.57 h
  Tracking time = 161.111 h
  -> That also looks reasonable.
- [X] Investigate candidate drawing -> the drawing looks fine
- [X] Background interpolation -> Found the culprit! We handed the input backgroundTime and trackingTime parameters to the setupBackgroundInterpolation function instead of the locally modified parameters backTime and trackTime! That led these values to be -1 Hour inside of that function, causing fun side effects for the expected number of counts. That in turn led to bad candidate sampling in the candidate drawing procedure (which itself looked fine).
The candidates after the fix:
Freaking hell.
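The bug pattern is generic enough to sketch. A minimal Python stand-in (hypothetical names, not the actual mcmc_limit code): a -1 sentinel means "compute from the data", and the bug was passing the raw sentinel onward instead of the resolved local value:

```python
# Sketch of the shadowed-parameter bug described above (hypothetical
# function names, not the real Nim procedures).
def resolve_times(background_time: float, tracking_time: float):
    # Resolve -1 sentinels into the values derived from the data files
    # (using the actual hours from the journal entry above).
    back_time = 3158.57 if background_time < 0 else background_time
    track_time = 161.111 if tracking_time < 0 else tracking_time
    return back_time, track_time

def setup_interp(back_time: float, track_time: float) -> float:
    # Expected candidate counts scale with the tracking/background ratio;
    # feeding the -1 sentinels gives ratio (-1)/(-1) = 1, i.e. the full
    # background scale instead of ~5% of it -> a HUMONGOUS candidate count.
    return 1000.0 * track_time / back_time

def expected_counts(background_time=-1.0, tracking_time=-1.0, buggy=False):
    back_time, track_time = resolve_times(background_time, tracking_time)
    if buggy:
        # BUG: hands the *input* parameters onward, still -1 Hour each.
        return setup_interp(background_time, tracking_time)
    return setup_interp(back_time, track_time)

print(expected_counts(buggy=True))   # 1000.0, wildly too many candidates
print(expected_counts(buggy=False))  # ~51 counts, as intended
```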
Still getting things like:
Building chain of 150000 elements took 127.3677394390106 s
Acceptance rate: 0.30088 with last two states of chain: @[@[9.311633190740021e-22, 0.01744901674235642, 0.00349084434202456, -0.06634240340482739, 0.03999664726123401], @[9.311633190740021e-22, 0.01744901674235642, 0.00349084434202456, -0.06634240340482739, 0.03999664726123401]]
Initial chain state: @[4.668196570108809e-21, -0.3120389306029943, 0.3543889354717579, 0.286701390433319, 0.1226804125360241]
Building chain of 150000 elements took 128.6130454540253 s
Acceptance rate: 0.3034866666666667 with last two states of chain: @[@[3.731887947371716e-21, 0.02452035569228822, 0.000773644639561432, -0.08992991789316797, -0.0382258117838525], @[3.731887947371716e-21, 0.02452035569228822, 0.000773644639561432, -0.08992991789316797, -0.0382258117838525]]
Initial chain state: @[2.660442796473178e-22, -0.2011569539539821, -0.2836544777277811, 0.02919490998034624, 0.4127775646701672]
Building chain of 150000 elements took 128.8146977424622 s
Acceptance rate: 0.2591533333333333 with last two states of chain: @[@[3.636435825606668e-22, -0.009764842941003157, -0.0007353516663395031, 0.03297060483409234, -0.04076920903469726], @[6.506720970027227e-22, -0.0107001279962231, -6.017950416918778e-05, 0.04780628462897407, -0.04483761760499658]]
Initial chain state: @[9.722845479265146e-22, 0.3584189020390509, -0.1514954111305945, -0.03343978579815121, -0.2637922163333362]
Building chain of 150000 elements took 138.6971650123596 s
Acceptance rate: 0.2639666666666667 with last two states of chain: @[@[3.438289368883349e-22, -0.01304748715057187, 0.004184991829399071, -0.04636487615831818, 0.0302346566894824], @[1.541274009225669e-21, -0.02093515375501852, 0.003417056328213522, -0.04313773041382048, 0.02677047733100371]]
Initial chain state: @[2.436881668011995e-21, 0.3695082702072843, 0.04051624101632562, -0.458195482427621, -0.07043128904485663]
Building chain of 150000 elements took 144.5500540733337 s
Acceptance rate: 0.2609733333333333 with last two states of chain: @[@[4.194422598229001e-22, -0.01096894725308242, -0.001059399554620779, -0.04838608283669801, 0.005199899235731185], @[4.194422598229001e-22, -0.01096894725308242, -0.001059399554620779, -0.04838608283669801, 0.005199899235731185]]
Initial chain state: @[1.025364583527896e-21, 0.3425107036102778, 0.26050622555894, -0.1392108662060235, -0.3609805077820832]
Building chain of 150000 elements took 145.2971315383911 s
Acceptance rate: 0.28042 with last two states of chain: @[@[6.825664933417517
running on

mcmc_limit_calculation limit \
    -f /home/basti/org/resources/lhood_limits_10_05_23_mlp_sEff_0.99/lhood_c18_R2_crAll_sEff_0.95_scinti_fadc_septem_line_mlp_mlp_tanh300_msecheckpoint_epoch_485000_loss_0.0055_acc_0.9933_vQ_0.99.h5 \
    -f /home/basti/org/resources/lhood_limits_10_05_23_mlp_sEff_0.99/lhood_c18_R3_crAll_sEff_0.95_scinti_fadc_septem_line_mlp_mlp_tanh300_msecheckpoint_epoch_485000_loss_0.0055_acc_0.9933_vQ_0.99.h5 \
    --years 2017 --years 2018 --σ_p 0.05 --limitKind lkMCMC --nmc 1000 \
    --suffix=_sEff_0.95_scinti_fadc_septem_line_mlp_tanh300_mse_epoch_485000_loss_0.0055_acc_0.9933_axion_image_1487.9_0.989AU \
    --path "" --outpath /home/basti/org/resources/lhood_limits_03_07_23/ \
    --energyMin 0.2 --energyMax 12.0 \
    --axionModel /home/basti/org/resources/differential_flux_sun_earth_distance/solar_axion_flux_differential_g_ae_1e-13_g_ag_1e-12_g_aN_1e-15_0.989AU.csv
(i.e. mlp+scinti+fadc+septem+line @ 95%!)
Still way too slow!!!
- [X] Checking again with the old differential axion flux (without --axionModel): -> Same

Building chain of 150000 elements took 126.3729739189148 s
Acceptance rate: 0.2716533333333334 with last two states of chain: @[@[1.270974433734064e-21, -0.008762536914031409, -0.0009393362144718906, 0.08807054442679391, 0.06807056108511295], @[1.270974433734064e-21, -0.008762536914031409, -0.0009393362144718906, 0.08807054442679391, 0.06807056108511295]]
Initial chain state: @[4.161681061397676e-21, 0.03937891262715859, -0.2687772585382085, 0.4510828436114304, 0.4645657545530211]
Building chain of 150000 elements took 125.4626288414001 s
Acceptance rate: 0.2654533333333333 with last two states of chain: @[@[6.984109549639046e-23, 0.02177393681219079, -0.0009694252520926414, -0.01536573917383219, 0.06357336308703909], @[6.984109549639046e-23, 0.02177393681219079, -0.0009694252520926414, -0.01536573917383219, 0.06357336308703909]]
Initial chain state: @[2.436881668011995e-21, 0.3695082702072843, 0.04051624101632562, -0.458195482427621, -0.07043128904485663]
Building chain of 150000 elements took 145.4075906276703 s
Acceptance rate: 0.2648733333333333 with last two states of chain: @[@[1.854805063479706e-21, 0.02828329851122759, -0.001409250857040086, -0.07848092399906945, -0.01008439148632219], @[4.68305467951519e-22, 0.03745033276146833, 0.002680253359587353, -0.09263439093421814, -0.02574252887010509]]
Initial chain state: @[1.025364583527896e-21, 0.3425107036102778, 0.26050622555894, -0.1392108662060235, -0.3609805077820832]
Building chain of 150000 elements took 146.0146522521973 s
Acceptance rate: 0.28422 with last two states of chain: @[@[3.023581967426795e-21, -0.09020389993493418, -0.005375722700108269, -0.009890672103045093, 0.03342292616291231], @[2.466627578743573e-21, -0.0871729066832931, 0.005329262454946779, -0.0002405123197451453, 0.03887706119504662]]
Initial chain state: @[2.98188342976756e-21, 0.0
- [X] Check with old systematic value (not that I expect this to change anything):

mcmc_limit_calculation limit \
    -f /home/basti/org/resources/lhood_limits_10_05_23_mlp_sEff_0.99/lhood_c18_R2_crAll_sEff_0.95_scinti_fadc_septem_line_mlp_mlp_tanh300_msecheckpoint_epoch_485000_loss_0.0055_acc_0.9933_vQ_0.99.h5 \
    -f /home/basti/org/resources/lhood_limits_10_05_23_mlp_sEff_0.99/lhood_c18_R3_crAll_sEff_0.95_scinti_fadc_septem_line_mlp_mlp_tanh300_msecheckpoint_epoch_485000_loss_0.0055_acc_0.9933_vQ_0.99.h5 \
    --years 2017 --years 2018 --σ_p 0.05 --limitKind lkMCMC --nmc 1000 \
    --suffix=_sEff_0.95_scinti_fadc_septem_line_mlp_tanh300_mse_epoch_485000_loss_0.0055_acc_0.9933_axion_image_1487.9_0.989AU \
    --path "" --outpath /home/basti/org/resources/lhood_limits_03_07_23/ \
    --energyMin 0.2 --energyMax 12.0 \
    --σ_sig 0.04244936953654317
-> Same
Building chain of 150000 elements took 109.3231236934662 s
Acceptance rate: 0.2735866666666666 with last two states of chain: @[@[5.061099862447965e-22, -0.02857602726297297, -0.00130806717688539, 0.03063698419159643, 0.08021558103217649], @[5.061099862447965e-22, -0.02857602726297297, -0.00130806717688539, 0.03063698419159643, 0.08021558103217649]]
Initial chain state: @[4.161681061397676e-21, 0.03937891262715859, -0.2687772585382085, 0.4510828436114304, 0.4645657545530211]
Building chain of 150000 elements took 141.4028820991516 s
Acceptance rate: 0.2680066666666667 with last two states of chain: @[@[2.116498657020219e-22, 0.00157812420011463, -0.001191578637594618, -0.03903883316617535, 0.001184257609417868], @[2.116498657020219e-22, 0.00157812420011463, -0.001191578637594618, -0.03903883316617535, 0.001184257609417868]]
Initial chain state: @[2.436881668011995e-21, 0.3695082702072843, 0.04051624101632562, -0.458195482427621, -0.07043128904485663]
Building chain of 150000 elements took 142.4829633235931 s
What else:
- [ ] Run with LnL with all vetoes and see what we get:
  -> Need to regenerate likelihood files to work with them in limit code due to veto config missing in old files.
1.64.1. Regenerate likelihood output files
We'll only generate a single case for now:
./createAllLikelihoodCombinations \
    --f2017 ~/CastData/data/DataRuns2017_Reco.h5 \
    --f2018 ~/CastData/data/DataRuns2018_Reco.h5 \
    --c2017 ~/CastData/data/CalibrationRuns2017_Reco.h5 \
    --c2018 ~/CastData/data/CalibrationRuns2018_Reco.h5 \
    --regions crAll \
    --vetoSets "{+fkLogL, +fkFadc, +fkScinti, +fkSeptem, fkLineVeto}" \
    --fadcVetoPercentile 0.99 \
    --signalEfficiency 0.8 \
    --out ~/org/resources/lhood_lnL_04_07_23/ \
    --cdlFile ~/CastData/data/CDL_2019/calibration-cdl-2018.h5 \
    --multiprocessing \
    --jobs 12
After an extremely painful amount of work to get likelihood compiled again to fix a few small issues (i.e. depending on the cacheTab files even when using no MLP), the files are finally there.
BUUUUUUUUUUUUUUUT I ran with the default clustering algorithm instead of dbscan… Rerunning again using dbscan.
This brings up the question of the septem veto's efficiency in the case of the MLP though…
Renamed the files to have a _default_cluster suffix.
Let's plot the cluster centers for the default case files:
plotBackgroundClusters \
    ~/org/resources/lhood_lnL_04_07_23/lhood_c18_R2_crAll_sEff_0.8_lnL_scinti_fadc_septem_line_vQ_0.99_default_cluster.h5 \
    --zMax 30 \
    --title "LnL+FADC+Scinti+Septem+Line default cluster algo" \
    --outpath ~/Sync/lnL_04_07_23/ \
    --suffix "04_07_23_lnL_scinti_fadc_septem_line_default_cluster_R2" \
    --energyMax 12.0 --energyMin 0.0 --filterNoisyPixels

plotBackgroundClusters \
    ~/org/resources/lhood_lnL_04_07_23/lhood_c18_R3_crAll_sEff_0.8_lnL_scinti_fadc_septem_line_vQ_0.99_default_cluster.h5 \
    --zMax 30 \
    --title "LnL+FADC+Scinti+Septem+Line default cluster algo" \
    --outpath ~/Sync/lnL_04_07_23/ \
    --suffix "04_07_23_lnL_scinti_fadc_septem_line_default_cluster_R3" \
    --energyMax 12.0 --energyMin 0.0 --filterNoisyPixels
yield
- R2 -> 6420 clusters
- R3 -> 3471 clusters
So 9891 clusters instead of the roughly 8900 we expect for DBSCAN. Let's check though if we reproduce those.
Finished the dbscan run, let's look at clusters:
plotBackgroundClusters \
    ~/org/resources/lhood_lnL_04_07_23/lhood_c18_R2_crAll_sEff_0.8_lnL_scinti_fadc_septem_line_vQ_0.99.h5 \
    --zMax 30 \
    --title "LnL+FADC+Scinti+Septem+Line default cluster algo" \
    --outpath ~/Sync/lnL_04_07_23/ \
    --suffix "04_07_23_lnL_scinti_fadc_septem_line_dbscan_R2" \
    --energyMax 12.0 --energyMin 0.0 --filterNoisyPixels

plotBackgroundClusters \
    ~/org/resources/lhood_lnL_04_07_23/lhood_c18_R3_crAll_sEff_0.8_lnL_scinti_fadc_septem_line_vQ_0.99.h5 \
    --zMax 30 \
    --title "LnL+FADC+Scinti+Septem+Line default cluster algo" \
    --outpath ~/Sync/lnL_04_07_23/ \
    --suffix "04_07_23_lnL_scinti_fadc_septem_line_dbscan_R3" \
    --energyMax 12.0 --energyMin 0.0 --filterNoisyPixels
yield
- R2 -> 6242 clusters
- R3 -> 3388 clusters
-> Less, 9630, still more than initially assumed. Maybe due to the binning changes of the histograms going into the LnL method? Anyhow.
In the meantime let's check how slow mcmc_limit is with the default cluster files:
mcmc_limit_calculation \
    limit \
    -f ~/org/resources/lhood_lnL_04_07_23/lhood_c18_R2_crAll_sEff_0.8_lnL_scinti_fadc_septem_line_vQ_0.99_default_cluster.h5 \
    -f ~/org/resources/lhood_lnL_04_07_23/lhood_c18_R3_crAll_sEff_0.8_lnL_scinti_fadc_septem_line_vQ_0.99_default_cluster.h5 \
    --years 2017 --years 2018 \
    --σ_p 0.05 --σ_sig 0.04244936953654317 \
    --limitKind lkMCMC \
    --nmc 1000 \
    --suffix=_lnL_scinti_fadc_septem_line_axion_image_1487.9_0.989AU_default_cluster \
    --path "" \
    --outpath /home/basti/org/resources/lhood_limits_03_07_23/ \
    --energyMin 0.2 --energyMax 12.0
results in times like:
Building chain of 150000 elements took 59.16278481483459 s
Acceptance rate: 0.3301133333333333 with last two states of chain: @[@[3.221218754153123e-21, 0.1001582746475284, -0.0006750750804661032, -0.01909225821269168, -0.01271736094489837], @[2.557915589131322e-21, 0.09303695310023562, 0.00136396883333453, -0.00593985734664134, -0.001857904842659075]]
Initial chain state: @[2.568718767500517e-21, -0.3503092502559793, 0.02507620318499509, -0.3212106439381629, -0.1823391110232517]
Building chain of 150000 elements took 59.59976840019226 s
Acceptance rate: 0.27792 with last two states of chain: @[@[2.43150416951269e-22, 0.04679422926871175, 0.003060880706465222, 0.005988965372613596, -0.1462321981756096], @[2.43150416951269e-22, 0.04679422926871175, 0.003060880706465222, 0.005988965372613596, -0.1462321981756096]]
Building chain of 150000 elements took 59.39858341217041 s
Acceptance rate: 0.2637866666666667 with last two states of chain: @[@[4.009161808514372e-21, 0.02699119698256826, 0.0008468364946590864, 0.00313360442843261, 0.03944583054445015], @[4.009161808514372e-21, 0.02699119698256826, 0.0008468364946590864, 0.00313360442843261, 0.03944583054445015]]
Initial chain state: @[1.551956111345227e-21, -0.02085127777101975, 0.2274015842900468, -0.3652020071376869, 0.06496986846631414]
Initial chain state: @[1.404278364950456e-22, 0.1851804887591793, -0.23513445609526, -0.4396648010325593, -0.328970832476948]
Building chain of 150000 elements took 59.805743932724 s
Acceptance rate: 0.2787866666666667 with last two states of chain: @[@[2.818591563704671e-21, 0.01977281003326564, -0.001144346574617646, 0.06980766970784988, -0.05324435377403436], @[2.818591563704671e-21, 0.01977281003326564, -0.001144346574617646, 0.06980766970784988, -0.05324435377403436]]
Initial chain state: @[3.436712853810167e-21, 0.3576059646653303, -0.3810145277979216, -0.01900304799919095, -0.3084630290908293]
Building chain of 150000 elements took 60.12974739074707 s
Acceptance rate: 0.2755866666666666 with last two states of chain: @[@[1.139005379041843e-22, -0.02549423683147078, -0.0004239605850902325, -0.008100179554892915, 0.07260243062580041], @[1.139005379041843e-22, -0.02549423683147078, -0.0004239605850902325, -0.008100179554892915, 0.07260243062580041]]
Initial chain state: @[3.891438297177983e-21, 0.1823988603616008, 0.06236190504128392, 0.2538882767591366, 0.3203063266792117]
Building chain of 150000 elements took 61.13999462127686 s
so something is CLEARLY still amiss!
What's going on? :(
mcmc_limit_calculation \
    limit \
    -f ~/org/resources/lhood_lnL_04_07_23/lhood_c18_R2_crAll_sEff_0.8_lnL_scinti_fadc_septem_line_vQ_0.99.h5 \
    -f ~/org/resources/lhood_lnL_04_07_23/lhood_c18_R3_crAll_sEff_0.8_lnL_scinti_fadc_septem_line_vQ_0.99.h5 \
    --years 2017 --years 2018 \
    --σ_p 0.05 --σ_sig 0.04244936953654317 \
    --limitKind lkMCMC \
    --nmc 1000 \
    --suffix=_lnL_scinti_fadc_septem_line_axion_image_1487.9_0.989AU_dbscan \
    --path "" \
    --outpath /home/basti/org/resources/lhood_limits_03_07_23/ \
    --energyMin 0.2 --energyMax 12.0
Building chain of 150000 elements took 41.53342080116272 s
Acceptance rate: 0.3049866666666667 with last two states of chain: @[@[7.036224223613161e-21, 0.01614397518543687, 8.574973310970443e-05, 0.0872665054520058, 0.0804204777465774], @[7.036224223613161e-21, 0.01614397518543687, 8.574973310970443e-05, 0.0872665054520058, 0.0804204777465774]]
Initial chain state: @[3.59043282379205e-21, 0.2925015742273372, -0.3931338424871418, 0.4063058330665388, 0.4222861762129114]
Building chain of 150000 elements took 44.86023664474487 s
Acceptance rate: 0.3131333333333333 with last two states of chain: @[@[4.971679231139565e-21, -0.03200714694510957, 0.000626967579541237, 0.06151432017642863, -0.07064431496540197], @[4.971679231139565e-21, -0.03200714694510957, 0.000626967579541237, 0.06151432017642863, -0.07064431496540197]]
Initial chain state: @[4.515550746128827e-21, -0.09548612273662183, 0.2106540833085406, -0.1093334950239145, 0.3220710095688022]
Building chain of 150000 elements took 53.00375294685364 s
Acceptance rate: 0.4512 with last two states of chain: @[@[3.749539171666764e-21, -0.03673449807793086, -0.001626297352381822, 0.00590080323259861, 0.07538790528734959], @[3.749539171666764e-21, -0.03673449807793086, -0.001626297352381822, 0.00590080323259861, 0.07538790528734959]]
Initial chain state: @[3.574065128929081e-21, 0.0482956327541732, -0.1815499308190825, -0.1561039982914719, -0.4663740396633153]
Building chain of 150000 elements took 57.14369440078735 s
which is also clearly slower. This is using the OLD AXION IMAGE and old differential flux.
1.64.2. DONE Found the "bug" root cause
UPDATE: The binary was not compiled with -d:danger! I want to cry.
After compiling correctly we get numbers like:
Building chain of 150000 elements took 2.175987958908081 s
Acceptance rate: 0.3000066666666666 with last two states of chain: @[@[1.180885247697067e-21, 0.08481352589490687, 0.001453176163411386, -0.02849094252852952, -0.07246502908793442], @[1.180885247697067e-21, 0.08481352589490687, 0.001453176163411386, -0.02849094252852952, -0.07246502908793442]]
Initial chain state: @[3.135081713854781e-21, -0.3830730177637535, 0.1014735248650233, -0.3582165398036626, -0.07658294956061662]
Building chain of 150000 elements took 1.937663078308105 s
which is what I like to see.
This was for:
mcmc_limit_calculation \
    limit \
    -f ~/org/resources/lhood_lnL_04_07_23/lhood_c18_R2_crAll_sEff_0.8_lnL_scinti_fadc_septem_line_vQ_0.99.h5 \
    -f ~/org/resources/lhood_lnL_04_07_23/lhood_c18_R3_crAll_sEff_0.8_lnL_scinti_fadc_septem_line_vQ_0.99.h5 \
    --years 2017 --years 2018 \
    --σ_p 0.05 --σ_sig 0.04244936953654317 \
    --limitKind lkMCMC \
    --nmc 1000 \
    --suffix=_lnL_scinti_fadc_septem_line_axion_image_dbscan_old_defaults \
    --path "" \
    --outpath /home/basti/org/resources/lhood_limits_03_07_23/ \
    --energyMin 0.2 --energyMax 12.0
which means:
- old systematics
- old axion image
- old axion flux
- lnL80+fadc+scinti+septem+line
and yielded:
Expected limit: 7.474765424923508e-21
A limit of: 8.64567257356e-23
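The two numbers are consistent under the convention that the printed "Expected limit" is in units of the coupling squared at the reference g_aγ = 1e-12 GeV⁻¹, so the quoted limit on g_ae·g_aγ is its square root times 1e-12. This is my reading inferred from the number pairs in this section, not a statement from the mcmc_limit source:

```python
import math

# Convert mcmc_limit's printed "Expected limit" into the quoted coupling
# limit: sqrt(expected) * g_ag, with the reference g_ag = 1e-12 GeV^-1.
# (Inferred from the number pairs in this journal section.)
def coupling_limit(expected: float, g_ag: float = 1e-12) -> float:
    return math.sqrt(expected) * g_ag

# Check against the three pairs quoted in this section:
pairs = [
    (7.474765424923508e-21, 8.64567257356e-23),  # lnL80, old inputs
    (7.336461324602653e-21, 8.56531454449e-23),  # lnL80, new inputs
    (5.592200029700092e-21, 7.47810138317e-23),  # MLP@95
]
for expected, quoted in pairs:
    assert abs(coupling_limit(expected) - quoted) / quoted < 1e-8
    print(f"{expected:.6e} -> {coupling_limit(expected):.6e}")
```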
The corresponding number from the bigger table in statusAndProgress:
0.8 | true | true | 0.98 | true | true | 0 | 0.7841 | 0.8602 | 0.7325 | 0.5743 | 4.9249e-21 | 8.7699e-23 |
So that seems at least more or less in line with expectations. The improvement may be from our accidental energy cut?
Let's now recompile with the correct axion image and run with correct systematics and flux.
mcmc_limit_calculation \
    limit \
    -f ~/org/resources/lhood_lnL_04_07_23/lhood_c18_R2_crAll_sEff_0.8_lnL_scinti_fadc_septem_line_vQ_0.99.h5 \
    -f ~/org/resources/lhood_lnL_04_07_23/lhood_c18_R3_crAll_sEff_0.8_lnL_scinti_fadc_septem_line_vQ_0.99.h5 \
    --years 2017 --years 2018 \
    --σ_p 0.05 \
    --limitKind lkMCMC \
    --nmc 1000 \
    --suffix=_lnL_scinti_fadc_septem_line_axion_1487.93_0989AU_new_syst_dbscan \
    --path "" \
    --outpath /home/basti/org/resources/lhood_limits_03_07_23/ \
    --energyMin 0.2 --energyMax 12.0 \
    --axionModel /home/basti/org/resources/differential_flux_sun_earth_distance/solar_axion_flux_differential_g_ae_1e-13_g_ag_1e-12_g_aN_1e-15_0.989AU.csv
This yields:
Expected limit: 7.336461324602653e-21
which comes out to: 8.56531454449e-23 for the limit. That's a decent improvement for not actually changing anything fundamentally!
So time to run it on the MLP:
mcmc_limit_calculation \
    limit \
    -f /home/basti/org/resources/lhood_limits_10_05_23_mlp_sEff_0.99/lhood_c18_R2_crAll_sEff_0.95_scinti_fadc_line_mlp_mlp_tanh300_msecheckpoint_epoch_485000_loss_0.0055_acc_0.9933_vQ_0.99.h5 \
    -f /home/basti/org/resources/lhood_limits_10_05_23_mlp_sEff_0.99/lhood_c18_R3_crAll_sEff_0.95_scinti_fadc_line_mlp_mlp_tanh300_msecheckpoint_epoch_485000_loss_0.0055_acc_0.9933_vQ_0.99.h5 \
    --years 2017 --years 2018 \
    --σ_p 0.05 \
    --limitKind lkMCMC \
    --nmc 1000 \
    --suffix=_sEff_0.95_scinti_fadc_line_mlp_tanh300_mse_epoch_485000_loss_0.0055_acc_0.9933_axion_image_1487.9_0.989AU \
    --path "" \
    --outpath /home/basti/org/resources/lhood_limits_03_07_23/ \
    --energyMin 0.2 --energyMax 12.0 \
    --axionModel /home/basti/org/resources/differential_flux_sun_earth_distance/solar_axion_flux_differential_g_ae_1e-13_g_ag_1e-12_g_aN_1e-15_0.989AU.csv
Expected limit: 5.592200029700092e-21
Yields: 7.47810138317e-23 !!
That's a pretty good number! The last number for this setup was 7.74e-23 (see big table in statusAndProgress).
1.64.3. Starting all limit calculations
We can now start the limit calculations again for all the settings of LnL & MLP of interest. Essentially the best of the previous limit calculation table.
Let's check. We have the MLP files in:
However, for the LnL approach we are still lacking a large number of HDF5 files that have the "correct" NN support, i.e. have the veto settings a part of the HDF5 files for easier reading etc.
So let's first regenerate all the likelihood combinations that we actually care about and then run the limits after.
Let's first consider the setups we actually want to reproduce and then think about the correct calls.
The top part of the expected limits result table in sec. ./Doc/StatusAndProgress.html of statusAndProgress is:
εlnL | Scinti | FADC | εFADC | Septem | Line | eccLineCut | εSeptem | εLine | εSeptemLine | Total eff. | Limit no signal | Expected Limit |
---|---|---|---|---|---|---|---|---|---|---|---|---|
0.9 | true | true | 0.98 | false | true | 0 | 0.7841 | 0.8602 | 0.7325 | 0.7587 | 3.7853e-21 | 7.9443e-23 |
0.9 | true | false | 0.98 | false | true | 0 | 0.7841 | 0.8602 | 0.7325 | 0.7742 | 3.6886e-21 | 8.0335e-23 |
0.9 | true | true | 0.98 | false | true | 0 | 0.7841 | 0.8794 | 0.7415 | 0.7757 | 3.6079e-21 | 8.1694e-23 |
0.8 | true | true | 0.98 | false | true | 0 | 0.7841 | 0.8602 | 0.7325 | 0.6744 | 4.0556e-21 | 8.1916e-23 |
0.9 | true | true | 0.98 | false | true | 0 | 0.7841 | 0.8946 | 0.7482 | 0.7891 | 3.5829e-21 | 8.3198e-23 |
0.8 | true | true | 0.98 | false | true | 0 | 0.7841 | 0.8794 | 0.7415 | 0.6895 | 3.9764e-21 | 8.3545e-23 |
0.8 | true | true | 0.9 | false | true | 0 | 0.7841 | 0.8602 | 0.7325 | 0.6193 | 4.4551e-21 | 8.4936e-23 |
0.9 | true | true | 0.98 | false | true | 0 | 0.7841 | 0.9076 | 0.754 | 0.8005 | 3.6208e-21 | 8.5169e-23 |
0.8 | true | true | 0.98 | false | true | 0 | 0.7841 | 0.8946 | 0.7482 | 0.7014 | 3.9491e-21 | 8.6022e-23 |
0.8 | true | true | 0.98 | false | true | 0 | 0.7841 | 0.9076 | 0.754 | 0.7115 | 3.9686e-21 | 8.6462e-23 |
0.9 | true | false | 0.98 | true | true | 0 | 0.7841 | 0.8602 | 0.7325 | 0.6593 | 4.2012e-21 | 8.6684e-23 |
0.7 | true | true | 0.98 | false | true | 0 | 0.7841 | 0.8602 | 0.7325 | 0.5901 | 4.7365e-21 | 8.67e-23 |
NOTE: These have different rows for different ε line veto cutoffs, but the table does not highlight that fact! 0.8602 corresponds to ε = 1.0, i.e. the cutoff disabled. As a result we will only consider the rows with the cutoff disabled.
which tells us the following:
- FADC either at 99% or off
- scinti on always
- line veto always
- ε line veto cutoff disabled is best
- lnL efficiency 0.7 only without the septem veto
So effectively we just want the septem & line veto combinations with 0.7, 0.8, and 0.9 software efficiency.
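For bookkeeping, the implied file set can be enumerated. A purely illustrative sketch using the naming pattern visible in the --dryRun output further down (not part of createAllLikelihoodCombinations itself):

```python
from itertools import product

# Enumerate the likelihood output files we expect from the run below:
# 3 signal efficiencies x 2 veto sets (septem+line vs. line only) x
# 2 run periods (R2 = 2017, R3 = 2018) = 12 files. Naming follows the
# --dryRun output of createAllLikelihoodCombinations.
effs = [0.7, 0.8, 0.9]
vetoes = ["scinti_fadc_septem_line", "scinti_fadc_line"]
runs = ["R2", "R3"]

names = [
    f"lhood_c18_{r}_crAll_sEff_{e}_lnL_{v}_vQ_0.99.h5"
    for r, e, v in product(runs, effs, vetoes)
]
print(len(names))  # 12
```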
Also make sure the config.toml file contains the DBSCAN algo for the likelihood method!
./createAllLikelihoodCombinations \
    --f2017 ~/CastData/data/DataRuns2017_Reco.h5 \
    --f2018 ~/CastData/data/DataRuns2018_Reco.h5 \
    --c2017 ~/CastData/data/CalibrationRuns2017_Reco.h5 \
    --c2018 ~/CastData/data/CalibrationRuns2018_Reco.h5 \
    --regions crAll \
    --vetoSets "{+fkLogL, +fkFadc, +fkScinti, +fkSeptem, fkLineVeto, fkExclusiveLineVeto}" \
    --fadcVetoPercentile 0.99 \
    --signalEfficiency 0.7 --signalEfficiency 0.8 --signalEfficiency 0.9 \
    --out ~/org/resources/lhood_lnL_04_07_23/ \
    --cdlFile ~/CastData/data/CDL_2019/calibration-cdl-2018.h5 \
    --multiprocessing \
    --jobs 12
This should produce all the combinations we really care about.
--dryRun yields:
Command: (fname: "/home/basti/CastData/data/DataRuns2017_Reco.h5", calib: "/home/basti/CastData/data/CalibrationRuns2017_Reco.h5", mlpPath: "", settings: (year: 2017, region: crAll, signalEff: 0.7, eccentricityCutoff: 1.0, vetoes: {fkLogL, fkScinti, fkFadc, fkSeptem, fkLineVeto}, vetoPercentile: 0.99))
As filename: /home/basti/org/resources/lhood_lnL_04_07_23//lhood_c18_R2_crAll_sEff_0.7_lnL_scinti_fadc_septem_line_vQ_0.99.h5
Command: (fname: "/home/basti/CastData/data/DataRuns2017_Reco.h5", calib: "/home/basti/CastData/data/CalibrationRuns2017_Reco.h5", mlpPath: "", settings: (year: 2017, region: crAll, signalEff: 0.7, eccentricityCutoff: 1.0, vetoes: {fkLogL, fkScinti, fkFadc, fkExclusiveLineVeto}, vetoPercentile: 0.99))
As filename: /home/basti/org/resources/lhood_lnL_04_07_23//lhood_c18_R2_crAll_sEff_0.7_lnL_scinti_fadc_line_vQ_0.99.h5
Command: (fname: "/home/basti/CastData/data/DataRuns2017_Reco.h5", calib: "/home/basti/CastData/data/CalibrationRuns2017_Reco.h5", mlpPath: "", settings: (year: 2017, region: crAll, signalEff: 0.8, eccentricityCutoff: 1.0, vetoes: {fkLogL, fkScinti, fkFadc, fkSeptem, fkLineVeto}, vetoPercentile: 0.99))
As filename: /home/basti/org/resources/lhood_lnL_04_07_23//lhood_c18_R2_crAll_sEff_0.8_lnL_scinti_fadc_septem_line_vQ_0.99.h5
Command: (fname: "/home/basti/CastData/data/DataRuns2017_Reco.h5", calib: "/home/basti/CastData/data/CalibrationRuns2017_Reco.h5", mlpPath: "", settings: (year: 2017, region: crAll, signalEff: 0.8, eccentricityCutoff: 1.0, vetoes: {fkLogL, fkScinti, fkFadc, fkExclusiveLineVeto}, vetoPercentile: 0.99))
As filename: /home/basti/org/resources/lhood_lnL_04_07_23//lhood_c18_R2_crAll_sEff_0.8_lnL_scinti_fadc_line_vQ_0.99.h5
Command: (fname: "/home/basti/CastData/data/DataRuns2017_Reco.h5", calib: "/home/basti/CastData/data/CalibrationRuns2017_Reco.h5", mlpPath: "", settings: (year: 2017, region: crAll, signalEff: 0.9, eccentricityCutoff: 1.0, vetoes: {fkLogL, fkScinti, fkFadc, fkSeptem, fkLineVeto}, vetoPercentile: 0.99))
As filename: /home/basti/org/resources/lhood_lnL_04_07_23//lhood_c18_R2_crAll_sEff_0.9_lnL_scinti_fadc_septem_line_vQ_0.99.h5
Command: (fname: "/home/basti/CastData/data/DataRuns2017_Reco.h5", calib: "/home/basti/CastData/data/CalibrationRuns2017_Reco.h5", mlpPath: "", settings: (year: 2017, region: crAll, signalEff: 0.9, eccentricityCutoff: 1.0, vetoes: {fkLogL, fkScinti, fkFadc, fkExclusiveLineVeto}, vetoPercentile: 0.99))
As filename: /home/basti/org/resources/lhood_lnL_04_07_23//lhood_c18_R2_crAll_sEff_0.9_lnL_scinti_fadc_line_vQ_0.99.h5
Command: (fname: "/home/basti/CastData/data/DataRuns2018_Reco.h5", calib: "/home/basti/CastData/data/CalibrationRuns2018_Reco.h5", mlpPath: "", settings: (year: 2018, region: crAll, signalEff: 0.7, eccentricityCutoff: 1.0, vetoes: {fkLogL, fkScinti, fkFadc, fkSeptem, fkLineVeto}, vetoPercentile: 0.99))
As filename: /home/basti/org/resources/lhood_lnL_04_07_23//lhood_c18_R3_crAll_sEff_0.7_lnL_scinti_fadc_septem_line_vQ_0.99.h5
Command: (fname: "/home/basti/CastData/data/DataRuns2018_Reco.h5", calib: "/home/basti/CastData/data/CalibrationRuns2018_Reco.h5", mlpPath: "", settings: (year: 2018, region: crAll, signalEff: 0.7, eccentricityCutoff: 1.0, vetoes: {fkLogL, fkScinti, fkFadc, fkExclusiveLineVeto}, vetoPercentile: 0.99))
As filename: /home/basti/org/resources/lhood_lnL_04_07_23//lhood_c18_R3_crAll_sEff_0.7_lnL_scinti_fadc_line_vQ_0.99.h5
Command: (fname: "/home/basti/CastData/data/DataRuns2018_Reco.h5", calib: "/home/basti/CastData/data/CalibrationRuns2018_Reco.h5", mlpPath: "", settings: (year: 2018, region: crAll, signalEff: 0.8, eccentricityCutoff: 1.0, vetoes: {fkLogL, fkScinti, fkFadc, fkSeptem, fkLineVeto}, vetoPercentile: 0.99))
As filename: /home/basti/org/resources/lhood_lnL_04_07_23//lhood_c18_R3_crAll_sEff_0.8_lnL_scinti_fadc_septem_line_vQ_0.99.h5
Command: (fname: "/home/basti/CastData/data/DataRuns2018_Reco.h5", calib: "/home/basti/CastData/data/CalibrationRuns2018_Reco.h5", mlpPath: "", settings: (year: 2018, region: crAll, signalEff: 0.8, eccentricityCutoff: 1.0, vetoes: {fkLogL, fkScinti, fkFadc, fkExclusiveLineVeto}, vetoPercentile: 0.99))
As filename: /home/basti/org/resources/lhood_lnL_04_07_23//lhood_c18_R3_crAll_sEff_0.8_lnL_scinti_fadc_line_vQ_0.99.h5
Command: (fname: "/home/basti/CastData/data/DataRuns2018_Reco.h5", calib: "/home/basti/CastData/data/CalibrationRuns2018_Reco.h5", mlpPath: "", settings: (year: 2018, region: crAll, signalEff: 0.9, eccentricityCutoff: 1.0, vetoes: {fkLogL, fkScinti, fkFadc, fkSeptem, fkLineVeto}, vetoPercentile: 0.99))
As filename: /home/basti/org/resources/lhood_lnL_04_07_23//lhood_c18_R3_crAll_sEff_0.9_lnL_scinti_fadc_septem_line_vQ_0.99.h5
Command: (fname: "/home/basti/CastData/data/DataRuns2018_Reco.h5", calib: "/home/basti/CastData/data/CalibrationRuns2018_Reco.h5", mlpPath: "", settings: (year: 2018, region: crAll, signalEff: 0.9, eccentricityCutoff: 1.0, vetoes: {fkLogL, fkScinti, fkFadc, fkExclusiveLineVeto}, vetoPercentile: 0.99))
As filename: /home/basti/org/resources/lhood_lnL_04_07_23//lhood_c18_R3_crAll_sEff_0.9_lnL_scinti_fadc_line_vQ_0.99.h5
which looks fine.
It finished with:
Running all likelihood combinations took 1571.099282264709 s
Finally, let's run the expected limits for the full directory:
./runLimits \
  --path ~/org/resources/lhood_lnL_04_07_23/ \
  --outpath ~/org/resources/lhood_lnL_04_07_23/limits \
  --prefix lhood_c18_R2_crAll \
  --energyMin 0.0 --energyMax 12.0 \
  --axionModel /home/basti/org/resources/differential_flux_sun_earth_distance/solar_axion_flux_differential_g_ae_1e-13_g_ag_1e-12_g_aN_1e-15_0.989AU.csv \
  --nmc 1000
1.65.
Last night's run of runLimits crashed because we compiled with seqmath from the latest tag instead of the version that uses stop as the final value in linspace if endpoint is true. Rerunning now.
Output from the end:
shell> Expected limit: 6.936205119829989e-21
shell> 40980
shell> Generating group /ctx/axionModel
shell> Serializing Interpolator by evaluating 0.001 to 15.0 of name: axionSpl
shell> Serializing Interpolator by evaluating 0.0 to 10.0 of name: efficiencySpl
shell> Serializing Interpolator by evaluating 0.0 to 12.0 of name: backgroundSpl
shell> Generating group /ctx/backgroundDf
shell> Wrote outfile /home/basti/org/resources/lhood_lnL_04_07_23/limits/mc_limit_lkMCMC_skInterpBackground_nmc_1000_uncertainty_ukUncertain_σs_0.0323_σb_0.0028_posUncertain_puUncertain_σp_0.0500_sEff_0.9_lnL_scinti_fadc_septem_line_vQ_0.99.h5
Computing single limit took 568.60129737854 s
Computing all limits took 3136.202635526657 s
Looking good!
Time to run the MLP limits. To avoid the limits taking forever to run, we will exclude the MLP-only limits from ./resources/lhood_limits_10_05_23_mlp_sEff_0.99 and instead only run the combinations that include at least the line veto.
Unfortunately, the input files have the efficiency before the used vetoes in their names, so we cannot select them by a fixed prefix. Can we update runLimits to allow a standard glob?
-> See below: glob already works for the "prefix"!
./runLimits \
  --path ~/org/resources/lhood_limits_10_05_23_mlp_sEff_0.99/ \
  --prefix "lhood_c18_R2_crAll_*_scinti_fadc_" \
  --outpath ~/org/resources/lhood_MLP_05_07_23/limits \
  --energyMin 0.0 --energyMax 12.0 \
  --axionModel /home/basti/org/resources/differential_flux_sun_earth_distance/solar_axion_flux_differential_g_ae_1e-13_g_ag_1e-12_g_aN_1e-15_0.989AU.csv \
  --nmc 1000
A --dryRun yields:
Limit calculation will be performed for the following files:
/home/basti/org/resources/lhood_limits_10_05_23_mlp_sEff_0.99/lhood_c18_R2_crAll_sEff_0.85_scinti_fadc_line_mlp_mlp_tanh300_msecheckpoint_epoch_485000_loss_0.0055_acc_0.9933_vQ_0.99.h5
/home/basti/org/resources/lhood_limits_10_05_23_mlp_sEff_0.99/lhood_c18_R2_crAll_sEff_0.85_scinti_fadc_septem_line_mlp_mlp_tanh300_msecheckpoint_epoch_485000_loss_0.0055_acc_0.9933_vQ_0.99.h5
/home/basti/org/resources/lhood_limits_10_05_23_mlp_sEff_0.99/lhood_c18_R2_crAll_sEff_0.85_scinti_fadc_septem_mlp_mlp_tanh300_msecheckpoint_epoch_485000_loss_0.0055_acc_0.9933_vQ_0.99.h5
/home/basti/org/resources/lhood_limits_10_05_23_mlp_sEff_0.99/lhood_c18_R2_crAll_sEff_0.8_scinti_fadc_line_mlp_mlp_tanh300_msecheckpoint_epoch_485000_loss_0.0055_acc_0.9933_vQ_0.99.h5
/home/basti/org/resources/lhood_limits_10_05_23_mlp_sEff_0.99/lhood_c18_R2_crAll_sEff_0.8_scinti_fadc_septem_line_mlp_mlp_tanh300_msecheckpoint_epoch_485000_loss_0.0055_acc_0.9933_vQ_0.99.h5
/home/basti/org/resources/lhood_limits_10_05_23_mlp_sEff_0.99/lhood_c18_R2_crAll_sEff_0.8_scinti_fadc_septem_mlp_mlp_tanh300_msecheckpoint_epoch_485000_loss_0.0055_acc_0.9933_vQ_0.99.h5
/home/basti/org/resources/lhood_limits_10_05_23_mlp_sEff_0.99/lhood_c18_R2_crAll_sEff_0.95_scinti_fadc_line_mlp_mlp_tanh300_msecheckpoint_epoch_485000_loss_0.0055_acc_0.9933_vQ_0.99.h5
/home/basti/org/resources/lhood_limits_10_05_23_mlp_sEff_0.99/lhood_c18_R2_crAll_sEff_0.95_scinti_fadc_septem_line_mlp_mlp_tanh300_msecheckpoint_epoch_485000_loss_0.0055_acc_0.9933_vQ_0.99.h5
/home/basti/org/resources/lhood_limits_10_05_23_mlp_sEff_0.99/lhood_c18_R2_crAll_sEff_0.95_scinti_fadc_septem_mlp_mlp_tanh300_msecheckpoint_epoch_485000_loss_0.0055_acc_0.9933_vQ_0.99.h5
/home/basti/org/resources/lhood_limits_10_05_23_mlp_sEff_0.99/lhood_c18_R2_crAll_sEff_0.99_scinti_fadc_line_mlp_mlp_tanh300_msecheckpoint_epoch_485000_loss_0.0055_acc_0.9933_vQ_0.99.h5
/home/basti/org/resources/lhood_limits_10_05_23_mlp_sEff_0.99/lhood_c18_R2_crAll_sEff_0.99_scinti_fadc_septem_line_mlp_mlp_tanh300_msecheckpoint_epoch_485000_loss_0.0055_acc_0.9933_vQ_0.99.h5
/home/basti/org/resources/lhood_limits_10_05_23_mlp_sEff_0.99/lhood_c18_R2_crAll_sEff_0.99_scinti_fadc_septem_mlp_mlp_tanh300_msecheckpoint_epoch_485000_loss_0.0055_acc_0.9933_vQ_0.99.h5
/home/basti/org/resources/lhood_limits_10_05_23_mlp_sEff_0.99/lhood_c18_R2_crAll_sEff_0.9_scinti_fadc_line_mlp_mlp_tanh300_msecheckpoint_epoch_485000_loss_0.0055_acc_0.9933_vQ_0.99.h5
/home/basti/org/resources/lhood_limits_10_05_23_mlp_sEff_0.99/lhood_c18_R2_crAll_sEff_0.9_scinti_fadc_septem_line_mlp_mlp_tanh300_msecheckpoint_epoch_485000_loss_0.0055_acc_0.9933_vQ_0.99.h5
/home/basti/org/resources/lhood_limits_10_05_23_mlp_sEff_0.99/lhood_c18_R2_crAll_sEff_0.9_scinti_fadc_septem_mlp_mlp_tanh300_msecheckpoint_epoch_485000_loss_0.0055_acc_0.9933_vQ_0.99.h5
which looks good!
[X]
Check if we can extend runLimits to allow a glob to select files
-> AHH, I think because we use walkFiles that might already be supported! The main code is:

for file in walkFiles(path / prefix & "*.h5"):
  if file.extractFilename notin alreadyProcessed:
    echo file
  else:
    echo "Already processed: ", file
Let's test it quickly:
import os, strutils

const path = "/home/basti/org/resources/*_xray_*"
for file in walkFiles(path):
  echo file
Yup, works perfectly!
1.65.1. MLP limit output
The limits are done:

shell> Expected limit: 6.969709361359805e-21
shell> 160362
shell> Generating group /ctx/axionModel
shell> Serializing Interpolator by evaluating 0.001 to 15.0 of name: axionSpl
shell> Serializing Interpolator by evaluating 0.0 to 10.0 of name: efficiencySpl
shell> Serializing Interpolator by evaluating 0.0 to 12.0 of name: backgroundSpl
shell> Generating group /ctx/backgroundDf
shell> Wrote outfile /home/basti/org/resources/lhood_MLP_05_07_23/limits/mc_limit_lkMCMC_skInterpBackground_nmc_1000_uncertainty_ukUncertain_σs_0.0374_σb_0.0028_posUncertain_puUncertain_σp_0.0500lhood_c18_R2_crAll_sEff_0.9_scinti_fadc_septem_mlp_mlp_tanh300_msecheckpoint_epoch_485000_loss_0.0055_acc_0.9933_vQ_0.99.h5
Computing single limit took 2208.284529924393 s
Computing all limits took 27139.7917163372 s
1.66.
[X]
Generate the expected limit table!
-> UPDATED the path to the MLP files to indicate that energyMin was set to 0.0!

./generateExpectedLimitsTable \
  --path ~/org/resources/lhood_lnL_04_07_23/limits/ \
  --prefix mc_limit_lkMCMC_skInterpBackground_nmc_1000 \
  --path ~/org/resources/lhood_MLP_05_07_23_energyMin_0.0/limits/ \
  --prefix mc_limit_lkMCMC_skInterpBackground_nmc_1000_uncertainty
[X]
WHY is the limit 7.64e-23 instead of 7.46e-23?? That's what we got when we ran that case manually, no?
-> RERUN the case manually. Then rerun with runLimits and check!
UPDATE: I found the reason. I forgot to update the energyMin to 0.2 and left it at 0.0!
-> Moved the limits to ./resources/lhood_MLP_05_07_23_energyMin_0.0 from their original directory to make clear what we used!
1.67.
[X]
Implement an axionModel string field with the filename in the Context (or wherever) to have it in the output H5 files
[X]
Make the used axion image a CL parameter for the limit and store the used file in the output!
Before we start rerunning the limits again with the correct minimum energy, let's implement the two TODOs.
Both implemented, time to run the limits again, this time with correct minimum energy.
./runLimits \
  --path ~/org/resources/lhood_limits_10_05_23_mlp_sEff_0.99/ \
  --prefix "lhood_c18_R2_crAll_*_scinti_fadc_" \
  --outpath ~/org/resources/lhood_MLP_06_07_23/limits \
  --energyMin 0.2 --energyMax 12.0 \
  --axionModel /home/basti/org/resources/differential_flux_sun_earth_distance/solar_axion_flux_differential_g_ae_1e-13_g_ag_1e-12_g_aN_1e-15_0.989AU.csv \
  --nmc 1000
The code is running. Back to the limit talk for now.
The limits finished:

shell> Expected limit: 6.952194554128882e-21
shell> 103176
shell> Generating group /ctx/axionModel
shell> Serializing Interpolator by evaluating 0.001 to 15.0 of name: axionSpl
shell> Serializing Interpolator by evaluating 0.0 to 10.0 of name: efficiencySpl
shell> Serializing Interpolator by evaluating 0.2 to 12.0 of name: backgroundSpl
shell> Generating group /ctx/backgroundDf
shell> Wrote outfile /home/basti/org/resources/lhood_MLP_06_07_23/limits/mc_limit_lkMCMC_skInterpBackground_nmc_1000_uncertainty_ukUncertain_σs_0.0374_σb_0.0028_posUncertain_puUncertain_σp_0.0500lhood_c18_R2_crAll_sEff_0.9_scinti_fadc_septem_mlp_mlp_tanh300_msecheckpoint_epoch_485000_loss_0.0055_acc_0.9933_vQ_0.99.h5
Computing single limit took 1537.849467039108 s
Computing all limits took 17801.93424248695 s
Let's generate the expected result table again:
./generateExpectedLimitsTable \
  --path ~/org/resources/lhood_lnL_04_07_23/limits/ \
  --prefix mc_limit_lkMCMC_skInterpBackground_nmc_1000 \
  --path ~/org/resources/lhood_MLP_06_07_23/limits/ \
  --prefix mc_limit_lkMCMC_skInterpBackground_nmc_1000_uncertainty
1.67.1. Run the best case limit with more statistics
Let's run the best case expected limit with more statistics so that we can generate the plot of expected limits again with up to date data.
mcmc_limit_calculation \
  limit \
  -f /home/basti/org/resources/lhood_limits_10_05_23_mlp_sEff_0.99/lhood_c18_R2_crAll_sEff_0.95_scinti_fadc_line_mlp_mlp_tanh300_msecheckpoint_epoch_485000_loss_0.0055_acc_0.9933_vQ_0.99.h5 \
  -f /home/basti/org/resources/lhood_limits_10_05_23_mlp_sEff_0.99/lhood_c18_R3_crAll_sEff_0.95_scinti_fadc_line_mlp_mlp_tanh300_msecheckpoint_epoch_485000_loss_0.0055_acc_0.9933_vQ_0.99.h5 \
  --years 2017 --years 2018 \
  --σ_p 0.05 \
  --limitKind lkMCMC \
  --nmc 30000 \
  --suffix=_sEff_0.95_scinti_fadc_line_mlp_tanh300_mse_epoch_485000_loss_0.0055_acc_0.9933_axion_image_1487.9_0.989AU \
  --path "" \
  --outpath /home/basti/org/resources/lhood_MLP_06_07_23/ \
  --energyMin 0.2 --energyMax 12.0 \
  --axionModel /home/basti/org/resources/differential_flux_sun_earth_distance/solar_axion_flux_differential_g_ae_1e-13_g_ag_1e-12_g_aN_1e-15_0.989AU.csv
1.68.
[X]
Check the expected limits with 30k nmc! ->

Expected limit: 5.749270497358374e-21
85728
Generating group /ctx/axionModel
Serializing Interpolator by evaluating 0.001 to 15.0 of name: axionSpl
Serializing Interpolator by evaluating 0.0 to 10.0 of name: efficiencySpl
Serializing Interpolator by evaluating 0.2 to 12.0 of name: backgroundSpl
Generating group /ctx/backgroundDf
Wrote outfile /home/basti/org/resources/lhood_MLP_06_07_23/limits/mc_limit_lkMCMC_skInterpBackground_nmc_30000_uncertainty_ukUncertain_σs_0.0328_σb_0.0028_posUncertain_puUncertain_σp_0.0500_sEff_0.95_scinti_fadc_line_mlp_tanh300_mse_epoch_485000_loss_0.0055_acc_0.9933_axion_image_1487.9_0.989AU.h5
So indeed we do lose a bit more using more statistics! Unfortunate, but it is what it is I guess.
[ ]
Because of the "degradation" from 1000 to 30k toys in the 'best case' scenario, I should also rerun the next 2-3 options with more statistics to see whether those expected limits might actually improve and thus give better results. So let's rerun these other cases as well:

ε | Type | Scinti | FADC | εFADC | Septem | Line | eccLineCut | εSeptem | εLine | εSeptemLine | Total eff. | Limit no signal [GeV⁻¹] | Expected limit [GeV⁻¹] |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
0.8474 | MLP | true | true | 0.98 | false | true | 1 | 0.7841 | 0.8602 | 0.7325 | 0.7143 | 6.1381e-23 | 7.643e-23 |
0.9718 | MLP | true | true | 0.98 | false | true | 1 | 0.7841 | 0.8602 | 0.7325 | 0.8192 | 5.8374e-23 | 7.6619e-23 |
0.9 | LnL | true | true | 0.98 | false | true | 1 | 0.7841 | 0.8602 | 0.7325 | 0.7587 | 6.0434e-23 | 7.7375e-23 |
0.7926 | MLP | true | true | 0.98 | false | true | 1 | 0.7841 | 0.8602 | 0.7325 | 0.6681 | 6.2843e-23 | 7.8575e-23 |
0.7398 | MLP | true | true | 0.98 | false | true | 1 | 0.7841 | 0.8602 | 0.7325 | 0.6237 | 6.5704e-23 | 7.941e-23 |

i.e. the best case scenario for LnL and the other MLP cases without septem veto. I think for the MLP we can use runLimits to rerun all of them with more statistics. Given that the 30k run for 91% efficiency took at least several hours (not sure how many exactly, I forgot to time it), maybe 15k? Keep in mind that only the ~97% case should be slower. The following command:

./runLimits \
  --path ~/org/resources/lhood_limits_10_05_23_mlp_sEff_0.99/ \
  --prefix "lhood_c18_R2_crAll_*_scinti_fadc_line_" \
  --outpath ~/org/resources/lhood_MLP_06_07_23/limits \
  --energyMin 0.2 --energyMax 12.0 \
  --axionModel /home/basti/org/resources/differential_flux_sun_earth_distance/solar_axion_flux_differential_g_ae_1e-13_g_ag_1e-12_g_aN_1e-15_0.989AU.csv \
  --nmc 15000
matches all efficiencies (note the addition of _line to the prefix!). So in order to exclude running the MLP@95% case again, we'll add it to the processed.txt file in the output. Note that running the above with the same output directory (using --dryRun) currently correctly tells us that it wouldn't do anything, because the processed.txt file still contains all files:

Limit calculation will be performed for the following files:
Already processed: /home/basti/org/resources/lhood_limits_10_05_23_mlp_sEff_0.99/lhood_c18_R2_crAll_sEff_0.85_scinti_fadc_line_mlp_mlp_tanh300_msecheckpoint_epoch_485000_loss_0.0055_acc_0.9933_vQ_0.99.h5
Already processed: /home/basti/org/resources/lhood_limits_10_05_23_mlp_sEff_0.99/lhood_c18_R2_crAll_sEff_0.8_scinti_fadc_line_mlp_mlp_tanh300_msecheckpoint_epoch_485000_loss_0.0055_acc_0.9933_vQ_0.99.h5
Already processed: /home/basti/org/resources/lhood_limits_10_05_23_mlp_sEff_0.99/lhood_c18_R2_crAll_sEff_0.95_scinti_fadc_line_mlp_mlp_tanh300_msecheckpoint_epoch_485000_loss_0.0055_acc_0.9933_vQ_0.99.h5
Already processed: /home/basti/org/resources/lhood_limits_10_05_23_mlp_sEff_0.99/lhood_c18_R2_crAll_sEff_0.99_scinti_fadc_line_mlp_mlp_tanh300_msecheckpoint_epoch_485000_loss_0.0055_acc_0.9933_vQ_0.99.h5
Already processed: /home/basti/org/resources/lhood_limits_10_05_23_mlp_sEff_0.99/lhood_c18_R2_crAll_sEff_0.9_scinti_fadc_line_mlp_mlp_tanh300_msecheckpoint_epoch_485000_loss_0.0055_acc_0.9933_vQ_0.99.h5
-> i.e. it wouldn't do anything. So we remove all the listed files except the *_0.95_* file and rerun, which now yields on a --dryRun:

Limit calculation will be performed for the following files:
/home/basti/org/resources/lhood_limits_10_05_23_mlp_sEff_0.99/lhood_c18_R2_crAll_sEff_0.85_scinti_fadc_line_mlp_mlp_tanh300_msecheckpoint_epoch_485000_loss_0.0055_acc_0.9933_vQ_0.99.h5
/home/basti/org/resources/lhood_limits_10_05_23_mlp_sEff_0.99/lhood_c18_R2_crAll_sEff_0.8_scinti_fadc_line_mlp_mlp_tanh300_msecheckpoint_epoch_485000_loss_0.0055_acc_0.9933_vQ_0.99.h5
Already processed: /home/basti/org/resources/lhood_limits_10_05_23_mlp_sEff_0.99/lhood_c18_R2_crAll_sEff_0.95_scinti_fadc_line_mlp_mlp_tanh300_msecheckpoint_epoch_485000_loss_0.0055_acc_0.9933_vQ_0.99.h5
/home/basti/org/resources/lhood_limits_10_05_23_mlp_sEff_0.99/lhood_c18_R2_crAll_sEff_0.99_scinti_fadc_line_mlp_mlp_tanh300_msecheckpoint_epoch_485000_loss_0.0055_acc_0.9933_vQ_0.99.h5
/home/basti/org/resources/lhood_limits_10_05_23_mlp_sEff_0.99/lhood_c18_R2_crAll_sEff_0.9_scinti_fadc_line_mlp_mlp_tanh300_msecheckpoint_epoch_485000_loss_0.0055_acc_0.9933_vQ_0.99.h5
which looks correct! So let's run it (same command as above).
[ ]
Let's create the plot for the expected limits of the 30k toy samples. We'll use the old command as a reference (from voidRipper's .zsh_history):

: 1659807499:0;./mcmc_limit_testing limit --plotFile ~/org/resources/mc_limit_lkMCMC_skInterpBackground_nmc_100000_uncertainty_ukUncertain_σs_0.0469_σb_0.0028_posUncertain_puUncertain_σp_0.0500.csv --xLow 2.5e-21 --xHigh 1.5e-20 --limitKind lkMCMC --yHigh 3000 --bins 100 --linesTo 2000 --xLabel "Limit [g_ae² @ g_aγ = 1e-12 GeV⁻¹]" --yLabel "MC toy count" --nmc 100000

to construct the new command. NOTE: I added the option as_gae_gaγ to plot the histogram in the g_ae·g_aγ space!

mcmc_limit_calculation \
  limit \
  -f /home/basti/org/resources/lhood_limits_10_05_23_mlp_sEff_0.99/lhood_c18_R2_crAll_sEff_0.95_scinti_fadc_line_mlp_mlp_tanh300_msecheckpoint_epoch_485000_loss_0.0055_acc_0.9933_vQ_0.99.h5 \
  -f /home/basti/org/resources/lhood_limits_10_05_23_mlp_sEff_0.99/lhood_c18_R3_crAll_sEff_0.95_scinti_fadc_line_mlp_mlp_tanh300_msecheckpoint_epoch_485000_loss_0.0055_acc_0.9933_vQ_0.99.h5 \
  --path "" \
  --years 2017 --years 2018 \
  --σ_p 0.05 \
  --energyMin 0.2 --energyMax 12.0 \
  --plotFile "mc_limit_lkMCMC_skInterpBackground_nmc_30000_uncertainty_ukUncertain_σs_0.0328_σb_0.0028_posUncertain_puUncertain_σp_0.0500_sEff_0.95_scinti_fadc_line_mlp_tanh300_mse_epoch_485000_loss_0.0055_acc_0.9933_axion_image_1487.9_0.989AU.csv" \
  --xLow 2.5e-21 \
  --xHigh 1.5e-20 \
  --limitKind lkMCMC \
  --yHigh 600 \
  --bins 100 \
  --linesTo 400 \
  --as_gae_gaγ \
  --xLabel "Limit g_ae·g_aγ [GeV⁻¹]" \
  --yLabel "MC toy count" \
  --outpath "/tmp/" \
  --suffix "nmc_30k_pretty" \
  --nmc 30000
The resulting plot has one striking feature: there is a non-zero contribution to the region in g_ae² below the "limit w/o signal, only RT" case! As it turns out, this is really just due to the variation of the MCMC method in the limit calculation! Even the no-candidates case varies by quite a bit.
This can be verified by running multiple lkMCMC limit calculations of the "no candidates" case. The variations are non-negligible.
1.68.1. Estimating the variance of the median
[X]
Add the standard deviation to the expected limits table in generateExpectedLimitsTable!
That is the equivalent of our uncertainty on the expected limit.
-> Oh! It is not the equivalent of that. The issue with the
variance and standard deviation is that they are measures like the
mean, i.e. they take into account the absolute numbers of the
individual limits, which we don't care about.
Googling led me to:
https://en.wikipedia.org/wiki/Median_absolute_deviation
the 'Median Absolute Deviation' (MAD), which is a measure of variability based on the median. However, to use it as a consistent estimator of the standard deviation, we would need a scale factor \(k\)
\[
\hat{σ} = k · \text{MAD}
\]
which is distribution dependent. For well-defined distributions this factor can be looked up or computed (e.g. \(k ≈ 1.4826\) for a normal distribution), but our limits don't follow a simple distribution.
Talking with BingChat then reminded me I could use bootstrapping for
this!
See [BROKEN LINK: sec:expected_limits:bootstrapping] in statusAndProgress for our approach.
UPDATE: The numbers seem much smaller than the change from 1k to 30k implies.
The output after the change to the exp. limit table tool:
ε | Type | Scinti | FADC | εFADC | Septem | Line | eccLineCut | εSeptem | εLine | εSeptemLine | Total eff. | Limit no signal [GeV⁻¹] | Expected limit [GeV⁻¹] | Exp. limit variance [GeV⁻²] | Exp. limit σ [GeV⁻¹] |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
0.9107 | MLP | true | true | 0.98 | false | true | 1 | 0.7841 | 0.8602 | 0.7325 | 0.7677 | 5.9559e-23 | 7.4781e-23 | 1.6962e-49 | 4.1185e-25 |
0.8474 | MLP | true | true | 0.98 | false | true | 1 | 0.7841 | 0.8602 | 0.7325 | 0.7143 | 6.1381e-23 | 7.643e-23 | 2.4612e-49 | 4.9611e-25 |
0.9718 | MLP | true | true | 0.98 | false | true | 1 | 0.7841 | 0.8602 | 0.7325 | 0.8192 | 5.8374e-23 | 7.6619e-23 | 2.1702e-49 | 4.6586e-25 |
0.9 | LnL | true | true | 0.98 | false | true | 1 | 0.7841 | 0.8602 | 0.7325 | 0.7587 | 6.0434e-23 | 7.7375e-23 | 2.5765e-49 | 5.0759e-25 |
0.7926 | MLP | true | true | 0.98 | false | true | 1 | 0.7841 | 0.8602 | 0.7325 | 0.6681 | 6.2843e-23 | 7.8575e-23 | 1.8431e-49 | 4.2932e-25 |
0.7398 | MLP | true | true | 0.98 | false | true | 1 | 0.7841 | 0.8602 | 0.7325 | 0.6237 | 6.5704e-23 | 7.941e-23 | 1.5265e-49 | 3.907e-25 |
0.8 | LnL | true | true | 0.98 | false | true | 1 | 0.7841 | 0.8602 | 0.7325 | 0.6744 | 6.3147e-23 | 8.0226e-23 | 4.4364e-49 | 6.6606e-25 |
0.9718 | MLP | true | true | 0.98 | true | true | 1 | 0.7841 | 0.8602 | 0.7325 | 0.6976 | 6.2431e-23 | 8.0646e-23 | 2.0055e-49 | 4.4783e-25 |
0.9107 | MLP | true | true | 0.98 | true | true | 1 | 0.7841 | 0.8602 | 0.7325 | 0.6538 | 6.432e-23 | 8.0878e-23 | 2.1584e-49 | 4.6459e-25 |
0.9718 | MLP | true | true | 0.98 | true | false | 1 | 0.7841 | 0.8602 | 0.7325 | 0.7468 | 5.9835e-23 | 8.1654e-23 | 3.2514e-49 | 5.7021e-25 |
0.9107 | MLP | true | true | 0.98 | true | false | 1 | 0.7841 | 0.8602 | 0.7325 | 0.6998 | 6.2605e-23 | 8.2216e-23 | 1.7728e-49 | 4.2104e-25 |
0.8474 | MLP | true | true | 0.98 | true | true | 1 | 0.7841 | 0.8602 | 0.7325 | 0.6083 | 6.6739e-23 | 8.2488e-23 | 2.4405e-49 | 4.9401e-25 |
0.9 | LnL | true | true | 0.98 | true | true | 1 | 0.7841 | 0.8602 | 0.7325 | 0.6461 | 6.4725e-23 | 8.3284e-23 | 1.5889e-49 | 3.9861e-25 |
0.8474 | MLP | true | true | 0.98 | true | false | 1 | 0.7841 | 0.8602 | 0.7325 | 0.6511 | 6.4585e-23 | 8.338e-23 | 1.771e-49 | 4.2083e-25 |
0.7926 | MLP | true | true | 0.98 | true | true | 1 | 0.7841 | 0.8602 | 0.7325 | 0.569 | 6.8883e-23 | 8.3784e-23 | 1.7535e-49 | 4.1875e-25 |
0.7926 | MLP | true | true | 0.98 | true | false | 1 | 0.7841 | 0.8602 | 0.7325 | 0.609 | 6.6309e-23 | 8.4116e-23 | 2.132e-49 | 4.6174e-25 |
0.8 | LnL | true | true | 0.98 | true | true | 1 | 0.7841 | 0.8602 | 0.7325 | 0.5743 | 6.8431e-23 | 8.5315e-23 | 2.8029e-49 | 5.2942e-25 |
0.8 | LnL | true | true | 0.98 | true | true | 1 | 0.7841 | 0.8602 | 0.7325 | 0.5743 | 6.875e-23 | 8.5437e-23 | 2.4348e-49 | 4.9344e-25 |
0.7398 | MLP | true | true | 0.98 | true | true | 1 | 0.7841 | 0.8602 | 0.7325 | 0.5311 | 7.1279e-23 | 8.5511e-23 | 3.5032e-49 | 5.9188e-25 |
0.7398 | MLP | true | true | 0.98 | true | false | 1 | 0.7841 | 0.8602 | 0.7325 | 0.5685 | 6.9024e-23 | 8.6142e-23 | 2.9235e-49 | 5.4069e-25 |
0.7 | LnL | true | true | 0.98 | true | true | 1 | 0.7841 | 0.8602 | 0.7325 | 0.5025 | 7.2853e-23 | 8.9271e-23 | 2.8981e-49 | 5.3834e-25 |
Let's test locally using the CSV file of the 1k sample case:
import datamancer, stats, sequtils, seqmath, ggplotnim
import random

template withBootstrap(rnd: var Rand, samples: seq[float], num: int, body: untyped): untyped =
  let N = samples.len
  for i in 0 ..< num:
    # resample
    var newSamples {.inject.} = newSeq[float](N)
    for j in 0 ..< N:
      newSamples[j] = samples[rnd.rand(0 ..< N)] # get an index and take its value
    # compute our statistics
    body

proc expLimitVarStd(limits: seq[float], plotname: string): (float, float) =
  var rnd = initRand(12312)
  let limits = limits.mapIt(sqrt(it) * 1e-12) # rescale limits
  const num = 1000
  var medians = newSeqOfCap[float](num)
  withBootstrap(rnd, limits, num):
    medians.add median(newSamples, 50)
  if plotname.len > 0:
    ggplot(toDf(medians), aes("medians")) +
      geom_histogram() +
      ggsave(plotname)
  result = (variance(medians), standardDeviation(medians))

proc expLimit(limits: seq[float]): float =
  let limits = limits.mapIt(sqrt(it) * 1e-12) # rescale limits
  result = limits.median(50)

proc slice30k(limits: seq[float]) =
  let N = limits.len
  let M = N div 1000
  for i in 0 ..< M:
    let stop = min(limits.high, (i+1) * 1000)
    let start = i * 1000
    echo "start ", start, " to ", stop
    echo "Exp limit for 1k at i ", i, " = ", limits[start ..< stop].expLimit()

let df1k = readCsv("~/Sync/mc_limit_lkMCMC_skInterpBackground_nmc_1000_uncertainty_ukUncertain_σs_0.0328_σb_0.0028_posUncertain_puUncertain_σp_0.0500lhood_c18_R2_crAll_sEff_0.95_scinti_fadc_line_mlp_mlp_tanh300_msecheckpoint_epoch_485000_loss_0.0055_acc_0.9933_vQ_0.99.csv")
let df30k = readCsv("~/Sync/mc_limit_lkMCMC_skInterpBackground_nmc_30000_uncertainty_ukUncertain_σs_0.0328_σb_0.0028_posUncertain_puUncertain_σp_0.0500_sEff_0.95_scinti_fadc_line_mlp_tanh300_mse_epoch_485000_loss_0.0055_acc_0.9933_axion_image_1487.9_0.989AU.csv")
let limits30k = df30k["limits", float].toSeq1D
echo "30k samples = ", expLimit(limits30k), " and std = ", expLimitVarStd(limits30k, "/tmp/medians_30k.pdf")
let limits1k = df1k["limits", float].toSeq1D
echo "1k samples = ", expLimit(limits1k), " and std = ", expLimitVarStd(limits1k, "/tmp/medians_1k.pdf")
echo limits1k.median(50) + expLimitVarStd(limits1k, "")[1]
slice30k(limits30k)
let df = bind_rows([("1k", df1k), ("30k", df30k)], "Type")
  .filter(f{`limits` < 3e-20})
echo df["limits", float].percentile(50)
ggplot(df, aes("limits", fill = "Type")) +
  geom_histogram(bins = 100, density = true, hdKind = hdOutline, alpha = 0.5, position = "identity") +
  ggsave("/tmp/histo_limits_compare_1k_30k.pdf")
ggplot(df, aes("limits", fill = "Type")) +
  geom_density(alpha = 0.5, normalize = true) +
  ggsave("/tmp/kde_limits_compare_1k_30k.pdf")
30k samples = 7.582394407375754e-23 and std = (6.629501098125509e-51, 8.142174831164896e-26)
1k samples = 7.478101380514423e-23 and std = (1.613156185890611e-49, 4.016411564930331e-25)
5.592601671156511e-21
start 0 to 1000
Exp limit for 1k at i 0 = 7.605583686131433e-23
start 1000 to 2000
Exp limit for 1k at i 1 = 7.561063062729676e-23
start 2000 to 3000
Exp limit for 1k at i 2 = 7.564367446420724e-23
start 3000 to 4000
Exp limit for 1k at i 3 = 7.576524753068304e-23
start 4000 to 5000
Exp limit for 1k at i 4 = 7.571433298261475e-23
start 5000 to 6000
Exp limit for 1k at i 5 = 7.627270326648243e-23
start 6000 to 7000
Exp limit for 1k at i 6 = 7.564981326101799e-23
start 7000 to 8000
Exp limit for 1k at i 7 = 7.585844150790594e-23
start 8000 to 9000
Exp limit for 1k at i 8 = 7.587466858370215e-23
start 9000 to 10000
Exp limit for 1k at i 9 = 7.596984336859885e-23
start 10000 to 11000
Exp limit for 1k at i 10 = 7.62764409158667e-23
start 11000 to 12000
Exp limit for 1k at i 11 = 7.560550411740659e-23
start 12000 to 13000
Exp limit for 1k at i 12 = 7.525828942453692e-23
start 13000 to 14000
Exp limit for 1k at i 13 = 7.549498042218461e-23
start 14000 to 15000
Exp limit for 1k at i 14 = 7.54624503868307e-23
start 15000 to 16000
Exp limit for 1k at i 15 = 7.545424145628356e-23
start 16000 to 17000
Exp limit for 1k at i 16 = 7.652870644411018e-23
start 17000 to 18000
Exp limit for 1k at i 17 = 7.562933564352857e-23
start 18000 to 19000
Exp limit for 1k at i 18 = 7.6577232551744e-23
start 19000 to 20000
Exp limit for 1k at i 19 = 7.614370346235356e-23
start 20000 to 21000
Exp limit for 1k at i 20 = 7.585288632863529e-23
start 21000 to 22000
Exp limit for 1k at i 21 = 7.520098295891504e-23
start 22000 to 23000
Exp limit for 1k at i 22 = 7.627966443034063e-23
start 23000 to 24000
Exp limit for 1k at i 23 = 7.622924220295962e-23
start 24000 to 25000
Exp limit for 1k at i 24 = 7.54129310424308e-23
start 25000 to 26000
Exp limit for 1k at i 25 = 7.566466143048985e-23
start 26000 to 27000
Exp limit for 1k at i 26 = 7.615198270864553e-23
start 27000 to 28000
Exp limit for 1k at i 27 = 7.582995326700842e-23
start 28000 to 29000
Exp limit for 1k at i 28 = 7.56983868270767e-23
start 29000 to 29999
Exp limit for 1k at i 29 = 7.630222722830585e-23
(Note: the conversion from g_ae² to g_ae·g_aγ is not the cause. The roughly 1/100 ratio of std to median remains, as one would expect.)
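That the relative spread carries over can be made precise with first-order error propagation. For the rescaling used in the script above (limits stored in the g_ae² space, converted via the square root at g_aγ = 10⁻¹² GeV⁻¹):
\[
g = \sqrt{L} · 10^{-12} \quad ⇒ \quad \frac{σ_g}{g} = \frac{1}{2} \frac{σ_L}{L},
\]
so the std-to-median ratio in g_ae·g_aγ space is about half of that of the raw g_ae² limits, independent of the number of toys.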
Let's also look at the 30k sample case. But first run the generateExpectedLimitsTable for the 30k sample case:

./generateExpectedLimitsTable --path ~/org/resources/lhood_MLP_06_07_23/limits/ --prefix mc_limit_lkMCMC_skInterpBackground_nmc_30000_
ε | Type | Scinti | FADC | εFADC | Septem | Line | eccLineCut | εSeptem | εLine | εSeptemLine | Total eff. | Limit no signal [GeV⁻¹] | Expected limit [GeV⁻¹] | Exp. limit variance [GeV⁻²] | Exp. limit σ [GeV⁻¹] |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
0.9107 | MLP | true | true | 0.98 | false | true | 1 | 0.7841 | 0.8602 | 0.7325 | 0.7677 | 5.9559e-23 | 7.5824e-23 | 6.0632e-51 | 7.7866e-26 |
Just as a cross check: the no signal expected limit is indeed the same as in the 1k case.
Hmmm, it's a bit weird that the 30k run doesn't remotely reproduce the 1k limit even if we slice its limits into 1k pieces. I'm still not sure whether there is some difference going on.
[X]
Rerun the MCMC limit calc for the 1k case, first with the same and then a different RNG seed! (Put the current limit calcs into the background and run these in between!)
-> Created directory ./resources/lhood_MLP_06_07_23/limits_1k_rng_seed. Let's run:

mcmc_limit_calculation \
  limit \
  -f /home/basti/org/resources/lhood_limits_10_05_23_mlp_sEff_0.99/lhood_c18_R2_crAll_sEff_0.95_scinti_fadc_line_mlp_mlp_tanh300_msecheckpoint_epoch_485000_loss_0.0055_acc_0.9933_vQ_0.99.h5 \
  -f /home/basti/org/resources/lhood_limits_10_05_23_mlp_sEff_0.99/lhood_c18_R3_crAll_sEff_0.95_scinti_fadc_line_mlp_mlp_tanh300_msecheckpoint_epoch_485000_loss_0.0055_acc_0.9933_vQ_0.99.h5 \
  --years 2017 --years 2018 \
  --σ_p 0.05 \
  --limitKind lkMCMC \
  --nmc 1000 \
  --suffix=_sEff_0.95_scinti_fadc_line_mlp_tanh300_axion_image_1487.9_0.989AU_default_seed \
  --path "" \
  --outpath /home/basti/org/resources/lhood_MLP_06_07_23/limits_1k_rng_seed \
  --energyMin 0.2 --energyMax 12.0 \
  --axionModel /home/basti/org/resources/differential_flux_sun_earth_distance/solar_axion_flux_differential_g_ae_1e-13_g_ag_1e-12_g_aN_1e-15_0.989AU.csv
Currently the RNG seed is just set via:
var nJobs = if jobs > 0: jobs else: countProcessors() - 2
if nmc < nJobs: nJobs = nmc
var pp = initProcPool(limitsWorker, framesLenPfx, jobs = nJobs)
var work = newSeq[ProcData]()
for i in 0 ..< nJobs:
  work.add ProcData(id: i, nmc: max(1, nmc div nJobs))
Which is used in each worker as:
var p: ProcData
while i.uRd(p):
  echo "Starting work for ", p, " at r = ", r, " and w = ", w
  var rnd = wrap(initMersenneTwister(p.id.uint32))
so we simply use the IDs from 0 to nJobs. Which RNG stream a process uses is thus decided by which job it receives.
-> See the subsections below.
It really seems like the default RNG is just extremely "lucky" in this case.
Let's add these 2 new RNG cases to the script that bootstraps new medians and see what we get:
block P1000:
  let df = readCsv("~/Sync/mc_limit_lkMCMC_skInterpBackground_nmc_1000_uncertainty_ukUncertain_σs_0.0328_σb_0.0028_posUncertain_puUncertain_σp_0.0500_sEff_0.95_scinti_fadc_line_mlp_tanh300_axion_image_1487.9_0.989AU_default_plus_1000.csv")
  let limits = df["limits", float].toSeq1D
  echo "Plus 1000 = ", expLimit(limits), " and std = ", expLimitVarStd(limits, "/tmp/medians_p1000.pdf")
block P500:
  let df = readCsv("~/Sync/mc_limit_lkMCMC_skInterpBackground_nmc_1000_uncertainty_ukUncertain_σs_0.0328_σb_0.0028_posUncertain_puUncertain_σp_0.0500_sEff_0.95_scinti_fadc_line_mlp_tanh300_axion_image_1487.9_0.989AU_default_plus_500.csv")
  let limits = df["limits", float].toSeq1D
  echo "Plus 500 = ", expLimit(limits), " and std = ", expLimitVarStd(limits, "/tmp/medians_p500.pdf")
which yields:
Plus 1000 = 7.584089303320828e-23 and std = (1.679933019568643e-49, 4.098698597809606e-25)
Plus 500 = 7.545710186605163e-23 and std = (3.790787496584159e-49, 6.156937141618517e-25)
So both the values and the variation change. But at least for these two cases each value lies within the standard deviation of the other.
So I guess the final verdict is that the numbers are mostly sensible, even if unexpected.
NOTE: I am continuing with the 15k sample cases now. I recompiled the limit calculation using the default RNG again!
- Result of run with default RNG
Finished around
Expected limit: 5.592200029700092e-21
85728
Generating group /ctx/axionModel
Serializing Interpolator by evaluating 0.001 to 15.0 of name: axionSpl
Serializing Interpolator by evaluating 0.0 to 10.0 of name: efficiencySpl
Serializing Interpolator by evaluating 0.2 to 12.0 of name: backgroundSpl
Generating group /ctx/backgroundDf
Wrote outfile /home/basti/org/resources/lhood_MLP_06_07_23/limits_1k_rng_seed/mc_limit_lkMCMC_skInterpBackground_nmc_1000_uncertainty_ukUncertain_σs_0.0328_σb_0.0028_posUncertain_puUncertain_σp_0.0500_sEff_0.95_scinti_fadc_line_mlp_tanh300_axion_image_1487.9_0.989AU_default_seed.h5
- Result of run with default RNG + 1000
Now we modify the code to use i + 1000 for each ProcData id. Running now. The suffix used is default_plus_1000. Finished around
Expected limit: 5.751841473157846e-21
85728
Generating group /ctx/axionModel
Serializing Interpolator by evaluating 0.001 to 15.0 of name: axionSpl
Serializing Interpolator by evaluating 0.0 to 10.0 of name: efficiencySpl
Serializing Interpolator by evaluating 0.2 to 12.0 of name: backgroundSpl
Generating group /ctx/backgroundDf
Wrote outfile /home/basti/org/resources/lhood_MLP_06_07_23/limits_1k_rng_seed/mc_limit_lkMCMC_skInterpBackground_nmc_1000_uncertainty_ukUncertain_σs_0.0328_σb_0.0028_posUncertain_puUncertain_σp_0.0500_sEff_0.95_scinti_fadc_line_mlp_tanh300_axion_image_1487.9_0.989AU_default_seed.h5
Wow, much worse!
- Result of run with default RNG + 500
Set to i + 500 now. With suffix default_plus_500. Finished:
Expected limit: 5.693774381122e-21
85728
Generating group /ctx/axionModel
Serializing Interpolator by evaluating 0.001 to 15.0 of name: axionSpl
Serializing Interpolator by evaluating 0.0 to 10.0 of name: efficiencySpl
Serializing Interpolator by evaluating 0.2 to 12.0 of name: backgroundSpl
Generating group /ctx/backgroundDf
Wrote outfile /home/basti/org/resources/lhood_MLP_06_07_23/limits_1k_rng_seed/mc_limit_lkMCMC_skInterpBackground_nmc_1000_uncertainty_ukUncertain_σs_0.0328_σb_0.0028_posUncertain_puUncertain_σp_0.0500_sEff_0.95_scinti_fadc_line_mlp_tanh300_axion_image_1487.9_0.989AU_default_plus_500.h5
1.69.
The limits with 15k samples finished:
shell> Expected limit: 5.88262164726686e-21
shell> 65760
shell> Generating group /ctx/axionModel
shell> Serializing Interpolator by evaluating 0.001 to 15.0 of name: axionSpl
shell> Serializing Interpolator by evaluating 0.0 to 10.0 of name: efficiencySpl
shell> Serializing Interpolator by evaluating 0.2 to 12.0 of name: backgroundSpl
shell> Generating group /ctx/backgroundDf
shell> Wrote outfile /home/basti/org/resources/lhood_MLP_06_07_23/limits/mc_limit_lkMCMC_skInterpBackground_nmc_15000_uncertainty_ukUncertain_σs_0.0374_σb_0.0028_posUncertain_puUncertain_σp_0.0500lhood_c18_R2_crAll_sEff_0.9_scinti_fadc_line_mlp_mlp_tanh300_msecheckpoint_epoch_485000_loss_0.0055_acc_0.9933_vQ_0.99.h5
Computing single limit took 14433.18434882164 s
Computing all limits took 72104.13822126389 s
Let's generate the expected limit table from these:
./generateExpectedLimitsTable \
    --path ~/org/resources/lhood_MLP_06_07_23/limits/ \
    --prefix mc_limit_lkMCMC_skInterpBackground_nmc_15000_
| ε | Type | Scinti | FADC | εFADC | Septem | Line | eccLineCut | εSeptem | εLine | εSeptemLine | Total eff. | Limit no signal [GeV⁻¹] | Expected limit [GeV⁻¹] | Exp. limit variance [GeV⁻²] | Exp. limit σ [GeV⁻¹] |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 0.9718 | MLP | true | true | 0.98 | false | true | 1 | 0.7841 | 0.8602 | 0.7325 | 0.8192 | 5.8374e-23 | 7.6252e-23 | 1.6405e-50 | 1.2808e-25 |
| 0.8474 | MLP | true | true | 0.98 | false | true | 1 | 0.7841 | 0.8602 | 0.7325 | 0.7143 | 6.1381e-23 | 7.6698e-23 | 1.4081e-50 | 1.1866e-25 |
| 0.7926 | MLP | true | true | 0.98 | false | true | 1 | 0.7841 | 0.8602 | 0.7325 | 0.6681 | 6.2843e-23 | 7.8222e-23 | 1.3589e-50 | 1.1657e-25 |
| 0.7398 | MLP | true | true | 0.98 | false | true | 1 | 0.7841 | 0.8602 | 0.7325 | 0.6237 | 6.5704e-23 | 7.9913e-23 | 1.6073e-50 | 1.2678e-25 |
1.70.
[ ]
For the thesis we need to verify how the loss function is defined. We'll just write a mini script that uses our single event predict function for our MLP and then look at the loss call for that single event. Also available as a file in ./Misc/inspect_mse_loss_cast.nim. Needs to be compiled with:
nim cpp -r -d:cuda inspect_mse_loss_cast.nim
and the mlp_impl.hpp file needs to be present in the same directory!

import /home/basti/CastData/ExternCode/TimepixAnalysis/Tools/NN_playground / [nn_predict, io_helpers]
import flambeau / [flambeau_raw, flambeau_nn]
import nimhdf5, unchained, seqmath, stats
import random
from xrayAttenuation import FluorescenceLine
from ingridDatabase/databaseRead import initCalibInfo
import ingrid / [tos_helpers, ingrid_types, gas_physics, fake_event_generator]

proc getEvents(nFake: int, calibInfo: CalibInfo,
               gains = @[3000.0, 4000.0],
               diffusion = @[550.0, 650.0]): DataFrame =
  var fakeDesc = FakeDesc(kind: fkGainDiffusion,
                          gasMixture: initCASTGasMixture())
  var fakeEvs = newSeqOfCap[FakeEvent](nFake)
  var rnd = initRand(12312)
  var count = 0
  while count < nFake:
    if count mod 5000 == 0:
      echo "Generated ", count, " events."
    # 1. sample an energy
    let energy = rnd.rand(0.1 .. 10.0).keV
    let lines = @[FluorescenceLine(name: "Fake", energy: energy, intensity: 1.0)]
    # 2. sample a gas gain
    let G = rnd.gauss(mu = (gains[1] + gains[0]) / 2.0,
                      sigma = (gains[1] - gains[0]) / 4.0)
    let gain = GainInfo(N: 100_000.0, G: G, theta: rnd.rand(0.4 .. 2.4))
    # 3. sample a diffusion
    let σT = rnd.gauss(mu = 660.0, sigma = (diffusion[1] - diffusion[0] / 4.0))
    fakeDesc.σT = σT
    let fakeEv = rnd.generateAndReconstruct(fakeDesc, lines, gain, calibInfo, energy)
    if not fakeEv.valid:
      continue
    fakeEvs.add fakeEv
    inc count
  result = fakeToDf( fakeEvs )

const path = "/home/basti/CastData/data/DataRuns2018_Reco.h5"
const mlpPath = "/home/basti/org/resources/nn_devel_mixing/10_05_23_sgd_gauss_diffusion_tanh300_mse_loss/mlp_tanh300_msecheckpoint_epoch_485000_loss_0.0055_acc_0.9933.pt"

proc main =
  let h5f = H5open(path, "r")
  let calibInfo = h5f.initCalibInfo()
  var df = newDataFrame()
  for num, run in runs(h5f):
    echo num
    # read a random event
    df = getEvents(10, calibInfo)
    echo df
    break
  # initiate the MLP
  loadModelMakeDevice(mlpPath)
  df["Type"] = $dtSignal
  template checkIt(df: DataFrame): float =
    let (inp, target) = toInputTensor(df)
    let res = model.forward(desc, inp.to(device))
    echo "Outpt: ", res
    let loss = mse_loss(res, target.to(device))
    echo "MSE = ", loss
    loss.item(float)
  var losses = newSeq[float]()
  for row in rows(df):
    echo "===============\n"
    losses.add checkIt(row)
  discard checkIt(df)
  echo losses.mean

main()
=============
Outpt: RawTensor 1.0000e+00 4.5948e-23 [ CUDAFloatType{1,2} ]
MSE = RawTensor 1.4013e-45 [ CUDAFloatType{} ]
Outpt: RawTensor
 1.0000e+00 2.3029e-18
 1.0000e+00 5.6112e-12
 1.0000e+00 1.1757e-20
 1.0000e+00 9.4252e-20
 1.0000e+00 3.0507e-20
 1.0000e+00 3.1307e-12
 1.0000e+00 3.8549e-23
 9.9358e-01 6.6054e-03
 1.0000e+00 9.3695e-18
 1.0000e+00 4.5948e-23
[ CUDAFloatType{10,2} ]
MSE = RawTensor 4.24536e-06 [ CUDAFloatType{} ]
4.245364834787324e-06
The above is the (manually copied) output of the last row and of the full batch, i.e. the batch loss plus the manually computed batch loss (losses.mean). So as expected this means that the MSE loss computes:
\[ l(\mathbf{y}, \mathbf{\hat{y}}) = \frac{1}{N} \sum_{i = 1}^N \left( y_i - \hat{y}_i \right)^2 \]
where \(\mathbf{y}\) is a vector \(∈ \mathbb{R}^N\) of the network outputs and \(\mathbf{\hat{y}}\) the target outputs. The sum runs over all \(N\) output neurons.
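As a quick numeric cross-check of the relation observed above (the batch loss equals the mean of the per-event losses when all events have the same number of outputs), here in Python for illustration with made-up numbers:

```python
def mse(y, y_hat):
    # l(y, ŷ) = 1/N · Σ (y_i - ŷ_i)², summed over all output elements
    assert len(y) == len(y_hat)
    return sum((a - b) ** 2 for a, b in zip(y, y_hat)) / len(y)

# two "events" with 2 output neurons each (made-up targets / outputs)
rows = [([1.0, 0.0], [0.9, 0.1]),
        ([0.0, 1.0], [0.2, 0.8])]
per_row = [mse(t, o) for t, o in rows]      # per-event losses
flat_t = [v for t, _ in rows for v in t]    # flatten targets
flat_o = [v for _, o in rows for v in o]    # flatten outputs
batch = mse(flat_t, flat_o)                 # batch loss
```

Because every event has the same number of outputs, the batch MSE equals the mean of the per-event MSEs, matching losses.mean above.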
1.71. TODO [0/1]
IMPORTANT
1.71.1. LLNL telescope effective area [/]
I noticed today while talking with Cris about her limit calculation that
1. our limit code does not actually use the *_parallel_light.csv file for the LLNL telescope efficiency!
2. it uses the _extended.csv version
3. the _extended.csv version is outdated, because the real efficiency actually INCREASES AGAIN below 1 keV
4. the _extended and the _parallel_light versions describe different settings: the _extended version comes from the CAST paper about the LLNL telescope and describes the effective area for solar axion emission in a 3 arcmin radius from the solar core, i.e. NOT parallel light!! The _parallel_light version of course describes parallel light.
5. because of 4, the effective area of the parallel version is significantly higher than the _extended version!
⇒ What this means is we need to update our telescope efficiency for the limit! The question that remains is what is the "correct" telescope efficiency? It makes sense that the efficiency is lower for non parallel light of course. But our axion emission looks different than the assumption done for the CAST paper about the telescope!
Therefore, the best thing to do would be to use the raytracer to compute the effective area! This should actually be pretty simple! Just generate axions according to the real solar emission, but at fixed energies. I.e. for each energy in [0, 10] keV send a number N of axions through the telescope. At the end just compute the average efficiency of the arriving photons (incl. taking into account those that are completely lost!).
This should give us a correct description for the effective area. We need to make sure of course not to include any aspects like window, conversion probability, gas etc. Only telescope reflectivity!
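The procedure above can be sketched as a toy Monte Carlo (Python for illustration; trace is a hypothetical stand-in for the actual raytracer, and the function name effective_area is mine):

```python
import math, random

def effective_area(energies, trace, n_rays=10_000,
                   bore_area_cm2=math.pi * 2.15**2, seed=1):
    # For each energy, send n_rays photons through the telescope and
    # average their survival weight (telescope reflectivity only: no
    # window, conversion probability or gas). The effective area is
    # that average efficiency times the bore area.
    rnd = random.Random(seed)
    areas = {}
    for E in energies:
        surviving = sum(trace(E, rnd) for _ in range(n_rays))
        areas[E] = bore_area_cm2 * surviving / n_rays
    return areas

# sanity check: a perfectly reflective telescope yields the full bore area
areas = effective_area([1.0, 5.0], trace=lambda E, rnd: 1.0, n_rays=100)
```

With a realistic trace, rays that miss all shells or are absorbed simply contribute weight 0, which is exactly the "completely lost" photons mentioned above.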
To compute this we need:
- [ ] correct reflectivity files for the telescope
  -> Need to add the other 3 recipes to xrayAttenuation and compute it!
  -> See next section
- [ ] add the ability to scan the effective area to the raytracer.
  -> This can be done equivalent to the angularScan that we already have there. Just need an additional energy overwrite.
  -> The latter can be done by having some overwrite to the getRandomEnergyFromSolarModel function. Maybe as an argument to the traceAxion procedure or similar. Or as a field to ExperimentSetup that is an Option? Or alternatively merge it into the testXraySource branch, such that the X-ray source object has an overwrite for the position such that it can sample from the solar model.
1.71.2. Regenerate the LLNL reflectivities using DarpanX
[X]
We're currently rerunning the DarpanX based script to get the correct reflectivities for the telescope by using Ångström as inputs instead of nanometers! :DONE:
Just by running:
./llnl_layer_reflectivity
on the HEAD of the PR https://github.com/jovoy/AxionElectronLimit/pull/22.
1.71.3. Computing the LLNL telescope reflectivities with xrayAttenuation
[ ]
Implement the depth graded layer to be computed automatically according to the equation in the DarpanX paper (and in the old paper of the old IDL program?)
A depth-graded multilayer is described by the equation: \[ d_i = \frac{a}{(b + i)^c} \] where \(d_i\) is the depth of layer \(i\) (out of \(N\) layers), \[ a = d_{\text{min}} (b + N)^c \] and \[ b = \frac{1 - N k}{k - 1} \] with \[ k = \left(\frac{d_{\text{min}}}{d_{\text{max}}}\right)^{\frac{1}{c}} \] where \(d_{\text{min}}\) and \(d_{\text{max}}\) are the thickness of the bottom and top most layers, respectively.
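The equations above can be implemented directly; a small Python sketch (the layer parameters here are made up for illustration, not the actual LLNL recipe), with the sanity check that \(d_1 = d_{\text{max}}\) and \(d_N = d_{\text{min}}\), which follows since \((b + N)/(b + 1) = 1/k\):

```python
def depth_graded_thicknesses(d_min, d_max, N, c):
    # Layer thicknesses d_i = a / (b + i)^c of a depth-graded multilayer,
    # following the parametrization quoted above from the DarpanX paper.
    k = (d_min / d_max) ** (1.0 / c)
    b = (1.0 - N * k) / (k - 1.0)
    a = d_min * (b + N) ** c
    return [a / (b + i) ** c for i in range(1, N + 1)]

# illustrative parameters: thicknesses fall from d_max (top) to d_min (bottom)
ds = depth_graded_thicknesses(d_min=30.0, d_max=100.0, N=10, c=0.25)
```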
1.71.4. Computing the effective area
First attempt using the LLNL reflectivities from DarpanX after updating them to correct thicknesses & C/Pt instead Pt/C.
./raytracer \
    --distanceSunEarth 0.9891144450781392.AU \
    --effectiveAreaScanMin 0.03 \
    --effectiveAreaScanMax 12.0 \
    --numEffectiveAreaScanPoints 100 \
    --xrayTest \
    --suffix "_llnl"
with the config.toml file containing
[TestXraySource]
useConfig = false  # sets whether to read these values here. Can be overridden here or using flag `--testXray`
active = true      # whether the source is active (i.e. Sun or source?)
sourceKind = "sun" # whether a "classical" source or the "sun" (Sun only for position *not* for energy)
parallel = true
and of course the LLNL telescope as the telescope to use (plus CAST etc):
[Setup] # settings related to the setup we raytrace through
experimentSetup = "CAST"     # [BabyIAXO, CAST]
detectorSetup = "InGrid2018" # [InGrid2017, InGrid2018, InGridIAXO]
telescopeSetup = "LLNL"
stageSetup = "vacuum"        # [vacuum, gas]
The resulting plot is
which when compared even with the DTU thesis plot:
import ggplotnim, math, strformat, sequtils
let dfParallel = readCsv("/home/basti/org/resources/llnl_xray_telescope_cast_effective_area_parallel_light_DTU_thesis.csv")
let dfCast = readCsv("/home/basti/org/resources/llnl_xray_telescope_cast_effective_area.csv")
let dfJaimeNature = readCsv("/home/basti/org/resources/llnl_cast_nature_jaime_data/2016_DEC_Final_CAST_XRT/EffectiveArea.txt", sep = ' ')
  .rename(f{"Energy[keV]" <- "E(keV)"}, f{"EffectiveArea[cm²]" <- "Area(cm^2)"})
  .select("Energy[keV]", "EffectiveArea[cm²]")
echo dfJaimeNature
const areaBore = 2.15 * 2.15 * PI
proc readDf(path: string): DataFrame =
  result = readCsv(path)
  if "Energy[keV]" notin result:
    result = result.rename(f{"Energy[keV]" <- "Energy [keV]"}, f{"Transmission" <- "relative flux"})
  result = result.mutate(f{"EffectiveArea[cm²]" ~ `Transmission` * areaBore})
proc makePlot(paths, names: seq[string], suffix: string) =
  var dfs: seq[(string, DataFrame)]
  for (p, n) in zip(paths, names):
    let dfM = readDf(p)
    dfs.add (n, dfM)
  let df = bind_rows(concat(@[("Thesis", dfParallel), ("CASTPaper", dfCast), ("Nature", dfJaimeNature)], dfs), "Type")
  ggplot(df, aes("Energy[keV]", "EffectiveArea[cm²]", color = "Type")) +
    geom_line() +
    ggtitle("Effective area LLNL comparing parallel light (thesis) and axion emission (paper)") +
    scale_y_continuous(secAxis = sec_axis(trans = f{1.0 / areaBore}, name = "Transmission")) +
    margin(top = 1.5, right = 6) +
    legendPosition(0.8, 0.0) +
    ggsave(&"~/org/Figs/statusAndProgress/effectiveAreas/llnl_effective_area_comparison_parallel_axion{suffix}.pdf")
proc makePlot(path, name, suffix: string) = makePlot(@[path], @[name], suffix)
makePlot("/home/basti/org/resources/effectiveAreas/llnl_effective_area_manual_attempt1.csv", "Attempt1", "_attempt1")
makePlot("/home/basti/org/resources/effectiveAreas/effective_area_scan_telescope_llnl_classical_3arcmin.csv", "3Arcmin", "_3arcmin")
makePlot("/home/basti/org/resources/effectiveAreas/effective_area_scan_telescope_llnl_sun_reflect_squared.csv", "Rsquared", "_reflect_squared")
makePlot("/home/basti/org/resources/effectiveAreas/effective_area_scan_telescope_llnl_classical_parallel_fullbore.csv", "Parallel", "_parallel")
makePlot("/home/basti/org/resources/effectiveAreas/effective_area_scan_telescope_llnl_classical_parallel_fullbore_sigma_0.45.csv", "Parallel_σ0.45", "_parallel_sigma_0.45")
makePlot("/home/basti/org/resources/effectiveAreas/effective_area_scan_telescope_llnl_sigma_0.45.csv", "Sun_σ0.45", "_sun_sigma_0.45")
makePlot("/home/basti/org/resources/effectiveAreas/effective_area_scan_telescope_llnl_sun_correct_shells_xrayAttenuation_fixed.csv", "xrayAtten", "_sun_xray_attenuation")
makePlot(@["/home/basti/org/resources/effectiveAreas/effective_area_scan_telescope_llnl_parallel_correct_shells.csv",
           "/home/basti/org/resources/effectiveAreas/effective_area_scan_telescope_llnl_sun_correct_shells.csv"],
         @["Parallel", "Sun"], "_sun_and_parallel_correct_shells_sigma_0.45")
makePlot(@["/home/basti/org/resources/effectiveAreas/effective_area_scan_telescope_llnl_sun_correct_shells_xrayAttenuation_fixed.csv",
           "/home/basti/org/resources/effectiveAreas/effective_area_scan_telescope_llnl_3arcmin_xrayAttenuation.csv",
           "/home/basti/org/resources/effectiveAreas/effective_area_scan_telescope_llnl_parallel_xrayAttenuation.csv"],
         @["XASun", "XA3arcmin", "XAParallel"], "_sun_and_3arcmin_and_parallel_xrayAttenuation")
Hmm!
Where do we go wrong?
Let's try with a 3 arcmin source as mentioned in the caption of the CAST paper about the LLNL telescope. We need to get a 3 arc min source described by
distance = 2000.0 # mm Distance of the X-ray source from the readout
radius = 21.5     # mm Radius of the X-ray source
in the config file:
import unchained, math
const size = 3.arcmin / 2.0 # (radius not diameter!)
const dist = 9.26.m
echo "Required radius = ", (tan(size.to(Radian)) * dist).to(mm)
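The same small-angle computation can be cross-checked in plain Python (same numbers: half of 3 arcmin at 9.26 m):

```python
import math

size_rad = math.radians((3.0 / 60.0) / 2.0)  # half of 3 arcmin (radius, not diameter)
dist_mm = 9.26e3                             # 9.26 m in mm
radius_mm = math.tan(size_rad) * dist_mm     # required source radius in mm, ≈ 4.04 mm
```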
The 3 arc minute case does indeed lower the reflectivity compared to our solar axion emission case. However, it is still larger than what we would expect (about the height of the DTU PhD thesis parallel case). See the figure. Again pretty bizarre that this version is relatively close to the PhD thesis one, when that one uses parallel light.
The DTU PhD thesis mentions that the effective area uses the reflectivity squared:
  optic multiplied by reflectivity squared for each layer
so I tried to change the reflectivity in the raytracer to be squared (which seems ridiculous, because what I assumed is meant is that he's referring to the reflectivity of the Fresnel equations, which needs to be squared to get the physical reflectivity).
This yields
The really curious thing about this is though that the behavior is now almost perfect within that dip at about 2 keV compared to the CAST LLNL paper line!
But still, I think this is the wrong approach. I tried it as well using fully parallel light with the squared reflectivity and it is comparable, as expected. So in particular at high energies the suppression due to squaring is just too strong.
Ahh, in sec. 1.1.1 of the thesis he states:
However, the process becomes a little more complicated considering that the reflectivity is dependent on incident angle on the reflecting surface. In an X-ray telescope consisting of concentric mirror shells, each mirror shell will reflect incoming photons at a different angle that each result in a certain reflectivity spectrum. Also to consider is the fact that Wolter I telescopes requires a double reflection, so the reflectivity spectrum should be squared.
so what he means is really the reflectivity, but not in terms of what I assumed above, but rather due to the fact that the telescope consists of 2 separate sets of mirrors!
This is of course handled in our raytracing, due to the simulated double reflection from each layer.
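In code form, the double reflection simply means each shell's throughput is the single-bounce reflectivity squared (to first order, assuming both bounces see the same angle and energy; a trivial Python illustration, not the raytracer's actual implementation):

```python
def shell_throughput(r_single: float, n_bounces: int = 2) -> float:
    # Wolter I: one bounce off the parabolic and one off the hyperbolic
    # mirror section, so the single-bounce reflectivity enters squared.
    return r_single ** n_bounces

# e.g. 80% reflectivity per bounce leaves 64% after the two reflections
throughput = shell_throughput(0.8)
```

Squaring the already-doubly-applied reflectivity, as tried above, effectively simulates four bounces, which explains the excessive suppression at high energies.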
Maybe the reason is surface roughness after all?
The SigmaValue parameter in DarpanX gives the surface roughness in Ångström. We currently use 1 Å as the value, which is 0.1 nm. The PhD thesis states a surface roughness of 0.45 nm (page 89):
Both SPO substrates and NuSTAR glass substrates have a surface roughness of σrms ≈ 0.45 nm
Let's recompute the reflectivities using 4.5 Å!
See the generated plots:
So the results effectively don't seem to change.
But first let's rescale the parallel light case with σ = 0.45 nm to the PhD thesis data and then see if they at least follow the same curves:
proc makeRescaledPlot(path, name, suffix: string) = let dfPMax = dfParallel.filter(f{idx("Energy[keV]") > 1.0 and idx("Energy[keV]") < 2.