GROUPS: Astronomy | Data Science | External Programs and Systems | Wolfram Language | Wolfram Summer School | Neural Networks

[WIS22] Neural network to study cosmic air showers
Posted 7 months ago | 899 Views | 0 Replies | 3 Total Likes

Neural network to study cosmic air showers
by Chandramauli Agrawal
Indian Institute of Science Education and Research Bhopal
Cosmic air showers are atmospheric phenomena that occur when a cosmic ray hits the upper layers of the atmosphere. This initial collision sends a cascade of events downstream into the atmosphere, which can be detected.
The project is inspired by a paper (link in the references) that discusses how air-shower characteristics can be recovered using a trained network. The project takes a preexisting dataset, explores it, and trains a neural network. The shower property in focus is the maximum penetration depth of the cosmic ray.
What are Cosmic Air Showers?
An air shower is a cascade of ionized particles and electromagnetic radiation produced when a cosmic ray (originating in space) enters the atmosphere.
Image Source: See references
The cosmic-ray particle, which can be a proton, an electron, a nucleus, a photon or sometimes even a positron, strikes an atomic nucleus in the air, producing many energetic hadrons, which ultimately decay into other particles and EM radiation.
Importing the Dataset in Mathematica
The data for the project is obtained from the following link: https://desycloud.desy.de/index.php/s/YHa79Gx94CbPx8Q/download.
The downloaded file is in NPZ format, i.e. it is a Zip file containing multiple “.npy” files.
The downloaded file, named 4_airshower_100k_regression.npz, was first extracted and found to contain 8 “.npy” files, namely:
1. X_train_0.npy
2. X_train_1.npy
3. X_test_0.npy
4. X_test_1.npy
5. X_train_2.npy
6. X_test_2.npy
7. y_train.npy
8. y_test.npy
Note: The X_*_0 and X_*_1 files are the inputs to our neural network, the X_*_2 files hold the detector coordinates, and the y_* files are the outputs.
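Since the .npz archive is just a ZIP container, its member files can be listed from within Mathematica before extraction. A minimal sketch, assuming the archive sits in the Dataset folder used later in this post:

In[]:= npz = "C:\\Users\\chand\\Desktop\\WolframWinterSchool\\Dataset\\4_airshower_100k_regression.npz";
       Import[npz, {"ZIP", "FileNames"}]  (* lists the eight .npy members of the archive *)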
Mathematica does not directly import “.npy” files, so we came up with two ways to do it. Method-2 is superior because of the memory over-commitment issues that arise with Method-1.
Method-1: Using ExternalEvaluate
As the name suggests, the ExternalEvaluate function allows an expression to be evaluated in an external (non-Mathematica) environment, in this case Python.
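A minimal illustration of the idea (not part of the original workflow): a one-line Python expression is evaluated externally and the result comes back as a Wolfram Language expression.

In[]:= ExternalEvaluate["Python", "sum(range(10))"]
Out[]= 45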
Initiating:
In our earlier work we faced issues with Python version 3.9.1. To overcome them, we used an older version (Python 3.6.13), starting by registering the path to the required Python environment.
In[]:= RegisterExternalEvaluator["Python", "C:\Users\chand\anaconda3\envs\python\python.exe"]
Out[]= 94fe59e4-e21b-2853-85ae-508c49ac2d4c
In[]:= FindExternalEvaluators["Python"]
Out[]= (Dataset of registered evaluators)
       UUID                                   System   Version   Target                                                              Registered
       60339dff-7dfe-ed70-ac9b-261eae8ffc3a   Python   3.9.1     C:\Users\chand\AppData\Local\Programs\Python\Python39\python.exe   Automatic
       94fe59e4-e21b-2853-85ae-508c49ac2d4c   Python   3.6.13    C:\Users\chand\anaconda3\envs\python\python.exe                     True
In[]:= py = StartExternalSession[<|"System" -> "Python", "Version" -> "3.6.13"|>]
Out[]= ExternalSessionObject[System: Python, Kernel: 3.6.13, UUID: 90504bcc-6a6b-4fe4-8841-e3089bd3ce77]
Input (Training and Testing):
We then started bringing the data into Mathematica. Considering the quantity of data, we only deal with a limited number of data points. We took:
In[]:= sizeTrain = 700;
       sizeTest = 300;
In[]:= temp = ExternalEvaluate[py, "import numpy as np
       np.load(r'C:\\Users\\chand\\Desktop\\WolframWinterSchool\\Dataset\\X_train_0.npy')"];
       xtrain0 = temp[[1 ;; sizeTrain, All, All, All]];
       ClearAll[temp];
       temp = ExternalEvaluate[py, "import numpy as np
       np.load(r'C:\\Users\\chand\\Desktop\\WolframWinterSchool\\Dataset\\X_train_1.npy')"];
       xtrain1 = temp[[1 ;; sizeTrain, All, All, 1]];
       ClearAll[temp];
       temp = ExternalEvaluate[py, "np.load(r'C:\\Users\\chand\\Desktop\\WolframWinterSchool\\Dataset\\X_test_0.npy')"];
       xtest0 = temp[[1 ;; sizeTest, All, All, All]];
       ClearAll[temp];
       temp = ExternalEvaluate[py, "np.load(r'C:\\Users\\chand\\Desktop\\WolframWinterSchool\\Dataset\\X_test_1.npy')"];
       xtest1 = temp[[1 ;; sizeTest, All, All, 1]];
       ClearAll[temp];
Detector Coordinates:
In[]:= xtrain2 = ExternalEvaluate["Python", "import numpy as np
       np.load(r'C:\\Users\\chand\\Desktop\\WolframWinterSchool\\Dataset\\X_train_2.npy')"]
Out[]= NumericArray[Type: Real64, Dimensions: {81, 3}]
In[]:= xtest2 = ExternalEvaluate[py, "import numpy as np
       np.load(r'C:\\Users\\chand\\Desktop\\WolframWinterSchool\\Dataset\\X_test_2.npy')"]
Out[]= NumericArray[Type: Real64, Dimensions: {81, 3}]
In the given data, the detector coordinates for training and testing turn out to be the same, so we essentially only need one of them.
In[]:= xtrain2 == xtest2
Out[]= True
Output (Training and Testing):
temp = ExternalEvaluate[py, "import numpy as np
np.load(r'C:\\Users\\chand\\Desktop\\WolframWinterSchool\\Dataset\\y_train.npy')"];
ytrain = temp[[1 ;; sizeTrain]];
ClearAll[temp];
temp = ExternalEvaluate[py, "import numpy as np
np.load(r'C:\\Users\\chand\\Desktop\\WolframWinterSchool\\Dataset\\y_test.npy')"];
ytest = temp[[1 ;; sizeTest]];
ClearAll[temp];
Method-2: Through CSV format
Our data is first converted into “.csv” format and then imported into Mathematica. As before, we only deal with a small set of data points.
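The conversion itself is not shown in the post; a minimal sketch of one way it could have been done, for example by reusing the Python session py from Method-1 (the target file name and the [:70] slice are assumptions chosen to match the small CSV files and sizeTrain below):

In[]:= ExternalEvaluate[py, "import numpy as np
       arr = np.load(r'C:\\Users\\chand\\Desktop\\WolframWinterSchool\\Dataset\\X_train_1.npy')
       np.savetxt(r'C:\\Users\\chand\\Desktop\\WolframWinterSchool\\Dataset\\Shower_data(small)_csv\\xTrain1Small.csv', arr[:70].ravel(), delimiter=',')"]

Because numpy.savetxt writes the flattened array, the CSV has to be reshaped after import, as noted next.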
Note:
The imported data is a 1D list, and thus ArrayReshape is necessary to convert it into the required arrays.
In[]:= sizeTrain = 70;
       sizeTest = 30;
Input (Training and Testing):
In[]:= xTrain0 = Import["C:\\Users\\chand\\Desktop\\WolframWinterSchool\\Dataset\\Shower_data(small)_csv\\xTrain0Small.csv", "Data"];
       xTrain0 = ArrayReshape[xTrain0, {sizeTrain, 9, 9, 80, 1}];
       xTrain1 = Import["C:\\Users\\chand\\Desktop\\WolframWinterSchool\\Dataset\\Shower_data(small)_csv\\xTrain1Small.csv", "Data"];
       xTrain1 = ArrayReshape[xTrain1, {sizeTrain, 9, 9, 1}];
       xTest0 = Import["C:\\Users\\chand\\Desktop\\WolframWinterSchool\\Dataset\\Shower_data(small)_csv\\xTest0Small.csv", "Data"];
       xTest0 = ArrayReshape[xTest0, {sizeTest, 9, 9, 80, 1}];
       xTest1 = Import["C:\\Users\\chand\\Desktop\\WolframWinterSchool\\Dataset\\Shower_data(small)_csv\\xTest1Small.csv", "Data"];
       xTest1 = ArrayReshape[xTest1, {sizeTest, 9, 9, 1}];
Detector Coordinates
In[]:= xTrain2 = Import["C:\\Users\\chand\\Desktop\\WolframWinterSchool\\Dataset\\Shower_data(small)_csv\\xTrain2.csv", "Data"];
       xTrain2 = ArrayReshape[xTrain2, {81, 3}];
       xTest2 = Import["C:\\Users\\chand\\Desktop\\WolframWinterSchool\\Dataset\\Shower_data(small)_csv\\xTest2.csv", "Data"];
       xTest2 = ArrayReshape[xTest2, {81, 3}];
Output (Training and Testing):
In[]:= yTrain = Import["C:\\Users\\chand\\Desktop\\WolframWinterSchool\\Dataset\\Shower_data(small)_csv\\yTrainSmall.csv", "Data"];
       yTest = Import["C:\\Users\\chand\\Desktop\\WolframWinterSchool\\Dataset\\Shower_data(small)_csv\\yTestSmall.csv", "Data"];
Understanding the Data:
Since the data was given to us and not produced by us, it was crucial to understand its entries.
We know the following information:
◼ xTrain2 and xTest2 are the coordinates of the detectors.
In[]:= Dimensions[xTest2]
Out[]= {81, 3}
These are the entries of the array.
In[]:= Dataset[xTrain2 // Normal]
Out[]= (Dataset showing rows 1–20 of 81: each row is an {x, y, z} detector coordinate in meters,
        e.g. {-6000., -6000., 1400.}, {-6000., -4500., 1400.}, ..., {-3000., -4500., 1400.},
        with z fixed at 1400.)
And here is how they look when plotted on the plane at the z-coordinate of 1400 m:
In[]:= ListPlot[xTest2[[All, 1 ;; 2]], PlotRange -> All]
Out[]= (scatter plot of the 81 detector positions in the x–y plane)
Note that the length scale used here is meters.
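The 81 points form a regular grid, and the grid spacing can be read off directly from the coordinate array (a small check, not in the original post):

In[]:= Union[Differences[Union[xTest2[[All, 1]]]]]  (* gaps between distinct x-values; 1500 m for this dataset *)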
We find that both the arrays are essentially equal.
In[]:= xTrain2 == xTest2
Out[]= True
Let’s also check the dimensions to make sure we are on the right track!
◼ xTrain0 and xTest0 contain the signals distributed over 80 time bins, each 25 ns wide, for each detector and event.
In[]:= Dimensions[xTrain0]
Out[]= {70, 9, 9, 80, 1}
In[]:= Dimensions[xTest0]
Out[]= {30, 9, 9, 80, 1}
In[]:= eventNumber = 10;
In[]:= Manipulate[
        Dataset[Normal[xTrain0[[eventNumber, All, All, timeBin, 1]]]],
        {timeBin, 1, 80, 1}, SaveDefinitions -> True]
Out[]= (Manipulate with a timeBin slider showing the 9 × 9 grid of signal values at the selected
        time bin; most entries are -1.0, with a handful of positive signal values)
Signals at a detector adjacent to the central detector.
In[]:= ListLinePlot[Flatten[Table[xTrain0[[eventNumber, 4, 5, n]], {n, 1, 80}]],
        AxesLabel -> {"time", "signal"}, PlotRange -> All,
        PlotLabel -> "Signals on Detector: (3,5)"]
Out[]= (line plot of the signal trace on this detector versus time bin)
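Since each bin is 25 ns wide, the same trace can also be plotted against physical time rather than bin index (a small sketch, assuming bin 1 starts at t = 0):

In[]:= trace = Flatten[Table[xTrain0[[eventNumber, 4, 5, n]], {n, 1, 80}]];
       ListLinePlot[Transpose[{25. Range[0, 79], trace}],
        AxesLabel -> {"time (ns)", "signal"}, PlotRange -> All]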
Comparing the above values with the readings obtained on the central detector.
In[]:= ListLinePlot[{
         Flatten[Table[xTrain0[[eventNumber, 5, 5, n]], {n, 1, 80}]],
         Flatten[Table[xTrain0[[eventNumber, 3, 5, n]], {n, 1, 80}]]},
        AxesLabel -> {"time", "signal"}, PlotRange -> All,
        PlotLegends -> {"Detector: (5,5)", "Detector: (3,5)"}]
Out[]= (line plot comparing the two signal traces, with legend entries "Detector: (5,5)" and "Detector: (3,5)")
The signal strength in the other detector is dwarfed by that in the central detector (where our cosmic shower was centered).
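One way to make this comparison quantitative (a small sketch, not in the original post, using the eventNumber and xTrain0 defined above) is to total the non-negative part of each trace:

In[]:= centralTotal = Total[Ramp[Flatten[xTrain0[[eventNumber, 5, 5, All, 1]]]]];
       neighbourTotal = Total[Ramp[Flatten[xTrain0[[eventNumber, 3, 5, All, 1]]]]];
       {centralTotal, neighbourTotal}

The Ramp clips the -1.0 "no signal" entries to zero before summing, so the two numbers reflect only genuine signal.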
◼ xTrain1 and xTest1 contain the (modified) first arrival time of the signal at each detector.
In[]:= Dimensions[xTrain1]
Out[]= {70, 9, 9, 1}
In[]:= Dimensions[xTest1]
Out[]= {30, 9, 9, 1}
In[]:= Total[Table[Ramp[xTrain0[[eventNumber, All, All, i]]], {i, 1, 80}], 1];
In[]:= Manipulate[
        Dataset[xTest1[[event, All, All, 1]]],
        {event, 1, 30, 1}, SaveDefinitions -> True]
Out[]= (Manipulate with an event slider showing the 9 × 9 grid of arrival-time values for the
        selected event; most entries are -1.0, with small positive values at detectors that were hit)