GROUPS: Staff Picks | Wolfram Science | Physics | Graphics and Visualization | Graphs and Networks | Wolfram Language | Wolfram Summer School | Wolfram Function Repository | Wolfram Fundamental Physics Project
[WSS22] Compiling different models of computation into hypergraph rewriting
Utkarsh Bajaj, DPS International
Posted 20 days ago | 278 Views | 1 Reply | 4 Total Likes
Compiling different models of computation into hypergraph rewriting systems
by Utkarsh Bajaj, University of Waterloo
By computational irreducibility, it is well known that hypergraph rewriting systems are Turing complete, i.e. they can compute any Turing-computable function. The goal of this project is to interpret a model of computation (such as a Turing machine with imperative algorithms or functional lambda calculus) as a hypergraph rewriting system. For example, just as we can interpret a DFA (deterministic finite automaton) using a Turing machine with a set of states, a tape alphabet, and transition functions, we can interpret a DFA using a hypergraph rewriting system with an initial hypergraph and a replacement rule. The first goal of this project is to make a function that takes in a finite state machine and outputs the "equivalent" hypergraph rewrite rule with an initial hypergraph (the input). The second goal is to extend the same to imperative machines like Turing machines and register machines. The third goal is to get the intuition for how we might formally define the notion of computability on hypergraph rewriting systems, by creating a compiler from a high-level Turing-complete functional language (called "Faux Racket"), which is defined below.
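As a minimal sketch of the kind of object involved (a toy rule of my own, not one of the conversions developed below), a hypergraph rewriting system in the Wolfram Language is just a replacement rule applied to an initial list of hyperedges, for example using the WolframModel resource function:
In[]:= ResourceFunction["WolframModel"][{{x, y}, {y, z}} -> {{x, z}}, {{1, 2}, {2, 3}, {3, 4}, {4, 1}}, 2, "StatesList"]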
Section 0: Definitions, speculations, and some preliminary results
To get started, let’s formally define hypergraph rewriting systems as a computational system, as is used in the Wolfram physics project literature [1].
◼ Definition (Wolfram Model Hypergraph). A Wolfram model hypergraph (or simply hypergraph), denoted H = (V, E), is a finite collection of ordered hyperedges: E ⊂ V* \ {∅}.
◼ Definition (Update rule). An update rule, denoted R, for a spatial hypergraph H = (V, E) is an abstract rewriting rule of the form H1 -> H2, in which a subhypergraph matching pattern H1 is replaced by a distinct subhypergraph matching pattern H2.
◼ Note that the word distinct is important here. That's the reason why the multiway evolution graph is acyclic for a Wolfram model system.
◼ Definition (HRS). A hypergraph rewriting system HRS is a pair (H, ->) where H is a set of hypergraphs and -> is a non-symmetric binary relation defined using the update rule.
◼ Note: the set H will not be finite in most cases.
◼ Definition (Multiway Evolution Graph). A multiway evolution graph of an HRS, denoted G_multiway, is a directed acyclic graph (representing the evolution of a generally non-confluent abstract rewriting system (A, ->)) in which each vertex corresponds to a hypergraph and in which the directed edge a -> b exists iff there is a rewrite rule application transforming a into b, where a, b ∈ A.
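For intuition, here is a small multiway evolution graph for a string substitution system (an illustrative addition of my own, not a hypergraph system, assuming the MultiwaySystem resource function from the Wolfram Function Repository is available):
In[]:= ResourceFunction["MultiwaySystem"][{"A" -> "AB", "B" -> "A"}, {"A"}, 4, "StatesGraph"]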
◼ Definition (Path). There is a path from a vertex s to vertex e if s *-> e. There is a maximal path between s and e if s *-> e and (∀h ∈ H) ¬(e -> h). We denote this by s ⇝ e.
◼ It is obvious that if there is a maximal path in the multiway system for a particular vertex, then there is a maximal path in that multiway system with G as the starting vertex (we can merely trace back).
◼ There is a subtlety here which is obvious mathematically but not obvious to our intuition: an HRS is (H, ->), where H is a set of hypergraphs. We define the binary relation between hypergraphs by checking whether a single application of an update rule can transform one hypergraph into another. However, we are only considering hypergraphs in the set H. So if there is a rule that transforms one hypergraph into another hypergraph which is not in the set H, they cannot be related, because the binary relation is only defined on the set H. So we should think of this as just a set with a binary relation on it, not really update rules: update rules generate the binary relation.
◼ Definition (Halting set). Let (H, ->) be an HRS. Then G ∈ H is in the halting set of the HRS (H, ->), denoted Ω(H), iff there exists a maximal path in the multiway evolution graph for G, i.e. G ⇝ G' for some G' ∈ H. If G ∈ H is not in the halting set, then it means that every path in the multiway system is non-maximal, i.e. it doesn't stop. We say that G ↑ in this case.
◼ Note that maximal paths are defined in the multiway system. If a hypergraph is not in a halting set, then it means that every path through the multiway system doesn't stop running. So yes, there is no maximal path in the multiway system, because if there were, the initial hypergraph would be in the halting set.
To understand how hypergraph rewriting works in the Wolfram Language, refer to [2]. Now, we might come up with an intuitive definition for what it means for an HRS to simulate a Turing machine. First, let's define what is meant by "the tape at some time t":
◼ Definition (Tape at time t ∈ ℕ). Let T = (Q, Γ, δ, q_halt, q_0) be a Turing machine where Q is a finite set of internal states, Γ is the finite tape alphabet, q_halt is the halting state, and q_0 is the initial state. Let w ∈ Γ* be the initial tape (where to the left and right we have 0s). The tape at time t, denoted T(t), is defined inductively: T(1) = w, and T(t+1) = δ(T(t)) for t >= 1, where δ(w) for any word w is the new word produced on the tape after one application of the transition function.
Now, the definition of an HRS simulating a Turing machine.
◼ Definition (Hypergraph rewriting system simulates a Turing machine). Let T = (Q, Γ, δ, q_halt, q_0) be a Turing machine where Q is a finite set of internal states, Γ is the finite tape alphabet, q_halt is the halting state, and q_0 is the initial state. Let (H, ->) be a hypergraph rewriting system. Then (H, ->) simulates T if ∃f, f : Γ* -> H such that
1. ∀w ∈ Ω(T), f(w) ⇝ G' and G' ∼ f(w'), where w' = T(w)↓.
2. ∀w ∈ Γ* \ Ω(T), (∀N ∈ ℕ)(∃n >= N) f(w) ->^n G_n, where G_n ∼ f(T(n)).
It should be the case that if the HRS (H, ->) can simulate any Turing machine T, then it should be able to compute any partial recursive function under a suitable definition of computability. Defining a precise notion of computability for hypergraphs that is able to leverage their flexibility and is most "suited" to hypergraphs is one of the future goals of the project. However, a tentative definition has been produced below (we represent n ∈ ℕ as the (n+1)-cycle graph).
◼ Definition (One-parameter family of maps χ_t).
We first define a few graphs like these.
In[]:= cyclicgraph[n_] := If[n == 0, {{0, 0}}, Join[Table[{i, i + 1}, {i, 0, n - 1}], {{n, 0}}]]
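For example, the natural number 3 is represented by the 4-cycle:
In[]:= cyclicgraph[3]
Out[]= {{0, 1}, {1, 2}, {2, 3}, {3, 0}}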
In[]:= cyclicgraph[n_, start_] := If[n == 0, {{start, start}}, Join[Table[{i + start, i + 1 + start}, {i, 0, n - 1}], {{n + start, start}}]]
In[]:= convertToStringRule[rule_] := rule /. (Table[i -> "x" <> ToString[i], {i, Min[#1], Max[#1]}] &)[Join[rule[[1]], rule[[2]]]]
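For example, convertToStringRule relabels the integer vertices of a rule as the strings "x1", "x2", ...:
In[]:= convertToStringRule[{{1, 2}} -> {{1, 1}}]
Out[]= {{"x1", "x2"}} -> {{"x1", "x1"}}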
In[]:= χ1[n_] := cyclicgraph[n, 5] (* CYCLIC GRAPHS *)
In[]:= χ2[n_] := Join[cyclicgraph[n, 4], stateGraph[0], {{3, 4}}] (* CYCLIC GRAPHS WITH "STATE" ATTACHED *)
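For example (evaluating this requires stateGraph, which is defined in Section 1 below), χ2[3] is the 4-cycle representing 3 on vertices 4-7, with the three-vertex state graph attached to it through the connector edge {3, 4}:
In[]:= χ2[3]
Out[]= {{4, 5}, {5, 6}, {6, 7}, {7, 4}, {1, 1}, {2, 2}, {3, 3}, {1, 2}, {2, 3}, {3, 1}, {3, 4}}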
In[]:= χ3[n_, k_] := If[k == 1, cyclicgraph[n], Join[cyclicgraph[n], {{Floor[n/2], n + 1}}, n + 1 + # &[χ3[n, k - 1]]]] (* CYCLIC GRAPHS STRINGED TOGETHER *)
χ4[n_, k_] := If[k == 1, χ2[n], Join[Join[cyclicgraph[n], {{Floor[n/2], n + 1}}, n + 1 + # &[χ3[n, k - 1]]] + 4, stateGraph[0], {{3, 4}}]] (* CYCLIC GRAPHS STRINGED TOGETHER WITH STATE ATTACHED TO FIRST ONE (IN ONE END) *)
In[]:= χ[2, {a_, b_}, state_] := Join[cyclicgraph[a], {{Floor[a/2], a + 1}}, cyclicgraph[b, a + 1], JoinState[a + b + 1, Floor[a/2], state]]
In[]:= ResourceFunction["WolframModelPlot"][χ2[10], VertexLabels -> Automatic]
Out[]=
In[]:= ResourceFunction["WolframModelPlot"][χ3[10, 3], VertexLabels -> Automatic]
Out[]=
In[]:= χ[2, {a_, b_}, state_] := Join[cyclicgraph[a], {{Floor[a/2], a + 1}}, cyclicgraph[b, a + 1], JoinState[a + b + 1, Floor[a/2], state]]
χ[2, {2, 3}, 2]
In[]:= ResourceFunction["WolframModelPlot"][χ[2, {2, 3}, 2], VertexLabels -> Automatic]
Out[]=
◼ Definition (Partial computable). A partial function f : ℕ^k -> ℕ is said to be partial computable if there exists an HRS (H, ->) (where H is a set of hypergraphs and -> is a binary relation on H) and one map χ : ℕ^k -> H out of the one-parameter family of maps defined above such that for all n ∈ ℕ^k for which f(n) is defined, χ(n) ∈ Ω(H) and χ(n) ⇝ G, where G is isomorphic to the f(n) cyclic graph. Also, for every n outside the domain of f, χ(n) ↑.
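As a small consistency check of the encoding (my own addition), χ1[4] and cyclicgraph[4] are the same directed 5-cycle up to relabelling of the vertices:
In[]:= IsomorphicGraphQ[Graph[DirectedEdge @@@ χ1[4]], Graph[DirectedEdge @@@ cyclicgraph[4]]]
Out[]= True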
Now that we have a notion of computability, we can prove a few theorems/lemmas.
◼ Lemma. The successor function n ↦ n + 1 is partial computable.
◼ Proof. Let χ be χ2. Then, we can show that for all n, χ2(n) ⇝ C_{n+1} under the rule successor:
In[]:= successor = {{x1, x2}, {x2, x3}, {x3, x}, {x3, x3}, {x2, x2}, {x1, x1}, {x1, x4}, {x4, x5}} <-> {{x4, x6}, {x6, x5}};
In[]:= FindWolframModelProof[χ2[2] <-> cyclicgraph[3], successor]
Out[]= ProofObject: Logic: EquationalLogic, Steps: 63, Theorem: (4⊕5)⊙((5⊕6)⊙((6⊕4)⊙((1⊕1)⊙((2⊕2)⊙((3⊕3)⊙((1⊕2)⊙(1))))))) = (0⊕1)⊙(1)
where the FindWolframModelProof function is defined in the Appendix, taken directly from [3]. We can visualise the evolution:
In[]:= ResourceFunction["WolframModel"][{{x1, x2}, {x2, x3}, {x3, x}, {x3, x3}, {x2, x2}, {x1, x1}, {x1, x4}, {x4, x5}} -> {{x4, x6}, {x6, x5}}, χ2[2], "EventsStatesPlotsList"]
Out[]=
◼ Lemma. The identity function n ↦ n is computable.
◼ Proof. Let χ be χ1. The rule is the empty rule.
In[]:= identity = {};
◼ Lemma. The function n ↦ 0 is computable.
◼ Proof. Let χ be χ1. The rule is constant 0, given below:
In[]:= constant0 = convertToStringRule[{{1, 2}, {2, 3}} <-> {{1, 3}}];
In[]:= FindWolframModelProof[χ1[10] <-> cyclicgraph[0], constant0]
Out[]= ProofObject: Logic: EquationalLogic, Steps: 17, Theorem: (5⊕6)⊙((6⊕7)⊙((7⊕8)⊙((8⊕9)⊙(1)))) = 0⊕0
◼ Lemma. The function n ↦ kn where k ∈ ℕ is computable.
◼ Proof. Let χ be χ3 (with the second argument being k; note that we can enumerate all of these). The rule is:
In[]:= multiplybyconstant = convertToStringRule /@ {{{1, 2}, {3, 2}, {3, 4}} <-> {{1, 4}, {3, 2}, {5, 6}, {6, 7}, {7, 5}, {5, 5}, {6, 6}, {7, 7}, {5, 1}}, {{1, 2}, {2, 3}, {3, 1}, {1, 1}, {2, 2}, {3, 3}, {3, 4}, {4, 5}, {5, 6}} <-> {{4, 6}}};
In[]:= FindWolframModelProof[χ3[10, 3] <-> cyclicgraph[30], multiplybyconstant]
Out[]= ProofObject: Logic: EquationalLogic, Steps: 240, Theorem: (0⊕1)⊙((1⊕2)⊙((2⊕3)⊙((3⊕4)⊙((4⊕5)⊙((5⊕6)⊙((6⊕7)⊙(1))))))) = (0⊕1)⊙(1)
◼ Lemma. The function (n1, n2) ↦ n1 + n2 is computable.
◼ Proof. Let's choose χ[2, {n1, n2}, 0] as the initial graph. The rule is as follows:
In[]:= add2[state_] := convertToStringRule /@ {Join[{{2, 1}, {2, 3}, {4, 3}}, JoinState[4, 2, state]] -> Join[{{4, 1}, {2, 3}}, JoinState[4, 2, state + 1]], Join[{{1, 2}, {2, 3}}, JoinState[3, 1, state + 1]] -> {{1, 3}}}
In[]:= ResourceFunction["WolframModel"][add2[0], χ[2, {4, 5}, 0], 3, "EventsStatesPlotsList"]
Out[]=
These results provide some intuition on how we might define computability on hypergraph rewriting systems. However, I think that there are better ways of defining computability. Proving that computability on hypergraph rewriting systems is closed under function composition, primitive recursion, and minimalization is non-trivial. That's why I decided to compile different existing models of computation into hypergraph rewriting systems and see which ones are the most "natural" to hypergraph rewriting systems, because it is difficult, a priori, to come up with a definition of computability of partial functions that leverages the flexibility of hypergraph rewriting systems. So, I've decided to explicitly convert finite state machines, single-tape Turing machines, register machines, and "Faux Racket" into hypergraph rewriting systems. The conclusion is that hypergraph rewriting systems, although Turing-complete like other models, are "powerful"/efficient enough to compile a functional language like Faux Racket with only a very minimal set of replacement rules. On a surface level, it has been realised that hypergraph rewriting systems are more efficient than single-tape Turing machines for the same reason that multi-tape Turing machines are more efficient than single-tape Turing machines. More precisely, the notion of concurrent programming is very natural to an HRS. For example, as opposed to how an ordinary compiler might create stack frames to perform computations sequentially, in hypergraph rewriting systems we can create a "new stack" by creating a disconnected hypergraph and performing computations there. In this way, we can increase the "dimension" of an HRS to support concurrent programming whenever possible. These ideas are elaborated upon in the last section on Faux Racket to hypergraph conversion. It will be apparent to the reader that when we simulate Turing machines and register machines using hypergraph rewriting, we only use a very specific class of hypergraphs and rewrite rules.
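To make the "new stack" idea concrete, here is a minimal toy sketch of my own (not the Faux Racket compiler itself): a rule whose right-hand side introduces fresh vertices that are not connected to the matched edge, so every application spawns a disconnected component on which further rewriting can proceed independently:
In[]:= ResourceFunction["WolframModel"][{{x, y}} -> {{x, y}, {z, w}, {w, z}}, {{1, 2}}, 2, "StatesList"]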
Section 1: Converting FSMs into Hypergraph rewriting
Subsection 1. DFAs
Given a DFA (Q, Σ, δ, q0, F), we need to convert it into the associated hypergraph rewriting system, i.e. an initial hypergraph and a collection of rules.
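As a concrete running example (the same transition table used further below), here is a two-state DFA over {0, 1} whose state simply tracks the last symbol read, written as a list of transitions {state, symbol} -> {new state}; the name exampleDFA is just for illustration:
In[]:= exampleDFA = {{0, 0} -> {0}, {0, 1} -> {1}, {1, 0} -> {0}, {1, 1} -> {1}};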
It is quite trivial to convert DFAs to hypergraph rewriting. A DFA consists of a number of states and an alphabet. So we begin by defining simple graphs for symbols in an alphabet Σ:
In[]:= alphabetGraphHelper[n_] := If[n == 0, {{1, 1}, {1, 2}}, If[n == 1, {{1, 2}, {2, 1}}, Module[{x}, x = Last[#][[1]] &[alphabetGraphHelper[n - 1]]; Join[Drop[#, -1], {{x, x + 1}, {x + 1, 1}}] &[alphabetGraphHelper[n - 1]]]]]
In[]:= alphabetGraph[n_] := If[n == 0, {{1, 1}, {1, 2}}, Append[alphabetGraphHelper[n], {1, n + 2}]]
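For example, the symbol 2 is encoded as a 3-cycle with an extra "tag" edge {1, n + 2} that distinguishes it from the other alphabet graphs:
In[]:= alphabetGraph[2]
Out[]= {{1, 2}, {2, 3}, {3, 1}, {1, 4}}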
In[]:= ResourceFunction["WolframModelPlot"][#, VertexLabels -> Automatic] & /@ Table[alphabetGraph[n], {n, 0, 8}]
Out[]=
In[]:= stateGraph[n_] := Join[Table[{i, i}, {i, 1, n + 3}], Table[{i, i + 1}, {i, 1, n + 3 - 1}], {{n + 3, 1}}]
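For example, the graph for state 1 is a 4-cycle in which every vertex also carries a self-loop:
In[]:= stateGraph[1]
Out[]= {{1, 1}, {2, 2}, {3, 3}, {4, 4}, {1, 2}, {2, 3}, {3, 4}, {4, 1}}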
In[]:= graphToHypergraph[g_] := g /. (x_Integer \[DirectedEdge] y_Integer) :> {x, y}
In[]:= concatenategraphs[g1_, g2_] := graphToHypergraph[Join[{Max[VertexList[g1]] \[DirectedEdge] Max[VertexList[#]]}, #]] &[EdgeList[GraphDisjointUnion[g1, g2]]]
In[]:= ResourceFunction["WolframModelPlot"][#, VertexLabels -> Automatic] & /@ Table[stateGraph[n], {n, 1, 8}]
Out[]=
The reason why we have loops on state graphs is to ensure that when we apply rewrite rules, we have no overlapping subhypergraphs with state graphs. Essentially, we have to obey four conditions: no two distinct state graphs are subgraphs of each other, no two distinct alphabet graphs are subgraphs of each other, a state graph is not a subgraph of any alphabet graph, and any alphabet graph is not a subgraph of any state graph. The above encoding satisfies these conditions. The reason why we start iterating the state graphs from 3 is purely due to some edge cases that need to be handled.
At a particular step, there will be all types of rewrite rules. But we want to make sure that only one rewrite rule will be applied: the one that brings the DFA onto the first step. That's where we can use graph concatenation. For instance, let's say 1 is represented by hypergraph A1 and 0 is represented by hypergraph A0. Then, we will also have hypergraphs for the states q0 and q1, represented by S0 and S1. Then, if the input to the DFA is 1011, the input hypergraph would be (S0 χ A1) ⊕ A0 ⊕ A1 ⊕ A1, where χ and ⊕ are binary operators on the space of hypergraphs. Then, the first transition function applied would be δ(q0, 1). There should be a rewrite rule R01 corresponding to this transition function that brings the hypergraph (S0 χ A1) ⊕ A0 ⊕ A1 ⊕ A1 into (S1 χ A0) ⊕ A1 ⊕ A1. And so on. The rule corresponding to the transition function δ(qi, _) should apply iff the first graph is of the form Si χ Aj, where j can be anything. Then, we have a finite rewrite rule as well that applies to any state graph but not any other graph. Thus, we have to satisfy the following conditions:
◼ Any state graph Si should not be equal to or be a subgraph of any other state graph Sj where i ≠ j.
◼ Any state graph Si should not be equal to or be a subgraph of any other Aj where j ≠ i. Also, Aj should not be a subgraph of any other state graph Si.
◼ Note that this implies that no graph Si χ Aj is a subgraph of or is equal to the graph Sk χ Al where (i, j) ≠ (k, l).
◼ The rule Rij, corresponding to the transition function δ(i, j) (i.e. when in state qi and reading alphabet symbol j), should apply iff the hypergraph is of the form Si χ Aj ⊕ Ai1 ⊕ Ai2 ⊕ Ai3 ⊕ ...
In[]:= concatenategraphs[g1_, g2_] := graphToHypergraph[Join[{Max[VertexList[g1]] \[DirectedEdge] Max[VertexList[#]]}, #]] &[EdgeList[GraphDisjointUnion[g1, g2]]]
In[]:= initialWordToHypergraphHelper[word_List, acc_] := If[word == {}, acc, initialWordToHypergraphHelper[Drop[word, 1], concatenategraphs[acc, alphabetGraph[word[[1]]]]]]
In[]:= initialWordToHypergraph[word_List] := initialWordToHypergraphHelper[Drop[word, 1], concatenategraphs[stateGraph[0], alphabetGraph[word[[1]]]]]
In[]:= Graph[initialWordToHypergraph[{0, 1, 2, 3, 4, 5, 5, 2, 2, 3}], VertexLabels -> Automatic]
Out[]=
So we can convert any word into an initial hypergraph.
In[]:= DFARule[{stateI_, alphabet_, stateF_}] := ((# -> ReplaceAll[stateGraph[stateF], Join[Table[i -> Max[VertexList[#]] + i, {i, 1, stateF + 3 - 1}], {stateF + 3 -> Max[VertexList[#]]}]]) &)[concatenategraphs[stateGraph[stateI], alphabetGraph[alphabet]]]
The following represents δ(q1, 2) = q0:
In[]:= RulePlot[ResourceFunction["WolframModel"][DFARule[{1, 2, 0}]], VertexLabels -> Automatic]
Out[]=
The preserved vertex is the link between the symbol (in Σ) to the left and the symbol to the right (in Σ).
In[]:= DFAtoHypergraph[rules_List, word_] := ResourceFunction["WolframModel"][Table[DFARule[i], {i, Map[Join[#[[1]], #[[2]]] &, (rules /. Rule -> List)]}], initialWordToHypergraph[word], Length[word], "EventsStatesPlotsList"]
In[]:= DFAtoHypergraph[{{0, 0} -> {0}, {0, 1} -> {1}, {1, 0} -> {0}, {1, 1} -> {1}}, {0, 1, 1, 1, 1, 1, 0}]
Out[]=
The DFA is constructed in such a way that if the final symbol is 0, then the final state is 0, and if the final symbol is 1, the final state is 1. We can see that the above holds.
In[]:= DFAtoHypergraph[{{0, 0} -> {0}, {0, 1} -> {1}, {1, 0} -> {0}, {1, 1} -> {1}}, {0, 1, 1, 0, 1, 1, 1}]
Out[]=
So, DFAs are converted to hypergraph rewriting systems in a pretty direct manner. The state of the machine is represented by a subgraph that attaches itself to the symbol it is currently reading and then moves right to the next symbol. It continues to do so until there are no symbols left to read, and we end up with the final state.
One may ask: why do we need a state subgraph? Finite state machines, Turing machines, and register machines are all sequential in nature, so states in these models are used to impose a definite order of evaluation. In hypergraph rewriting systems, the analogue of a state is the subgraph that moves along the hypergraph to dictate the application of a rule, effectively imposing a definite updating order. From here, it is also pretty easy to implement Turing machines, because a Turing machine is similar to a DFA, except that the state subgraph can move either to the left or to the right and can change the symbols.
Subsection 2. NFAs
Section 2. Converting Turing-complete Imperative machines into Hypergraph rewriting
Subsection 1. Turing Machines
We use the same approach as in DFAs.
In[]:= initialTapeToHypergraph[word_List] := Module[{x, y, z}, x = initialWordToHypergraph[word]; y = Max[VertexList[concatenategraphs[stateGraph[0], alphabetGraph[word[[1]]]]]]; z = Max[VertexList[x]]; Join[x, {{y, z + 1, z + 2}, {z, z + 3, z + 4, z + 5}}]]
In[]:= ResourceFunction["WolframModelPlot"][initialTapeToHypergraph[{0, 1, 2}], VertexLabels -> Automatic]
Out[]=
The hyperedges at the ends indicate the left and right ends of the tape. I have introduced those so that if the head wants to move to the right, beyond the currently defined tape, there will be a special rewrite rule that creates the 0 symbol and joins it to the existing tape, extending the right hyperedge to indicate the new end of the tape. The same goes for the left hyperedge.
In[]:= TuringRule[{stateI_, symbolI_} -> {stateF_, symbolF_}] := {# -> Join[JoinState[Max[VertexList[#]], Max[VertexList[#]], stateF], JoinAlphabet[stateF + 3 + Max[VertexList[#]], Max[VertexList[#]], symbolF]]} &[concatenategraphs[stateGraph[stateI], alphabetGraph[symbolI]]]
In[]:= TuringRule[{stateI_, symbolI_} -> {stateF_, symbolF_, offset_}] := If[offset == 0, TuringRule[{stateI, symbolI} -> {stateF, symbolF}], Module[{x = Max[VertexList[#]]}, If[offset == 1, (* the edge cases --- literally *) {Join[#, {{x, x + 1}}] -> Join[JoinState[x + 1, x + 1, stateF], JoinAlphabet[stateF + 3 + x + 1, x, symbolF], {{x, x + 1}}], Join[#, {{x, x + 1, x + 2, x + 3}}] -> Join[JoinState[x + 4, x + 4, stateF], JoinAlphabet[stateF + 3 + x + 4, x, symbolF], JoinAlphabet[stateF + 3 + x + 4 + symbolF + 2, x + 4, 0], {{x, x + 4}, {x + 4, x + 4 + stateF + 3 + symbolF + 6, x + 4 + stateF + 3 + symbolF + 7, x + 4 + stateF + 3 + symbolF + 8}}]}, If[offset == -1, {Join[#, {{x + 1, x}}] -> Join[JoinState[x + 1, x + 1, stateF], JoinAlphabet[stateF + 3 + x + 1, Max[VertexList[#]], symbolF], {{x + 1, x}}], Join[#, {{x, x + 1, x + 2}}] -> Join[JoinState[x + 5, x + 3, stateF], (* join the state at x+3 *) JoinAlphabet[stateF + 3 + x + 5, x, symbolF], JoinAlphabet[stateF