[WSC21] Investigating the pi calculus and Petri nets
by Rohan Mehta, Moravian Academy Upper School
Posted 1 year ago | 2177 Views | 1 Reply | 6 Total Likes
I first heard mention of the π-calculus a few years ago, and it really piqued my interest. After all, everyone's heard of the λ-calculus, but the π-calculus? For me, at least, it was completely new. Unfortunately, any information I could find about it online was basically impossible to understand, buried under so much math and notation that I soon gave up my search. However, when it came up as a potential project idea, I felt an itch to try to understand it again, this time armed with the powerful computational tools of the Wolfram Language. I wanted to create a visual and intuitive way to understand what it was really saying, and hopefully connect it to other forms of concurrent computation, like Petri nets, on a visual (rather than mathematical) level. While my intuitions about how all of these ideas fit together are still somewhat in flux, I think I have discovered a rather compelling visual alternative to how these structures are traditionally understood.
π-calculus 101
The π-calculus is the slightly more esoteric (and yet, far more practical!) cousin of the well-known λ-calculus (which is itself rather aptly described as the more theoretically inclined twin of the Turing machine). However, instead of describing sequential computation (like the λ-calculus), the π-calculus specializes in concurrent computation. It can represent systems in which potentially infinitely many agents are communicating with one another along potentially infinitely many channels. In other words, everything from a client querying a server to the innumerable interactions between the proteins in our cells' biological pathways falls within its reach. And just like in the λ-calculus, all this expressive power comes from composing almost suspiciously simple (and, in hindsight, extremely obvious) operations together. In fact, the most basic π-calculus programs boil down to nothing more than three things:
◼ Names (the most obfuscating aspect of the π-calculus, names act simultaneously as communication channels and data; I like to think of them as addresses: they can refer to a physical place that data can be sent to, or a piece of information used to locate that place)
◼ Send commands (pretty self-explanatory: a Send command sends a name to another name)
◼ Receive commands (again, not too cryptic: a Receive command receives a name from another name)
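To make these three primitives concrete, here is a minimal sketch in Python (my own toy encoding, not from the original notebook): names are plain strings, a process is a list of actions run in order, and a single communication step passes a name from a Send to a matching Receive, substituting it for the bound name.

```python
from dataclasses import dataclass

# A hypothetical, minimal encoding of the three primitives described above.
# Names are strings; they serve as both channels and data.

@dataclass(frozen=True)
class Send:      # send the name `payload` along the channel `chan`
    chan: str
    payload: str

@dataclass(frozen=True)
class Receive:   # receive a name along `chan` and bind it to `var`
    chan: str
    var: str

def substitute(actions, var, name):
    """Replace the bound name `var` with `name` in the remaining actions."""
    out = []
    for a in actions:
        if isinstance(a, Send):
            out.append(Send(name if a.chan == var else a.chan,
                            name if a.payload == var else a.payload))
        else:
            out.append(Receive(name if a.chan == var else a.chan, a.var))
    return out

def communicate(sender, receiver):
    """One reduction step: x<z>.P | x(y).Q  ->  P | Q[z/y]."""
    s, r = sender[0], receiver[0]
    assert isinstance(s, Send) and isinstance(r, Receive) and s.chan == r.chan
    return sender[1:], substitute(receiver[1:], r.var, s.payload)

p, q = communicate([Send("x", "z")], [Receive("x", "y"), Send("y", "w")])
print(p)  # []
print(q)  # [Send(chan='z', payload='w')]
```

Notice how the received name `z` replaces the bound name `y` everywhere in the rest of the receiver, which is exactly the "names are addresses" idea above.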
A "Mild" π-Calculus Program
A "Medium" π-Calculus Program
A "Spicy" π-Calculus Program
Evaluating a π-calculus Program
Now that we've expressed the syntax we will be using, it's time to consider how we might evaluate a π-calculus program. It's actually surprisingly straightforward. The first thing to notice is that for most π-calculus programs, there won't be a single, definitive order in which to evaluate all the processes that the program consists of. For example, if two processes start off by sending some information along the same channel, then one has to go before the other, since a single channel can't accommodate two pieces of information at the same time. But while it would be equally valid to evaluate either of these processes first, choosing to evaluate one before the other might lead to a different eventual outcome.
But what if two processes don't interfere with one another? What if they send information along two separate channels, or replicate themselves? Then they can be executed concurrently. For our purposes, though, it won't be necessary to actually evaluate a given π-calculus program in parallel; we will get the same result even if we evaluate the program one process at a time. Obviously, the whole point of π-calculus programs is that they are supposed to be executed in parallel, over a network of devices (not just on a single machine like we're doing now). What we're doing here is somewhat more pedagogical: not exactly trying to build a useful program in the π-calculus, but instead considering how such programs work, how we might be able to better visualize and understand them, and, in an NKS-like spirit, trying to see whether even simple π-calculus programs are still capable of exhibiting complicated behavior.
Choosing a Process to Evaluate
Sending and Receiving
Updating the Program
Let's Evaluate Some π-calculus Programs!
Now we can finally try our hand at evaluating a π-calculus program!
In[]:= simplePrograms = {mildProgram, mediumProgram};
Reduce our terminating programs
reductionRule /@ simplePrograms
Out[]= {
  {{{PiSend[z, x], PiTerminate[]},
    {PiReceive[y, x], PiSend[x, y], PiReceive[y, x], PiTerminate[]},
    {PiReceive[v, z], PiSend[v, v], PiTerminate[]}},
   {{PiSend[x, z], PiReceive[z, x], PiTerminate[]},
    {PiReceive[v, z], PiSend[v, v], PiTerminate[]}},
   {{PiReceive[z, x], PiTerminate[]},
    {PiSend[x, x], PiTerminate[]}}},
  {{{{PiSend[x, b]} || {PiSend[x, e]}, PiTerminate[]},
    {PiReceive[x, b], PiSend[x, c], PiTerminate[]},
    {PiReceive[x, c], {PiSend[x, d]} || {PiSend[x, f2]}, PiTerminate[]},
    {PiReceive[x, e], PiSend[x, f1], PiTerminate[]},
    {{PiReceive[x, f1]} || {PiReceive[x, f2]}, PiSend[x, g], PiTerminate[]},
    {PiReceive[x, d], PiSend[x, h1], PiTerminate[]},
    {PiReceive[x, g], PiSend[x, h2], PiTerminate[]},
    {{PiReceive[x, h1]} || {PiReceive[x, h2]}, PiSend[x, i], PiTerminate[]}},
   {{PiReceive[x, b], PiSend[x, c], PiTerminate[]},
    {PiReceive[x, c], {PiSend[x, d]} || {PiSend[x, f2]}, PiTerminate[]},
    {PiSend[x, f1], PiTerminate[]},
    {{PiReceive[x, f1]} || {PiReceive[x, f2]}, PiSend[x, g], PiTerminate[]},
    {PiReceive[x, d], PiSend[x, h1], PiTerminate[]},
    {PiReceive[x, g], PiSend[x, h2], PiTerminate[]},
    {{PiReceive[x, h1]} || {PiReceive[x, h2]}, PiSend[x, i], PiTerminate[]}},
   {{PiReceive[x, b], PiSend[x, c], PiTerminate[]},
    {PiReceive[x, c], {PiSend[x, d]} || {PiSend[x, f2]}, PiTerminate[]},
    {PiSend[x, g], PiTerminate[]},
    {PiReceive[x, d], PiSend[x, h1], PiTerminate[]},
    {PiReceive[x, g], PiSend[x, h2], PiTerminate[]},
    {{PiReceive[x, h1]} || {PiReceive[x, h2]}, PiSend[x, i], PiTerminate[]}},
   {{PiReceive[x, b], PiSend[x, c], PiTerminate[]},
    {PiReceive[x, c], {PiSend[x, d]} || {PiSend[x, f2]}, PiTerminate[]},
    {PiReceive[x, d], PiSend[x, h1], PiTerminate[]},
    {PiSend[x, h2], PiTerminate[]},
    {{PiReceive[x, h1]} || {PiReceive[x, h2]}, PiSend[x, i], PiTerminate[]}},
   {{PiReceive[x, b], PiSend[x, c], PiTerminate[]},
    {PiReceive[x, c], {PiSend[x, d]} || {PiSend[x, f2]}, PiTerminate[]},
    {PiReceive[x, d], PiSend[x, h1], PiTerminate[]},
    {PiSend[x, i], PiTerminate[]}},
   {{PiReceive[x, b], PiSend[x, c], PiTerminate[]},
    {PiReceive[x, c], {PiSend[x, d]} || {PiSend[x, f2]}, PiTerminate[]},
    {PiReceive[x, d], PiSend[x, h1], PiTerminate[]}}}}
Reduce our non-terminating program by 100 steps
In[]:= reductionRule[spicyProgram, 100]
Out[]= {
  {PiReplicate[{{PiSend[x, b]} || {PiSend[x, e]}, PiTerminate[]}],
   PiReplicate[{PiReceive[x, b], PiSend[x, c], PiTerminate[]}],
   PiReplicate[{PiReceive[x, c], {PiSend[x, d]} || {PiSend[x, f2]}, PiTerminate[]}],
   PiReplicate[{PiReceive[x, e], PiSend[x, f1], PiTerminate[]}],
   PiReplicate[{{PiReceive[x, f1]} || {PiReceive[x, f2]}, PiSend[x, g], PiTerminate[]}],
   PiReplicate[{PiReceive[x, d], PiSend[x, h1], PiTerminate[]}],
   PiReplicate[{PiReceive[x, g], PiSend[x, h2], PiTerminate[]}],
   PiReplicate[{{PiReceive[x, h1]} || {PiReceive[x, h2]}, PiSend[x, i], PiTerminate[]}]},
  ⋯ 99 ⋯,
  {PiReplicate[{{PiSend[x, b]} || {PiSend[x, e]}, PiTerminate[]}],
   ⋯ 32 ⋯,
   {PiReceive[x, b], ⋯ 1 ⋯, ⋯ 1 ⋯}}}
Converting to Canonical π-Calculus Syntax
Well... it looks like everything worked, but the result is pretty much impossible to parse. There must be a better way to visualize what's really happening here. While the syntax we've chosen is very intuitive, it's also quite verbose, and pretty difficult to work with in large quantities. As such, it may be worthwhile to convert from our custom syntax to the more concise, canonical π-calculus syntax, so that we can visualize things a bit more succinctly.
To do this, we need to map the various elements of our syntax to their representations in the canonical π-calculus syntax. In other words, our versions of "Send", "Receive", "Choice", etc. need to get converted to their canonical counterparts. Thanks to the Wolfram Language's powerful pattern matching, this is actually pretty easy!
Convert custom π-calculus syntax to canonical syntax for a single process
processPrettyPrint[process_] := Module[
  {replicateConvert, orConvert, sendConvert, receiveConvert, converted},
  replicateConvert = PiReplicate[{a__}] :>
    "!" <> "(" <> processPrettyPrint[{a}] <> ")";
  orConvert = a__ || b__ :>
    (processPrettyPrint[a] <> "+" <> processPrettyPrint[b]);
  sendConvert = PiSend[sent_, to_] :>
    (ToString@to <> "〈" <> ToString@sent <> "〉");
  receiveConvert = PiReceive[received_, at_] :>
    (ToString@at <> "(" <> ToString@received <> ")");
  converted = process /. {sendConvert, receiveConvert, orConvert,
     replicateConvert, PiTerminate[] -> "0"};
  If[converted[[0]] === List, StringRiffle[converted, "."], converted]]
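For readers without Mathematica handy, the same conversion can be sketched in Python (my own tuple encoding of processes, with ASCII `<` and `>` standing in for the angle brackets of the canonical syntax; note that in this toy encoding the channel comes first in each action tuple, unlike the `PiSend[sent, to]` argument order above):

```python
# A rough Python analogue of the processPrettyPrint conversion: each action
# becomes canonical pi-calculus notation, chained with "." and ending in "0".

def pretty(proc):
    parts = []
    for kind, chan, name in proc:
        # sends render as chan<name>, receives as chan(name)
        parts.append(f"{chan}<{name}>" if kind == 'send' else f"{chan}({name})")
    parts.append("0")  # every sequential process ends in the terminated process
    return ".".join(parts)

print(pretty([('send', 'x', 'z')]))                      # x<z>.0
print(pretty([('recv', 'x', 'y'), ('send', 'y', 'x')]))  # x(y).y<x>.0
```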
Convert custom π-calculus syntax to canonical syntax for an entire program, and then style it
In[]:= processesPrettyPrint[processes_, color_] :=
  Framed[Column[
    Style[Framed[processPrettyPrint[#] <> "|"], Background -> color] & /@
      processes]]
In[]:= lightColorList = ListIcon;
Now we can see what the output of our π-calculus interpreter looks like in the canonical π-calculus syntax! Note that what is shown here is either irreducible, or reduces to nothing. If the expression reduces to nothing, then technically it has not been fully evaluated. But if we did fully evaluate it, there would be nothing to visualize!
Display the fully-evaluated form of our first two programs in the canonical syntax
In[]:= processesPrettyPrint[#[[-1]], RandomChoice@lightColorList] & /@
    reductionRule /@ simplePrograms // Row[#, " "] &
Out[]= x(z).0 | x〈x〉.0 |    b(x).c〈x〉.0 | c(x).d〈x〉+f2〈x〉.0 | d(x).h1〈x〉.0 |
Much better! We can do the same thing with our non-terminating program, too. Let's see what it looks like after five reductions!
Display the fully-evaluated form of our third, non-terminating program in the canonical syntax
In[]:= processesPrettyPrint[#[[-1]], RandomChoice@lightColorList] & @
    reductionRule[spicyProgram, 5]
Out[]= !(b〈x〉+e〈x〉.0) | !(b(x).c〈x〉.0) | !(c(x).d〈x〉+f2〈x〉.0) | !(e(x).f1〈x〉.0) |
       !(f1(x)+f2(x).g〈x〉.0) | !(d(x).h1〈x〉.0) | !(g(x).h2〈x〉.0) | !(h1(x)+h2(x).i〈x〉.0) |
       d(x).h1〈x〉.0 | d(x).h1〈x〉.0 |
       f1(x)+f2(x).g〈x〉.0 | f1(x)+f2(x).g〈x〉.0 | f1(x)+f2(x).g〈x〉.0 |
While our first two programs appear to have contracted in size, eventually reaching some smaller irreducible state, our third, non-terminating program seems to have expanded. This is because it includes a replication operator (several of them, actually), while our simpler programs do not. We can even animate the evolution of our programs as they're being evaluated to see these two opposite trends in action.
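The growth is easy to reproduce in a toy model (my own encoding, not the notebook's): a replicated process stays in the program state forever, and each unfolding adds a fresh copy alongside it, so the state can only grow.

```python
# Toy model: a program state is a tuple of processes; a replicated process is
# a ('bang', body) entry that survives every unfolding step.

def unfold(state, i):
    """Unfold the replicated process at index i: !P -> !P | P."""
    assert state[i][0] == 'bang'
    return state + (state[i][1],)

state = (('bang', (('send', 'x', 'a'),)),)
for _ in range(3):
    state = unfold(state, 0)
print(len(state))  # 1 replicated process plus 3 spawned copies -> 4
```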
Convert custom π-calculus syntax to canonical syntax after each reduction step
In[]:= visualizeEvolution[processes_, stepCount_: 0] := Module[{randomColor},
  randomColor = RandomChoice@lightColorList;
  (processesPrettyPrint[#, randomColor] & /@
    reductionRule[processes, stepCount])]
In[]:= dynamicallyVisualizeEvolution[processes_, stepCount_: 0] :=
  FoldList[Row[{#1, #2}, "⟶"] &,
    visualizeEvolution[processes, stepCount]] //
      ListAnimate[#, Alignment -> {Left, Center}] &
Visualize the contracting evolution of our first two programs in the canonical syntax
In[]:= dynamicallyVisualizeEvolution /@ simplePrograms // Column
Out[]= x〈z〉.0 | x(y).y〈x〉.x(y).0 | z(v).v〈v〉.0 | ⟶ z〈x〉.x(z).0 | z(v).v〈v〉.0 | ⟶ x(z).0 | x〈x〉.0 |
       b〈x〉+e〈x〉.0 | b(x).c〈x〉.0 | c(x).d〈x〉+f2〈x〉.0 | e(x).f1〈x〉.0 | f1(x)+f2(x).g〈x〉.0 | d(x).h1〈x〉.0 | g(x).h2〈x〉.0 | h1(x)+h2(x).i〈x〉.0 |
Visualize the expanding evolution of our third, non-terminating program in the canonical syntax
dynamicallyVisualizeEvolution[spicyProgram, 4]
Out[]= !(b〈x〉+e〈x〉.0) | !(b(x).c〈x〉.0) | !(c(x).d〈x〉+f2〈x〉.0) | !(e(x).f1〈x〉.0) |
       !(f1(x)+f2(x).g〈x〉.0) | !(d(x).h1〈x〉.0) | !(g(x).h2〈x〉.0) | !(h1(x)+h2(x).i〈x〉.0) |
So this is what's really happening underneath the hood of our interpreter! But while these visualizations are definitely a step up from before, they're still missing a dimension of the reduction process. Each of these reductions implicitly carries with it a choice made by the interpreter: a choice to evaluate one process instead of another. Therefore the outcome we're seeing here may not be the only possible outcome (that's the nondeterminism of the π-calculus in action)! How can we tell if there are multiple possible outcomes for a given π-calculus program, and, if possible, visualize them all at once?
Visualizing the Multiway System of Reductions
Enter the idea of a Multiway System. We can express these systems as graphs, where each node represents a state of our system, and each edge represents a possible transformation from the current state to a future state. Any path within this graph represents a possible way our system could evolve over time, although which one it actually takes is, ultimately, random. To generate such a graph, we can't just evaluate our program for a single ordering of its constituent processes; we have to evaluate it for every possible ordering of them!
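The idea can be sketched compactly in Python (my own minimal encoding, independent of the notebook's updateStep/activeProcesses machinery): at each state we enumerate every matching send/receive pair, each match yields one edge, and a breadth-first search collects the whole multiway graph.

```python
from collections import deque

# A state is a tuple of processes; a process is a tuple of
# ('send', chan, payload) or ('recv', chan, var) actions.

def subst(proc, var, name):
    """Replace `var` with `name` (bound-name shadowing ignored in this toy)."""
    out = []
    for kind, chan, arg in proc:
        chan2 = name if chan == var else chan
        arg2 = name if (kind == 'send' and arg == var) else arg
        out.append((kind, chan2, arg2))
    return tuple(out)

def steps(state):
    """All successor states reachable by one communication."""
    succs = set()
    for i, p in enumerate(state):
        for j, q in enumerate(state):
            if i == j or not p or not q:
                continue
            (pk, pc, pv), (qk, qc, qv) = p[0], q[0]
            if pk == 'send' and qk == 'recv' and pc == qc:
                new = list(state)
                new[i], new[j] = p[1:], subst(q[1:], qv, pv)
                succs.add(tuple(new))
    return succs

def multiway(start):
    """Breadth-first search collecting every edge of the multiway graph."""
    edges, seen, frontier = set(), {start}, deque([start])
    while frontier:
        s = frontier.popleft()
        for t in steps(s):
            edges.add((s, t))
            if t not in seen:
                seen.add(t)
                frontier.append(t)
    return edges

# Two senders compete for one receiver on channel x: two possible first moves.
prog = ((('send', 'x', 'a'),), (('send', 'x', 'b'),), (('recv', 'x', 'y'),))
print(len(multiway(prog)))  # 2
```

Each edge corresponds to one interpreter choice, so a state with several outgoing edges is exactly a nondeterministic branch point.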
Generate all possible evaluation outcomes for a given π-calculus program and turn them into the connections of a graph
In[]:= allPossible[processes_] :=
  updateStep[processes, #, True, True] & /@ (activeProcesses@processes)
In[]:= helper1[processes_] :=
  ((allPossible /@ #[[1]][[2]]) & /@ processes) // Flatten[#, 2] &

helper2[processes_] :=
  (processes[[1]][[1]] -> #) -> Placed[processes[[2]][[1]], 3/4] & /@
    processes[[1]][[2]]

helper3[processes_] := helper2[#] & /@ processes

generateConnections[processes_, stepCount_: -1] :=
  If[stepCount == -1,
    (helper3 /@ NestWhileList[DeleteDuplicates@*helper1,
       allPossible@processes, #1 != #2 &, 2]),
    (helper3 /@ NestList[DeleteDuplicates@*helper1,
       allPossible@processes, stepCount])] // Flatten[#, 2] &
Graph these connections, and build up the Multiway System
In[]:= graphMultiwaySystem[connections_, small_: True] :=
  Module[{newConnections},
    newConnections = (# /. ((a_ -> b_) -> c_) :>
        ((Column@(processPrettyPrint /@ (a)) ->
          Column@(processPrettyPrint /@ (b))) -> c) & /@ connections);
    Graph[((#[[1]]) & /@ (newConnections)) // DeleteDuplicates,
      VertexLabels -> If[small, Placed["Name", Tooltip], None],
      EdgeLabels -> If[small, newConnections, None],
      EdgeLabelStyle -> If[small,
        Directive[Black, Bold, FontSize -> Scaled[0.0032],
          FontFamily -> "Courier",
          Background -> RandomChoice[lightColorList]], None]]]
In[]:= generateMultiwaySystem[processes_, stepCount_: -1, small_: True] :=
  processes // generateConnections[#, stepCount] & //
    graphMultiwaySystem[#, small] &
In[]:= generateMultiwaySystem@mildProgram
Out[]=
Woah! Now we can finally see what's really going on. This diagram shows us that our first program is actually deterministic: its Multiway System is just a linear chain, so it can only ever evolve in one way. But what about our second program?
In[]:= generateMultiwaySystem@mediumProgram
Out[]=
As we might expect, it's a good deal more complicated! Its Multiway System has three distinct paths, each of which leads to a distinct end result (a distinct program state!). In particular, each split in the graph represents the existence of a Choice operation (although it's important to note that a π-calculus program can still be nondeterministic even without a Choice operator, for example, if there are multiple processes that start with a Send or Replicate command).
Just as before, we can visualize how these graphs are built up over time with a little animation.
In[]:= dynamicallyGenerateMultiwaySystem[processes_, stepCount_: 0, small_: True] :=
  graphMultiwaySystem[#, small] & /@
    Table[generateConnections[processes, i], {i, 0, stepCount}] //
      ListAnimate[#, Alignment -> Center] &
In[]:= dynamicallyGenerateMultiwaySystem[mediumProgram, 5, True]
Out[]=
How about our final program, though? The one that includes Replication operators. What do you think it might look like? Well, let's try to animate the evolution of its Multiway Graph for three time steps (remember, we can't render the full graph, because the program never terminates).
In[]:= dynamicallyGenerateMultiwaySystem[spicyProgram, 3, False]
Out[]=
Wow! It almost looks like a snowflake. That's an interesting thing to note about π-calculus programs that have all their processes wrapped in Replication commands: they often tend to very closely mirror some symmetrical shape. That's because each process can always split out and Replicate itself. If this were the only thing they could do, then the graph would be perfectly symmetrical, but sometimes instead of replicating, our interpreter might choose to evaluate one of the processes, which introduces a few irregularities into their overall shape. We can also notice some other interesting motifs within these Multiway Graphs. For instance, consider the topology of the second-stage graph of our program above. It contains a loop: a place where our π-calculus program can actually revert back to a previous state! What's happening here?
Well, if we turn on the labeling feature for our graph’s edges (the result is pretty ugly since everything is packed together so tightly, so I won’t show it here) we’ll see that the operation carrying the program state away is a replication operation, and the operation carrying it back to its initial state is an evaluation of the operation that was replicated (in this case, a Send). Since there are no receivers (all of them are still waiting to be replicated), evaluating the Send equates to doing nothing, and returns the program to its initial state. While they’re more difficult to pick out, we can notice these same motifs in the third- and fourth-stage graphs too!
"Mining the Computational Universe" of Simple π-calculus Programs
So we discovered some pretty cool structures in the Multiway Graphs of our three programs! But our sample size was severely limited. To get a better sense of the diversity of Multiway Graph topologies that π-calculus programs can generate, we can try to "mine the computational universe" of π-calculus programs by randomly generating programs within certain parameters.
Generating a Random π-calculus Expression
Experiments Without Replication
Let's start out simple, and just try to generate 2x2 π-calculus programs. For simplicity, we won't include any replication operators for right now. Using our syntax-conversion scheme, we can visualize the evolution of 10 random 2x2 programs in the canonical π-calculus syntax.
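The shape of such a random generator can be sketched in Python (the name pool and the uniform choice of action kinds are my own assumptions; the notebook's randomlyGenerateProcesses may differ): an n x m program is n parallel processes of m random actions each.

```python
import random

# Hypothetical sketch of random program generation: channels and payloads are
# drawn from a small pool of names, and each action is a random send or receive.

NAMES = ['a', 'b', 'c']

def random_program(n_procs, n_actions, rng):
    return tuple(
        tuple((rng.choice(['send', 'recv']), rng.choice(NAMES), rng.choice(NAMES))
              for _ in range(n_actions))
        for _ in range(n_procs))

rng = random.Random(0)      # seeded for reproducibility
prog = random_program(2, 2, rng)
print(len(prog), len(prog[0]))  # 2 2
```

Restricting the name pool is what makes interaction likely: with only a few channels, random sends and receives frequently match up.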
Visualize the evolution of 10 2x2 programs
In[]:= randomlyVisualizeEvolution /@ Table[randomlyGenerateProcesses[2, 2], {n, 10}]
O
u
t
[
]
=
b
〈
c
〉
.
b
(
b
)
.
0
|
c
(
a
)
.
b
(
c
)
.
0
|
,
c
(
a
)
.
b
(
c
)
.
0
|
,
b
〈
a
〉
.
c
(
c
)
.
0
|
a
〈
c
〉
.
c
(
c
)
+
c
(
b
)
.
0
|
,
b
〈
a
〉
.
c
(
c
)
.
0
|
c
(
c
)
+
c
(
b
)
.
0
|
,
c
(
c
)
.
0
|
c
(
c
)
+
c
(
b
)
.
0
|
,
c
〈
a
〉
.
a
〈
b
〉
.
0
|
c
〈
b
〉
+
c
(
a
)
.
a
(
b
)
.
0
|
,
a
〈
b
〉
.
0
|
a
(
b
)
.
0
|
,
b
〈
c
〉
.
a
(
c
)
.
0
|
a
(
c
)
.
c
〈
c
〉
.
0
|
,
a
(
c
)
.
0
|
a
(
c
)
.
c
〈
c
〉
.
0
|
,
b
〈
a
〉
.
c
〈
a
〉
.
0
|
a
(
c
)
.
a
(
a
)
.
0
|
,
c
〈
a
〉
.
0
|
a
(
c
)
.
a
(
a
)
.
0
|
,
a
(
c
)
.
a
(
a
)
.
0
|
,
c
〈
c
〉
.
b
〈
a
〉
.
0
|
c
(
b
)
.
a
〈
c
〉
.
0
|
,
b
〈
a
〉
.
0
|
a
〈
c
〉
.
0
|
,
a
〈
c
〉
.
0
|
,
c
〈
b
〉
.
a
(
a
)
+
b
〈
a
〉
.
0
|
b
(
b
)
.
c
〈
b
〉
.
0
|
,
a
(
a
)
+
b
〈
a
〉
.
0
|
b
(
b
)
.
c
〈
b
〉
.
0
|
,
c
〈
a
〉
.
0
|
,
c
〈
a
〉
.
b
(
c
)
+
a
(
a
)
.
0
|
a
(
a
)
.
b
〈
b
〉
.
0
|
,
b
(
c
)
+
a
(
a
)
.
0
|
a
(
a
)
.
b
〈
b
〉
.
0
|
,
a
〈
a
〉
.
b
〈
c
〉
.
0
|
a
(
a
)
+
b
〈
c
〉
.
a
(
b
)
+
b
(
b
)
.
0
|
,
a
〈
c
〉
.
c
(
a
)
.
0
|
b
(
c
)
.
c
(
a
)
.
0
|
,
c
(
a
)
.
0
|
b
(
c
)
.
c
(
a
)
.
0
|
OK, so we have confirmed that we can indeed generate random π-calculus programs, and ones that have multiple evaluation steps! Let's see what the graphs of 50 2x2 π-calculus programs look like.
Generate 50 2x2 π-calculus programs and graph their Multiway Systems
makeExperiment[2, 2, False, 50]
Out[]=
There's a lot to take in here! First of all, there is a much richer variety of structure than in our three test programs. Some programs have Multiway Graphs that diverge at one point but later converge, while others exhibit some more exotic loops. One interesting thing to note is that most graphs have many more intermediate states than end states, and that it's rare for (at least a 2x2) program to have more than three distinct end states. Here are some of my favorite Multiway Graphs for this class of programs:
Out[]=
Now we can do the same thing with 3x3 and 4x4 π-calculus programs!
makeExperiment[3, 3, False, 50]
Out[]=
makeExperiment[4, 4, False, 50]
Out[]=
Wow, some of those got very big very quickly – so big sometimes that their graphs can't even be rendered! One interesting phenomenon we might notice is that we often see really simple graphs right next to complex ones. This is because we are generating programs at random, and so sometimes the ones we generate will be completely useless, like an agent sending out information with no one to receive it. In general, though, we get some very sophisticated structures. Here are some of my favorites:
Out[]=
Experiments With Replication
We can also try to visualize 5 steps of non-terminating programs. These programs grow even more steeply, so we can only experiment with 2x2 and 3x3 programs.
Visualize the evolution of 10 2x2 programs (with replication)
In[]:= randomlyVisualizeEvolution /@ Table[randomlyGenerateProcesses[2, 2, True], {n, 3}]
Out[]= c〈a〉.a〈b〉.0 | c(b).a〈c〉.0 | , a〈b〉.0 | a〈c〉.0 | , a〈c〉.0 | ,
       !(c〈c〉.a(c).0) | !(b〈b〉.c〈c〉.0) | ,