
[WSS22] Framework for self-referential data structures: ConsList & ConsTree

Posted 2 years ago


POSTED BY: Athina Kyriazi
6 Replies

Preventing infinite loops, and ensuring that operations on self-referential structures are meaningful and computationally effective, lets us create an infinite series of powers of two, generate sequences based on rules, and construct infinite trees from a pattern or rule. What is the algorithmic complexity of operations on these structures, and is it in line with expectations for linked lists? There is a fine line to walk between expressive code and efficiency in runtime complexity. The practicality of operating on self-referential expressions, and the applications of the resulting data structures, are the foundation of this project. ConsList and ConsTree introduce the core question of whether we can effectively operate on self-referential expressions, and what the "benefits" of constructing such a framework are. We are going to create modified "versions" of the List and Tree structures that support operations on infinite, self-referential objects.

(* hold the tails so self-referential definitions stay unevaluated *)
SetAttributes[ConsList, HoldRest]
SetAttributes[ConsTree, HoldRest]
(* lazy stream: ID, head[ID, step], head[head[ID, step], step], ... *)
InfiniteSteps[head_, ID_, step_] := 
 ConsList[ID, InfiniteSteps[head, head @@ {ID, step}, step]]
Blockchain[initialBlock_] := NestConsList[BlockHash, initialBlock]
BlockHash[block_] := Hash[block, "SHA256"]
CoefficientOfRestitution[v_, e_] := -e*v
(* recurse on the rebound height h' = v'^2/(2 g) = e^2 h, where v' is the
   rebound velocity returned by CoefficientOfRestitution *)
SimulateBounce[height_, e_] := 
 ConsList[height, 
  If[height > 0.01, 
   SimulateBounce[
    CoefficientOfRestitution[Sqrt[2*9.81*height], e]^2/(2*9.81), e], 
   "Rest"]]
PrerequisiteTree[course_] := 
 ConsTree[course, PrerequisiteTree /@ Prerequisites[course]]
Prerequisites[course_] := 
 Switch[course, "Calculus", {"Algebra", "Geometry"}, "Algebra", {}, 
  "Geometry", {}, _, {"Algebra"}]
DisplayTreeLevels[tree_ConsTree, n_] := RootTree[tree, n]
steps = InfiniteSteps[Plus, 1, 1];
chain = Blockchain["Genesis Block"];
bounceSimulation = SimulateBounce[10, 0.9] // ExpressionTree
educationTree = PrerequisiteTree["Calculus"];
DisplayTreeLevels[educationTree, 3]
DisplayConsList[educationTree, {1, 2, 3}] // ExpressionTree

Framework 1-1

Framework 20

Framework 21

Ah, the framework for self-referential data structures from the University of Patras: ConsList and ConsTree. To get here we had to explore Lisp and algorithmic complexity. The motivations behind this project, the concept of lazy evaluation, the Lisp-like nature of the structures created, the collaborative spirit of the Wolfram method for exploring new ideas and refining computational methods..it all comes together here. That is how the complexity of operations on the new data structures matches up with the expected behavior of linked lists, and how the Fibonacci sequence illuminates the practicality of operating on self-referential expressions and the applications of graph-theory principles. The ConsList and ConsTree data structure initialization is well worth the wait.

(* first 30 Fibonacci numbers via the framework's InputSequence; the ratio
   of consecutive terms approximates the golden ratio *)
Take[InputSequence[#1 + #2 &, {# - 1, # - 2} &, {1, 1}], 30]
fib = %[[30]]/%[[29]]

(* phyllotaxis-style spiral driven by the golden-ratio approximation *)
Graphics[
 Point[Table[
   Sqrt[n] {Cos[0.1 Pi n fib], Sin[0.1 Pi n fib]}, {n, 10000}]]]

Which of these expressions are worth creating a framework for? It took me decades to appreciate the possibility of operating on self-referential expressions. Sure, x = {x, x} runs away, and the kernel prevents infinite evaluation by wrapping the expression in Hold. @Athina Kyriazi, as for the "usefulness" of self-referential expressions: are they meaningful if they can solve problems? ConsList and ConsTree are introduced as self-referential types precisely so that ConsList can act as something akin to an infinite list, while ConsTree resembles an infinite tree structure. What does it mean to construct and manipulate these data structures, along with new function definitions to operate on them?

InputSequence1

It's really these self-referential expressions that keep us organized. The challenge of interpreting expressions like x = {x, x}, which typically exceed the recursion (or iteration) limit, can be addressed by applying Hold to such expressions: evaluation is prevented, allowing for further manipulation.
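
A minimal illustration of that point (my own toy example, not from the framework): wrap the self-reference in Hold and it stays inert, yet remains available for rule-based manipulation.

ClearAll[x];
expr = Hold[x = {x, x}];  (* the definition is stored unevaluated *)
expr /. x -> y            (* Hold[y = {y, y}]: the held structure is still manipulable *)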

How do we overcome issues related to infinite recursive calls? We introduce the ConsList and ConsTree data structures, which are designed to be self-referential, and we apply the HoldRest attribute to these constructs to control their evaluation. We can then expand these data structures to a specified extent, creating a form of lazy evaluation. For instance, infiniteOnes can be defined as a ConsList that recursively refers to itself, simulating an infinite list of ones.
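
Here is a minimal sketch of that idea, assuming only the HoldRest attribute from above; ExpandConsList here is my own illustrative helper, not necessarily the framework's implementation.

SetAttributes[ConsList, HoldRest];
infiniteOnes = ConsList[1, infiniteOnes];  (* the tail is held, so no runaway evaluation *)
(* force the first n cells of a lazy ConsList into an ordinary List *)
ExpandConsList[list_ConsList, n_Integer] := 
 Module[{c = list}, Reap[Do[Sow[c[[1]]]; c = c[[2]], {n}]][[2, 1]]];
ExpandConsList[infiniteOnes, 5]  (* {1, 1, 1, 1, 1} *)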

InputSequence2

It's like: here's a text from the web, continue this sentence the way you would expect it to be continued on the web, and the neural net will successfully make that continuation. The big surprise with ChatGPT, for example, is that not only could you go a few words ahead, you could write an essay and have the whole thing stay coherent. The context window has gotten bigger, and with it the ability to remain coherent..yet it's still writing a word at a time while looking at the past. One could certainly imagine quantum LLMs that look at many paths and preemptively pick among them, deciding where the LLM is going rather than just continuing from where it is so far. Getting that kind of feedback is a harder computation than "it roughly looks right based on the statistics of the neural net." You make them bigger and stronger; I know I've talked about this a lot, but there are these thresholds, kind of like 2012, when recognition of 5000 kinds of objects got solved by a deep-learning idea. Properly modifying the standard operations so that they work on these new types is what lets us display the timeless principles of graph theory.

ConsListResized

Stable Diffusion sort of solved the question of how you can do AI-generated images. There's a basic threshold you get past and things start to work; then there's the question of whether you can tighten things up. Often these kinds of techniques in machine learning are hard to tighten up: traditional software-engineering tasks are easy to pin down, but in machine learning you instead train a bit here, do some reinforcement learning there, hoping a certain change produces a certain effect. The threshold is reached, the threshold is passed, you get a certain amount of functionality, and then it's gradual improvement over time. The initialization is not just syntactic sugar: it's what applying Hold to the {x, x} expression makes possible.

(* velocity after one bounce: flipped and scaled by the coefficient of restitution *)
Bounce[velocity_, 
  coefficientOfRestitution_] := -coefficientOfRestitution*velocity
(* accumulate the successive post-bounce velocities with NestList *)
SimulateBounces[initialVelocity_, coefficientOfRestitution_, 
  bounces_] := 
 NestList[Bounce[#, coefficientOfRestitution] &, initialVelocity, 
  bounces]
bounces = SimulateBounces[10, 0.9, 10]

Framework 23

Now you have what you have; the question is, what kinds of thresholds? Understanding machine learning..I know what happens in the world, things that happen mechanically, things that happen to people, and there are different questions about what one gets to when one can understand language in certain kinds of ways. The very idea of language: language is a way of packaging thoughts so that they are communicable to another mind, so to speak. The big idea of language is compositionality. Rather than just saying "it's a rock," "it's a chair," with little nouns like that, you can put together a sentence from different pieces and say "a red chair was next to a blue table." You don't have to have a separate word for "blue table" or a separate word for that concept; you can compose the two pieces together, and that's the key idea of language. It is anchored by the core words that define concepts, of which roughly 30,000 get used for all the things we might say. That idea of packaging thoughts by composable elements anchored to concepts represented by words is the core idea of language.

(* memoized Fibonacci: fib[x] caches each value as it is computed *)
Clear[myFibonacci];
myFibonacci[n_] := Module[{fib}, fib[1] = 1; fib[2] = 1;
  fib[x_] := fib[x] = fib[x - 1] + fib[x - 2];
  Array[fib, n]]
fibSeq = myFibonacci[10]
(* build two ConsLists by folding, then combine them with the framework's DotProduct *)
list1 = Fold[ConsList, 0, Range[10]];
list2 = Fold[ConsList, 0, Reverse[Range[10]]];
dotProductResult = DotProduct[list1, list2];
DisplayConsList[dotProductResult] // ExpressionTree

Framework 22

Framework 26

What is the suite of functions for expanding, mapping, partitioning, and selecting elements within these self-referential structures? We can expand, map, partition, and select elements within them; ExpandConsList and DisplayConsList are introduced to visualize the growth of these structures up to a certain point. You've got to appreciate the framework's ability to handle the Fibonacci sequence..one can recognize the potential for these structures to simplify complex data-manipulation tasks significantly. What is the algorithmic complexity and efficiency of the operations defined within the framework? While linked lists typically offer certain computational advantages (like constant time for prepending elements), it's crucial to ensure that these benefits are preserved in the newly introduced data structures.
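
As a sketch of how such operations can stay lazy, here is a hypothetical SelectConsList (my name, not necessarily the framework's) that filters an infinite ConsList without forcing it, reusing InfiniteSteps and the illustrative ExpandConsList from above; it assumes the predicate matches infinitely often.

SelectConsList[ConsList[id_, rest_], pred_] := 
 If[pred[id], ConsList[id, SelectConsList[rest, pred]], 
  SelectConsList[rest, pred]];
evens = SelectConsList[InfiniteSteps[Plus, 1, 1], EvenQ];
ExpandConsList[evens, 4]  (* {2, 4, 6, 8} *)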

(* each node stores a value and a pointer to the next node *)
BlockchainNode[value_, next_] := <|"Value" -> value, "Next" -> next|>
(* fold from the end so each element becomes a node pointing at the chain so far *)
CreateBlockchain[data_List] := 
 Fold[BlockchainNode[#2, #1] &, Last[data], Reverse[Most[data]]]
CoefficientOfRestitution[m1_, v1_, m2_, v2_] := 
 Abs[(v2 - v1)/(PreCollisionVelocity[m1, m2, v1, v2] - 
     PreCollisionVelocity[m1, m2, v2, v1])]
PreCollisionVelocity[m1_, m2_, v1_, v2_] := (m1 v1 + m2 v2)/(m1 + m2)
SquareNumbers[n_] := Array[#^2 &, n]
TriangularNumbers[n_] := Array[(#*(# + 1))/2 &, n]
FibonacciUnprotected[n_] := Module[{fib}, fib[1] = 1; fib[2] = 1;
  fib[x_] := fib[x] = fib[x - 1] + fib[x - 2];
  Array[fib, n]]
MapConsTree[func_, tree_ConsTree] := TreeMap[func, tree]
(* recursive case via an upvalue on ConsList; the recursion bottoms out
   once a tail is no longer a ConsList *)
ConsList /: 
 DotProduct[ConsList[ID1_, next1_], ConsList[ID2_, next2_]] := 
 ID1*ID2 + DotProduct[next1, next2]
DotProduct[ConsList[ID1_, _], ConsList[ID2_, _]] := ID1*ID2
blockchain = CreateBlockchain[Range[5]]
e = CoefficientOfRestitution[5, 2, 3, -1]
squares = SquareNumbers[10]
triangles = TriangularNumbers[10]
fibSeq = FibonacciUnprotected[10]

Framework 27

While the current implementation may not be the most efficient in terms of runtime complexity, it lays the groundwork for future optimizations. Ideally, linked lists should operate in linear time for traversal operations. Our data structures currently operate in worse than linear time because expansion recomputes values; caching computed values would avoid those redundant calculations. The fundamental questions we have about data structures, efficiency, and programming..there's a well-known misrepresentation of the Sieve of Eratosthenes in the context of lazy functional programming. It's the classic algorithm for finding all prime numbers up to a certain limit, and it has often been used to demonstrate how elegant lazy evaluation looks in functional programming languages such as Haskell.
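
A sketch of the caching idea, in the same spirit as the memoized fib[x_] := fib[x] = ... pattern used elsewhere in this thread; expandCached is a hypothetical helper that remembers every prefix it has already forced (it reuses infiniteOnes from the sketch above).

expandCached[c_ConsList, 1] := {c[[1]]};
expandCached[c_ConsList, n_Integer] := expandCached[c, n] = 
   Prepend[expandCached[c[[2]], n - 1], c[[1]]];
expandCached[infiniteOnes, 6]  (* computed once... *)
expandCached[infiniteOnes, 6]  (* ...so this second call is a pure cache hit *)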

With[{infiniteOdds = ConsListRange[1, 2],
  infinitePowersOfTwo = InfiniteSteps[Times, 1, 2],
  firstTenIntegers = Take[ConsListRange[], 10],
  mappedIntegers = Map[#^2 &, ConsListRange[]],
  colors = {
    RGBColor[0, 0.4, 0.8],
    RGBColor[0.8, 0.2, 0.2],
    RGBColor[0.2, 0.6, 0.4],
    RGBColor[0.6, 0.4, 0.2]}},
 Show[
  MapThread[
   ListPlot[
     Take[#1, 10],
     Filling -> Axis,
     PlotTheme -> "Detailed",
     PlotStyle -> #2] &,
   {{firstTenIntegers,
     infiniteOdds,
     mappedIntegers,
     infinitePowersOfTwo},
    colors}
   ], PlotRange -> All]]
tree = NestConsTree[Function[x, {2*x, 3*x}], 4] 
displayedTree = RootTree[tree, 3] 

PlotTheme1

PlotTheme2

PlotTheme3

The "simplistic and elegant appearance of the code" belies its inefficiency, in the sense that the Sieve of Eratosthenes is, in fact, not the sieve but a form of the trial division algorithm, which "should only", show that for the traditional Sieve of Eratosthenes, for a given prime number p, multiples of p are marked as non-prime starting from p^2, since smaller multiples would have been marked by smaller primes. In contrast, the Haskell code often presented as the sieve is unfaithful to this method. The reason is that the Haskell code, what it actually does is it generates candidates for primes and filters out non-primes by checking divisibility against all previously found primes and that, is computationally more expensive. And that is how we got here @Athina Kyriazi, the kernel of Mathematica exists so that we can open another window - sure Mathematica isn't perfect, whatever that means but, it's..something else. Do you want O'Neill to provide a detailed analysis of Mathematica? It's full of convenience features.

With[{Sieve = Function[n,
    Module[{primes = Range[2, n], i = 1, p},
     While[(p = primes[[i]]) <= Sqrt[n],
      primes = DeleteCases[primes, x_ /; x != p && Mod[x, p] == 0];
      i++];
     primes]
    ],
  fibonacciNumbers = 
   Take[InputSequence[#1 + #2 &, {# - 1, # - 2} &, {1, 1}], 13]
  },
 Module[{primesUpTo200 = Sieve[200],
   primeFibonacciNumbers = Select[fibonacciNumbers, PrimeQ],
   min,
   max},
  min = Min[primesUpTo200];
  max = Max[primesUpTo200];
  ListPlot[primesUpTo200,
   PlotStyle -> Directive[PointSize[Medium], Black],
   PlotRange -> {{1, Length[primesUpTo200]}, {min, max + 70}},
   Frame -> True,
   FrameLabel -> {"Index", "Value"},
   FrameStyle -> GrayLevel[0.1],
   GridLines -> {None, primeFibonacciNumbers},
   GridLinesStyle -> 
    Directive[Thin, Dashed, RGBColor[0.4, 0.6, 0.8]],
   PlotLegends -> Placed[{"Prime Numbers Up to 200"}, {1, 0.8}],
   ImageSize -> 600,
   PlotTheme -> "Detailed"]
  ]]

Prime Numbers Up To 200

Will there be some form of summarization of the world different from what we imagine? There's iconography, there are user interfaces..over the last few decades those have become another level of communication to humans. There are certain things a user interface can communicate, and it's a little different; a picture is worth a thousand words, although pictures don't have the same kind of compositionality that language seems to have. As we start to poke at communication, we get a sense of where one might see ways of packaging ideas and thoughts that are very different from what we humans come up with. Will what such a thing comes up with be something humans can learn?

(* look a key up in a ConsList of key/value cells, recursing into the tail *)
ConsList /: KeyValue[ConsList[key_, value_], keys_List] := 
 If[MemberQ[keys, key], {key -> value}, KeyValue[value, keys]]
ledger = 
  Fold[ConsList, {}, {{"Alice", 50}, {"Bob", -50}, {"Alice", 
     25}, {"Bob", -25}}];
DisplayConsList[ledger] // ExpressionTree
KeyValue[ledger, {"Alice", "Bob"}]
(* geometrically decaying bounce heights, until they drop below the limit *)
bounce[height_, coeff_, limit_] := 
 NestWhileList[#*coeff &, height, # > limit &]
bounceHeights = bounce[10, 0.7, 0.1];
ListPlot[bounceHeights, Joined -> True, PlotMarkers -> Automatic, 
 AxesLabel -> {"Bounce", "Height"}]
(* walk the requirement tree until the wanted degree is found *)
ConsTree /: Achieve[ConsTree[degree_, reqs_], degreeWanted_] := 
 If[degree === degreeWanted, degree, 
  Achieve[#, degreeWanted] & /@ reqs]
educationTree = 
 ConsTree["PhD", {ConsTree["Masters", {ConsTree["Bachelor", {}]}]}]
Achieve[educationTree, "Masters"]

Framework 27

Because it's really about O'Neill, you know: the unfaithful sieve versus the genuine sieve. The unfaithful sieve has a time complexity of O(n^2 / (log n)^2), which is significantly worse than the genuine sieve's O(n log log n). The author provides an implementation of the genuine sieve in Haskell that uses appropriate data structures to efficiently cross off multiples of primes, akin to the original sieve's process. And that should start to explain how we can reset the value of a variable without restarting the evaluation kernel.
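
To make the contrast concrete, here is a rough Wolfram Language transliteration of the two algorithms (O'Neill's originals are in Haskell; these helper names are mine): trial division tests each candidate against all previously found primes, while the genuine sieve crosses off multiples of each prime p starting from p^2.

trialDivisionPrimes[n_] := Module[{ps = {}},
  Do[If[NoneTrue[ps, Divisible[k, #] &], AppendTo[ps, k]], {k, 2, n}];
  ps];
genuineSieve[n_] := Module[{mark = ConstantArray[True, n]},
  mark[[1]] = False;
  Do[If[mark[[p]], Do[mark[[m]] = False, {m, p^2, n, p}]], {p, 2, Floor@Sqrt[n]}];
  Flatten@Position[mark, True]];
genuineSieve[50] === trialDivisionPrimes[50]  (* True: same primes, very different work *)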

n = ConsListRange[];
binaryTree = NestConsTree[Function[x, {2*x, 3*x}], 4] ;
mapAt = MapAt[Function[x, 100], n, 2];
mapped = Map[Function[x, x^2], Take[n, 10]];
treeMapped = TreeMap[Function[x, 2*x], binaryTree, {0, 1, 2, 3}];
selected = Select[Take[n, 20], Function[x, EvenQ[x]]];
partitioned = Partition[Take[n, 10], 3];
oddNumbers = ConsListRange[1, 2];
evenNumbers = ConsListRange[2, 2];
riffled = Riffle[Take[oddNumbers, 5], Take[evenNumbers, 5]];
foldedList = FoldList[Plus, Take[n, 10]];
firstPosition = FirstPosition[n, 5];
Grid[{
  {"MapAt", "Mapped", "TreeMap"},
  {mapAt, mapped, treeMapped},
  {"Select Even", "Partition", "Odd Numbers"},
  {selected, partitioned, oddNumbers},
  {"Even Numbers", "Riffle", "FoldList"},
  {evenNumbers, riffled, foldedList},
  {"First Position", "Tree"},
  {firstPosition, tree}
  }, Frame -> All]
expressionTree = 
 DisplayConsList[binaryTree, {0, 1, 2, 3}] // ExpressionTree

MapAt

Performance in practice shows that while the unfaithful sieve is more straightforward to implement using lists in Haskell, it performs poorly in comparison to the genuine sieve, especially as the numbers involved grow larger. That's why choosing the right data structure for the task at hand matters so much: it can make a significant difference in performance. This 2D infinity is so dreamy, I don't know how else to put it. O'Neill goes further and introduces an improved implementation of the genuine sieve using a priority queue (a heap) instead of a list or tree to store the next composite numbers, which increases efficiency. She also discusses a "wheel" optimization to skip multiples of small primes like 2, 3, 5, and 7, thereby reducing the number of candidates to be tested for primality and improving performance even more.
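
Here is a sketch of that heap-based approach, using the built-in "PriorityQueue" data structure (available via CreateDataStructure in recent versions; it pops the largest element, so composites are stored negated to simulate a min-heap). This is my reading of O'Neill's idea, not her code.

oneillSieve[n_Integer] := 
 Module[{q = CreateDataStructure["PriorityQueue"], primes = {}, k = 2, c, p},
  While[k <= n,
   If[q["EmptyQ"] || -q["Peek"][[1]] > k,
    AppendTo[primes, k];   (* nothing scheduled at k: k is prime *)
    q["Push", {-k^2, k}],  (* its first crossing-off happens at k^2 *)
    While[! q["EmptyQ"] && -q["Peek"][[1]] == k,
     {c, p} = q["Pop"];
     q["Push", {c - p, p}]]  (* advance this prime's iterator to its next multiple *)
    ];
   k++];
  primes];
oneillSieve[30]  (* {2, 3, 5, 7, 11, 13, 17, 19, 23, 29} *)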

DisplayConsListMap

Furthermore, in Mathematica there are all kinds of keyboard shortcuts for working with the kernel. The paper itself clarifies a long-standing confusion in the functional programming community, demonstrating that with careful consideration of algorithmic design and data structures, one can implement the Sieve of Eratosthenes efficiently in a lazy functional style. The work encourages functional programmers to explore beyond lists and to utilize more complex data structures when they are better suited to the problem at hand.

(* assumes primes, fib, nInternal, and f are already defined in the session,
   e.g. primes from the Sieve above and fib from the InputSequence example *)
nExternal = 10;
primeNumbers = Map[Log[#] &,
   Take[primes, 15]];
fibonacciNumbers = Map[Log[#] &,
   Take[fib, 30]];
nestedSequences = Map[Log[#] &,
   Take[Map[Take[#, nInternal] &,
     Map[NestConsList[f, #] &, ConsListRange[1, 1]]],
    nExternal],
   {2}];
squaredSequences = Map[Log[#] &,
   Take[Map[Take[#, nInternal] &,
     Map[NestConsList[#^2 &, #] &,
      ConsListRange[1, 1]]],
    nExternal],
   {2}];
Show[MapThread[
  ListPlot[#1,
    PlotMarkers -> Automatic,
    Frame -> True,
    Joined -> True,
    GridLines -> Automatic,
    GridLinesStyle -> Directive[Gray, Dashed],
    ImageSize -> Large,
    PlotStyle -> #2,
    PlotLegends -> {#3},
    PlotRange -> {Automatic,
      {Min[squaredSequences],
       Max[squaredSequences]}}
    ] &,
  {{primeNumbers,
    fibonacciNumbers,
    nestedSequences,
    squaredSequences},
   {RGBColor[0.6, 0.3, 0.5],
    RGBColor[0.2, 0.55, 0.4],
    RGBColor[0.9, 0.5, 0.5],
    RGBColor[0.5, 0.7, 0.9]},
   {"Prime Numbers",
    "Fibonacci Numbers",
    "Nested Sequences",
    "Squared Sequences"}}]]

Prime Numbers, Fibonacci Numbers, Nested Sequences, Squared Sequences

@Athina Kyriazi For the better: the potential these prime numbers have to transform the world, at a scale that..can I imagine it? No. It's not even about how a foliation has to be with respect to time, or whether it can be some arbitrary other spatial parameter. This framework is for the linked-list data structure: the practice of critically evaluating the performance and faithfulness of algorithms, especially when teaching them or employing them as exemplars of programming paradigms, demonstrates the relevance of algorithmic design and the thoughtful use of data structures in functional programming, where lists and one-liners are often (mis)presented as the de facto tools for all tasks.

(* lazy hash chain: the held tail re-hashes the current data and defers the rest *)
BlockChain[head_, data_, step_] := 
 ConsList[data, BlockChain[head, Hash[head @@ {data, step}], step]]
simpleChain = BlockChain[Plus, "GenesisBlock", 0];
Print[ExpandConsList[simpleChain, 5] // ExpressionTree]
CollisionModel[m1_, m2_, v1i_, v2i_, e_] := 
 Module[{v1f, v2f}, 
  v1f = (e*m2*(v2i - v1i) + m1*v1i + m2*v2i)/(m1 + m2);
  v2f = (e*m1*(v1i - v2i) + m1*v1i + m2*v2i)/(m1 + m2);
  {v1f, v2f}]
EducationTree[course_] := 
 ConsTree[course, EducationTree /@ PrerequisiteCourses[course]]
PrerequisiteCourses[course_] := 
 Switch[course, "Calculus", {"Algebra", "Geometry"}, 
  "Algorithms", {"Data Structures", "Discrete Math"}, 
  "Machine Learning", {"Statistics", "Algorithms"}, _, {}]
mlPrerequisites = EducationTree["Machine Learning"];
DisplayConsList[mlPrerequisites, {0, 1, 2}] // ExpressionTree

Framework 30

Framework 21

And that's why it's so important to choose the right data structures and architecture, and to look beyond surface-level elegance and simplicity. The concept of a "wheel" lets us skip over numbers that are multiples of the first few primes; by using this technique, the implementation reduces the number of candidates and computations required, thereby optimizing the sieve process. To rectify the misinformation, O'Neill provides an actual implementation of the Sieve of Eratosthenes in Haskell that utilizes more complex data structures, faithful to the original sieve and its method of eliminating non-prime numbers, with improved efficiency.
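
The wheel itself is simple to sketch: after 7, the numbers coprime to 2*3*5 = 30 recur with a fixed cycle of gaps, so only these candidates ever need sieving. The helper name is hypothetical.

wheel235[limit_] := 
 Module[{gaps = {4, 2, 4, 2, 4, 6, 2, 6}, k = 7, i = 1, out = {}},
  While[k <= limit, AppendTo[out, k]; k += gaps[[i]]; i = Mod[i, 8] + 1];
  out];
wheel235[50]  (* {7, 11, 13, ..., 47, 49}: of these, only 49 = 7^2 still needs crossing off *)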

(* prepend a new block to the ledger *)
AppendBlock[ledger_ConsList, data_] := ConsList[data, ledger];
Blockchain := 
  Nest[AppendBlock[#, RandomInteger[100]] &, ConsList[GenesisBlock], 
   10];
(* final velocities of a two-body collision with restitution e *)
Collision[k1_, m1_, u1_, k2_, m2_, u2_, e_] := 
  Module[{v1, v2}, v1 = (e*m2*(u2 - u1) + m1*u1 + m2*u2)/(m1 + m2);
   v2 = (e*m1*(u1 - u2) + m1*u1 + m2*u2)/(m1 + m2);
   {k1 -> v1, k2 -> v2}];
(* append an achievement to the student's achievement list (the last element) *)
RecordAchievements[student_ConsList, achievement_] := 
  MapAt[Append[#, achievement] &, student, -1];
Framework[func_, initialData_, steps_] := 
  Nest[func, initialData, steps];
ledger = Blockchain;
studentAchievements = 
  ConsList["Student1", {"MathAward", "ScienceFairWinner"}];
updatedStudentAchievements = 
  RecordAchievements[studentAchievements, 
   "ProgrammingContestWinner"];
collisions = Collision["Ball1", 1, 0, "Ball2", 1, 10, 0.9];
ledger // ExpressionTree
updatedStudentAchievements // ExpressionTree
collisions // ExpressionTree

Framework 3

Framework 4

Framework 5

What is the "true" time complexity of both the genuine and unfaithful sieve? The genuine Sieve of Eratosthenes exhibits a linearithmic time "complexity" of O(n log log n), while the unfaithful sieve exhibits a quadratic time complexity relative to the logarithm of the number, O(n^2 / (log n)^2). The practical performance results we should do reproduce (in both theory and practice) the superiority of the genuine sieve. The Haskell code is inefficient, and that's why it's inefficient, because it's inefficient because it performs unnecessary divisibility checks. These divisibility checks are unnecessary, because it tests every number against all previously identified primes in all possibilities rather than strategically eliminating multiples from a list, which results in significantly slower execution on a scale of time complexity, especially only and unless we use larger lists of numbers.

SetAttributes[ConsList, HoldAllComplete];
SetAttributes[ConsTree, HoldAllComplete];
BlockChain[value_, nextBlock_] := ConsList[value, nextBlock];
(* note: the transaction itself is not stored in the block, only folded into
   the hash, so the verification below cannot re-derive the hash exactly *)
CreateBlock[previousBlock_, transaction_] := 
  BlockChain[Hash[{previousBlock, transaction}], previousBlock];
GenerateBlockchain[n_Integer] := 
  Nest[CreateBlock[#, RandomChoice[{"Tx1", "Tx2", "Tx3"}]] &, 
   BlockChain[Hash["Genesis"], None], n];
blockchain = GenerateBlockchain[10];
VerifyBlockchain[BlockChain[_, None]] := True;
VerifyBlockchain[BlockChain[hash_, prevBlock_]] := 
  hash === Hash[{prevBlock, First@prevBlock}] && 
   VerifyBlockchain[prevBlock];
verified = VerifyBlockchain[blockchain]
(* use plain symbols v1f, v2f for the unknowns; a prime such as v1' would be
   parsed as a derivative *)
CoefficientOfRestitution[m1_, v1_, m2_, v2_, e_] := 
  Solve[m1 v1 + m2 v2 == m1 v1f + m2 v2f && 
    e == (v2f - v1f)/(v1 - v2), {v1f, v2f}];
EducationTree[subject_, topics_] := ConsTree[subject, topics];
mathTopics = {"Algebra", "Calculus", "Geometry", "Statistics"};
mathTree = 
  EducationTree["Mathematics", EducationTree[#, {}] & /@ mathTopics];
DisplayEducationTree[ConsTree[subject_, topics_]] := 
  TreeForm[Tree[subject, TreeForm /@ topics]];
mathTreeDisplayed = DisplayEducationTree[mathTree]

Framework 6

Framework 7

And I guess that's the computational wheel we'll find buried in the paper: the numbers that are multiples of the first few primes, we can skip over with the wheel. By using this technique, the implementation reduces the number of candidates and computations required, thereby optimizing the sieve process. A widely circulated Haskell "one-liner" generates prime numbers, but it is not the genuine Sieve of Eratosthenes; it is in fact an unfaithful representation that corresponds more closely to trial division. The beauty and simplicity are dismantled, in the sense that the code is elegant, just not faithful to the original algorithm devised by Eratosthenes. That's how we got all those quantum tutorials; it really made me smile.
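
For reference, the famous one-liner is sieve (p : xs) = p : sieve [x | x <- xs, x `mod` p /= 0]; here is a finite-range Wolfram Language transliteration (my own, purely illustrative) that makes its trial-division character explicit.

unfaithful[{}] := {};
unfaithful[{p_, rest___}] := 
 Prepend[unfaithful[Select[{rest}, Mod[#, p] != 0 &]], p];
unfaithful[Range[2, 50]]  (* {2, 3, 5, ..., 47}: primes by filtering, not by crossing off *)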

(* nest hash-linked cells over the data, one layer per block *)
BlockChain[data_] := 
 Nest[ConsList[Hash[#], #] &, ConsList[Hash[data], data], 
  Length[data]]
data = {"Block 1", "Block 2", "Block 3"};
chain = BlockChain[data] // ExpressionTree

Framework 8

The commonly used Haskell code may very well look like an amazing gemstone, but it performs divisibility checks, testing every number against the primes previously identified. There's some kind of scaling here too: the actual neurons in the physical brain, the number of neurons versus the number of distinct words that can be dealt with. How many words can an ant deal with? It has 50,000 neurons; the ant can perhaps communicate. The question becomes: if we start to have neural nets with trillions of neurons, and we take what we see from the world, this collection of different images corresponding to a concept, how many attractors can we fit? If our brain has a certain number of neurons and we want to pack in more and more concepts, then the attractors (the pattern of brain activity we recognize as the concept of cat, the pattern we recognize as the concept of dog) will start to overlap, and you'll no longer be able to distinguish between those things.

BlockConsList[data_, nextBlock_ : None] := ConsList[data, nextBlock]
AddBlock[blockchain_, data_, hashFunction_] := 
 Module[{newBlock, lastBlockHash}, 
  lastBlockHash = hashFunction[Last[blockchain]];
  newBlock = BlockConsList[{data, lastBlockHash}];
  Append[blockchain, newBlock]]
EducationalTree[level_, subject_, next_ : None] := 
 ConsTree[{level, subject}, next]
BuildCurriculum[subject_, levels_] := 
 Fold[EducationalTree[#2, subject, #1] &, None, Reverse@levels]
(* 1-D collision with restitution: final velocities from momentum conservation
   and the restitution relation, as in CollisionModel above *)
CalculateCollisionOutcome[mass1_, velocity1_, mass2_, velocity2_, 
  restitutionCoefficient_] := 
 Module[{p = mass1*velocity1 + mass2*velocity2},
  {(p + mass2*restitutionCoefficient*(velocity2 - velocity1))/(mass1 + mass2),
   (p + mass1*restitutionCoefficient*(velocity1 - velocity2))/(mass1 + mass2)}]
DisplayStructure[structure_] := 
 TreeForm[structure /. ConsList | ConsTree -> List]
blockchain = BlockConsList["Genesis Block"];
blockchain = AddBlock[blockchain, "Block 1", Hash];
blockchain = AddBlock[blockchain, "Block 2", Hash];
curriculumTree = 
  BuildCurriculum[
   "Mathematics", {"Undergraduate", "Graduate", "Postgraduate"}];
collisionOutcome = CalculateCollisionOutcome[2, 3, 5, -1, 0.8];
DisplayStructure[blockchain]
DisplayStructure[curriculumTree]
collisionOutcome

Blockchain 9

Undergraduate 10

Framework 11

I think that your framework for handling self-referential data structures shows how we perform operations on them, and those operations illustrate how we create and manipulate infinite data structures in a functional programming style. The arguments of the ConsList and ConsTree symbols are held unevaluated..exactly the arguments that matter for creating self-referential structures..so we prevent infinite recursion by holding the rest. We have a clear recipe: we can define an infinite list, or stream, that starts with init and then successively applies head to generate the next element from the previous element and step. We can also generate a self-referential structure from a seed list and a function func that determines how each new element is computed from previous ones; the recursive structure is built using Fold and ReplaceAll (the /. operator). And then we can create an infinite list.

SetAttributes[ConsList, HoldRest];
SetAttributes[ConsTree, HoldRest];
InfiniteSteps[head_, init_, step_] := 
  ConsList[init, InfiniteSteps[head, head[init, step], step]];
(* schematic: build a ConsList chain from the seed, then replace the trailing
   Null placeholder with a generator for subsequent elements *)
GenerateSequence[func_, seed_List] := 
  Fold[ConsList[#2, #] &, ConsList[Last[seed], Null], 
    Reverse[Most[seed]]] /.
   Null -> (ConsList[func @@ Take[%, Length[seed]], #0] &);
fibonacciSeq = GenerateSequence[Plus, {0, 1}];
MapConsList[f_, ConsList[head_, tail_]] := 
  ConsList[f[head], MapConsList[f, tail]];
NestConsTree[f_, init_, depth_] := 
  ConsTree[init, 
   If[depth > 1, NestConsTree[f, #, depth - 1] & /@ f[init], {}]];
(* f must return the list of children, as in the earlier usage *)
binaryTree = NestConsTree[Function[x, {2 x, 3 x}], 1, 3];
MapConsTreeLevel[f_, tree_ConsTree, level_Integer] := 
  Map[f, TreeLevel[tree, level], {level}];
AppendTransaction[ledger_ConsList, transaction_] := 
  ConsList[transaction, ledger];
Bounce[height_, restitution_] := 
  NestWhileList[# restitution &, height, # > 0.01 &];
AddEducation[person_Association, degree_] := 
  Append[person, "Education" -> degree];
ledger = ConsList["Initial Balance"];
ledger = AppendTransaction[ledger, "Transaction 1"] // ExpressionTree
bounces = Bounce[10, 0.9];
person = <|"Name" -> "Alice"|>;
person = AddEducation[person, "PhD in Physics"]

Framework 12

Framework 13

We can represent the Fibonacci sequence using the GenerateSequence function. If we didn't need laziness, we could just do iterative programming; but here MapConsList applies a given function f to each element of a ConsList, effectively mapping f over the (possibly infinite) list. The NestConsTree function generates a self-referential tree structure of a given depth, where f is a function that generates the children of each node in the tree.
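
A usage sketch for these definitions, reusing the illustrative ExpandConsList helper from earlier in this reply (an assumption, since the framework's own expander may differ):

nats = InfiniteSteps[Plus, 1, 1];        (* 1, 2, 3, ... lazily *)
squaredNats = MapConsList[#^2 &, nats];  (* nothing is computed yet *)
ExpandConsList[squaredNats, 5]           (* {1, 4, 9, 16, 25} *)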

SimpleHash[data_, prevHash_] := Mod[Hash[data] + prevHash, 10^10]
CreateGenesisBlock[data_] := 
 BlockChain[SimpleHash[data, 1], data, "Genesis", Null]
AddBlock[chain_BlockChain, data_] := 
 BlockChain[SimpleHash[data, chain["Hash"]], data, chain, 
  chain["Prev"]]
BlockChain /: BlockChain[hash_, _, _, _]["Hash"] := hash
BlockChain /: BlockChain[_, data_, _, _]["Data"] := data
BlockChain /: BlockChain[_, _, prev_, _]["Prev"] := prev
genesisBlock = CreateGenesisBlock["Linked List Emulation"]
block2 = AddBlock[genesisBlock, "Lazy Infinite Evaluation"]
block3 = 
 AddBlock[block2, 
  "Make a framework for self referential data structures"]
block4 = 
 AddBlock[block3, 
  "Check the algorithmic complexity of various operations"]
ValidateChain[BlockChain[hash_, data_, "Genesis", _]] := True
(* a block is valid if its hash re-derives from its own data plus the
   previous block's hash, all the way back to the genesis block *)
ValidateChain[block_BlockChain] := 
 block["Hash"] == 
   SimpleHash[block["Data"], block["Prev"]["Hash"]] && 
  ValidateChain[block["Prev"]]
ValidateChain[block4]

Framework 14

There are a bunch of words that a person might not know but which are useful for communicating medical concepts, and that somehow leverage language, going further than we could go with everyday words. What it will feel like to deal with a million-word AI, we don't know. That was my thought about what level we can understand a language: is there a form of communication more efficient than language, achievable by a thing that can keep more concepts in mind? It would be very hard to imagine, from the inside, the difference between having language and not having it.

BlockChain /: 
 BlockChain[hash_, data_, prev_, restitution_][
  "Restitution"] := restitution
(* three-argument AddBlock used below: it stores the restitution in the
   fourth slot, which the "Restitution" accessor reads *)
AddBlock[chain_BlockChain, data_, restitution_] := 
 BlockChain[SimpleHash[data, chain["Hash"]], data, chain, restitution]
ComputeRestitution[block1_BlockChain, block2_BlockChain] := 
 RandomReal[{0, 1}]
block3 = 
 AddBlock[block2, 
  "Made a framework for self referential data structures", 
  ComputeRestitution[block2, block3]]
block4 = 
 AddBlock[block3, 
  "Checked the algorithmic complexity of various operations", 
  ComputeRestitution[block3, block4]]
block3["Restitution"]
block4["Restitution"]

Framework 15

Our computational language, Wolfram Language, which once didn't exist and now does, helps one think and compose thoughts in a different way. From infinite data structures to what it might feel like to deal with an AI that knows a million different words, it opens up possibilities for working with complex recursive structures, and it emphasizes the balance between elegant code and practical runtime complexity, because what we're doing is a lot of compression: representing these infinite lists and trees in a manner that is computationally feasible. That's really a story of compression, a story of taking every pixel arrangement and just saying: it's a camera!

Transaction[id_Integer, from_String, to_String, amount_Real] := <|
   "ID" -> id, "From" -> from, "To" -> to, "Amount" -> amount|>;
CreateLedger[transactions_List] := 
  Fold[ConsList, "Genesis", transactions];
ledger = 
 CreateLedger[
   Transaction[#, "Alice", "Bob", 100.0*#] & /@ Range[10]] // 
  ExpressionTree
VerifyLedgerIntegrity[ConsList["Genesis", _]] := True
VerifyLedgerIntegrity[ConsList[Transaction[id_, _, _, _], next_]] := 
 id == 1 || (id - 1 == next[[1]]["ID"] && VerifyLedgerIntegrity[next])

Framework 16

Something where you've concentrated that into a thing whose coherence can be demonstrated, like traversing or joining lists in constant or linear time, not quadratic or worse. The thing to understand about words, the precision of meaning: the whole point is the idea of compression. I could say "it's a camera" and manipulate these structures with extended list-framework functions and new framework function definitions. Data structures with infinite recursive definitions can lead to errors from exceeding recursion limits, and it's striking to see how self-referential expressions can be operated upon meaningfully within computational frameworks. Things can happen every nanosecond. These structures are inspired by the functional programming concept of lazy evaluation, akin to constructs in Lisp.

POSTED BY: Dean Gladish

You have earned the Featured Contributor Badge! Your exceptional post has been selected for our editorial column Staff Picks http://wolfr.am/StaffPicks and your profile is now distinguished by a Featured Contributor Badge, displayed on the Featured Contributor Board. Thank you!

POSTED BY: EDITORIAL BOARD

That clarifies the project goals and means.

As for complexity, my interest is far more run time than source code length. The idea is this. Say you have a (linked) list of length n, for n large. Consider the task: Traverse it, do some constant time operation on each element, perhaps replace the element with the result of that computation. You would want this to scale as O(n). If it scales say as O(n^2) then there is a problem.

Similarly, suppose you want to join ("Cons") a pair of such lists. This should be constant time; a good implementation will retain pointers to both first and last elements in each pair, and it is then constant time to hook them and adjust the last pointer of the initial list. This assumes you want these structures to be mutable, rather than to make new copies whenever there is a change. Which I believe is a plausible assumption for a linked list structure at least in the setting you describe. Also it assumes there is always an "end" element.
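
Just to make the constant-time join concrete, here is a minimal mutable emulation in Wolfram Language (all names hypothetical): each cell is a fresh symbol whose value and next pointer live in downvalues, and a list keeps handles to both its first and last cell, so joining is a single pointer rewrite.

newCell[v_] := Module[{c}, cellValue[c] = v; cellNext[c] = None; c];
makeList[vals_List] := Module[{cells = newCell /@ vals},
  MapThread[(cellNext[#1] = #2) &, {Most[cells], Rest[cells]}];
  <|"First" -> First[cells], "Last" -> Last[cells]|>];
joinLists[a_, b_] := (cellNext[a["Last"]] = b["First"];  (* the O(1) hook *)
  <|"First" -> a["First"], "Last" -> b["Last"]|>);
traverse[l_] := 
 cellValue /@ Most@NestWhileList[cellNext, l["First"], # =!= None &];  (* O(n) walk *)
traverse[joinLists[makeList[{1, 2, 3}], makeList[{4, 5}]]]  (* {1, 2, 3, 4, 5} *)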

In your case you might want to allow the situation where the last element points to the first (or any other in the list, even to itself). This is to emulate infinite lists for example. This goes a bit outside of lists/trees and into graph structures. This can be finessed with linked lists but the bookkeeping will be trickier. But this is somewhat outside the scope of my original questions.

POSTED BY: Daniel Lichtblau
Posted 2 years ago

Motivated by the first paragraph, I searched for more information in order to understand terms like constant-time operation and scales of algorithmic complexity. I also understood that giving a type the attribute "mutable" depends on how one implements the functions that operate on it (that's what I understood from the "join" example; I don't know if I understood correctly, and I also don't know if other factors contribute to making a type mutable). While implementing the functions I didn't know that the targeted algorithmic scale is O(n) for linked lists. I think that functions like take, select, and (map?) have that algorithmic scale.
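
One rough way to check that scaling empirically, reusing the illustrative InfiniteSteps and ExpandConsList sketches from the long reply above (so this inherits their assumptions): if expansion is O(n), doubling n should roughly double the timing.

nats = InfiniteSteps[Plus, 1, 1];
First[AbsoluteTiming[ExpandConsList[nats, #];]] & /@ {10^4, 2*10^4, 4*10^4}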

-finite set of ordered "Cons" elements, in which the last one points to another, earlier-placed one in the finite "Cons" chain (I believe this is what you explained in the last paragraph): this loop is something that isn't supported (to be created automatically by the above code; such a constructor function doesn't exist)

1st thought: Not correct ConsList[A, ConsList[B, ConsList[C, next]]]

2nd thought: what I want to achieve in C code

//list of integers
typedef struct node
{
    int d;
    struct node*next ;
} Node ;

typedef Node * Node_ptr;
typedef Node * LinkedList;

int main(){
    Node first;
    Node second;
    Node third;

    //creating an actual loop
    first.d = 1;
    first.next = &second;
    second.d = 2;
    second.next = &third;
    third.d = 3;
    third.next = &first;

    return 0;
}

3rd thought: Wolfram Code

(*shorter code*)
x=10 (*number of loops*)
Nest[ReplaceAll[next :> ConsList[A, ConsList[B, ConsList[C, next]]]], 
 ConsList[A, ConsList[B, ConsList[C, next]]], x]
(*new memory (I think) is always being captured by looping in order to save the extended symbolic expression*)
(*I had this code but I didn’t think of using it because I am using replacement and not actual pointers - without ReplaceAll and Nest the expression ConsList[A, ConsList[B, ConsList[C, next]]]  (same with thought 1) isn’t actually pointing to itself*)

-infinite set of ordered "Cons" where the next IDs depend on the previous ones: I don't know how to define the "end" element in order to implement such a "join" function. But I thought that maybe the code could be improved according to the following idea: when expanding an infinite "Cons" (finding the IDs up to a certain extent), the found IDs can be saved so that they won't have to be re-calculated in every operation up to this extent, and further extension would build on the previous one. The above implementation recalculates the values when operating on them. This has both advantages and disadvantages for sure.

Thank you for your detailed answer and sorry for the late response. It turns out that I needed a little bit of time in order to think about my answer. Maybe my answer ended up being too long.

POSTED BY: Athina Kyriazi

I gather the main goal is to simulate lazy infinite evaluation. But this does look rather Lisp-like, with a linked list emulation and related functionality. If that is the intent, have you checked that the algorithmic complexity of various operations is as expected for linked lists?

POSTED BY: Daniel Lichtblau
Posted 2 years ago

The initial project title was: Make a framework for self-referential data structures. In order to build on top of the existing Wolfram built-in framework, my mentors and I decided to turn lists and trees into infinite self-referential data structures, which we handled using the logic of lazy evaluation (expanding each defined type to the needed extent). Turning a list into an infinite self-referential data structure results in a type which is similar to linked lists. The prefix "Cons", which references Lisp, was chosen in order to emphasise the self-referential attribute of the extended versions of lists and trees. Regarding the algorithmic complexity, this was a first attempt to alter the existing built-in functions, and I don't doubt that this can be done in more efficient ways regarding both the number of code lines and the execution time (most possibly yes). I don't know if I answered your question. I hope I did.

POSTED BY: Athina Kyriazi