So far we have built tensors from our WordUsage bitfields and directed graphs for our RegularLanguage Grammar structures. The difference between a neural network and our directed graphs is that neural network edges also carry a weight. Rather than training the weights in our Neural Network, we will hard code them: each edge leaving a node N gets a weight of 1/|E(N)|, where E(N) is the set of edges leaving N. This simple formula turns our graph into a fully functional Neural Network!
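As a rough sketch, hard coding the weights might look like the following, where `GrammarGraph` and `WeightedGraph` are hypothetical stand-ins for our real structures:

```rust
// A rough sketch of hard coding edge weights; `GrammarGraph` and
// `WeightedGraph` are hypothetical stand-ins, not the real types.
struct GrammarGraph {
    // edges[n] lists the nodes reachable from node n.
    edges: Vec<Vec<usize>>,
}

struct WeightedGraph {
    // weighted[n] lists (target, weight) pairs for node n.
    weighted: Vec<Vec<(usize, f32)>>,
}

fn hard_code_weights(graph: &GrammarGraph) -> WeightedGraph {
    let weighted = graph
        .edges
        .iter()
        .map(|outgoing| {
            // Every edge leaving node N gets the same weight: 1 / |E(N)|.
            // A node with no outgoing edges simply yields no weighted edges.
            let weight = 1.0 / outgoing.len() as f32;
            outgoing.iter().map(|&target| (target, weight)).collect()
        })
        .collect();
    WeightedGraph { weighted }
}
```

Since the weights of every node sum to 1, each node effectively splits its activation evenly among its successors.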
Before we get ahead of ourselves, we'll need some persistence for our data structures. For now we can store everything in human readable text files. To start, let us implement load and save functions for word usages. For the text output we finally get to let the compiler derive the code for us: Display itself cannot be derived, but the bitflags! macro provides a Debug output that is amenable to our uses.
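Here is a minimal sketch of what load and save could look like, assuming bitflags 2.x, whose `bitflags::parser` module provides `to_writer` and `from_str` for round-tripping flags as text; the flag variants and the tab-separated file format shown are placeholders:

```rust
use std::collections::HashMap;
use std::fs;

bitflags::bitflags! {
    // Placeholder flags; the real WordUsage bitfield has more variants.
    #[derive(Debug, Clone, Copy, PartialEq, Eq)]
    pub struct WordUsage: u32 {
        const NOUN = 1 << 0;
        const VERB = 1 << 1;
        const ADJECTIVE = 1 << 2;
    }
}

// Save one "word<TAB>FLAG | FLAG" line per entry.
fn save_usages(path: &str, usages: &HashMap<String, WordUsage>) -> std::io::Result<()> {
    let mut text = String::new();
    for (word, usage) in usages {
        let mut flags = String::new();
        bitflags::parser::to_writer(usage, &mut flags)
            .expect("writing to a String cannot fail");
        text.push_str(&format!("{word}\t{flags}\n"));
    }
    fs::write(path, text)
}

// Load the same format back, skipping lines that fail to parse.
fn load_usages(path: &str) -> std::io::Result<HashMap<String, WordUsage>> {
    let mut usages = HashMap::new();
    for line in fs::read_to_string(path)?.lines() {
        if let Some((word, flags)) = line.split_once('\t') {
            if let Ok(usage) = bitflags::parser::from_str::<WordUsage>(flags) {
                usages.insert(word.to_string(), usage);
            }
        }
    }
    Ok(usages)
}
```

The resulting file reads naturally, e.g. `run	NOUN | VERB`, so it can be inspected and edited by hand.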
To recognize a sentence in our language, a precursor to machine learning, we should define the result of a parse. For this step we will record a Parse Line for each Grammar Vertex that is currently in use. If no grammar rule produces a given Grammar Vertex, we mark it as inactive; otherwise we record the path of Word Usages that produced the active Grammar Vertex. We can also merge Parse Lines that reach the same Grammar Vertex. This keeps the size of our Parse Result from growing beyond O(n·s), where n is the number of Grammar Vertexes we have defined and s is the length of the parsed sentence.
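A sketch of one possible shape for this data follows; `ParseResult`, `ParseLine`, and the `record` method are hypothetical names, and `WordUsage` is aliased to a plain integer to keep the sketch self-contained:

```rust
// `WordUsage` stands in for the bitflags struct from the earlier sketch.
type WordUsage = u32;

#[derive(Clone)]
struct ParseLine {
    // Each path is the sequence of Word Usages that reached this vertex.
    paths: Vec<Vec<WordUsage>>,
}

// One slot per Grammar Vertex: None marks an inactive vertex, Some holds
// the merged Parse Line for an active one.
struct ParseResult {
    lines: Vec<Option<ParseLine>>,
}

impl ParseResult {
    fn new(vertex_count: usize) -> Self {
        ParseResult { lines: vec![None; vertex_count] }
    }

    // Merge any Parse Line that reaches an already-active Grammar Vertex,
    // so the result never holds more than one entry per vertex.
    fn record(&mut self, vertex: usize, line: ParseLine) {
        match &mut self.lines[vertex] {
            Some(existing) => existing.paths.extend(line.paths),
            slot => *slot = Some(line),
        }
    }
}
```

Because merging collapses everything that reaches a vertex into one slot, the result holds at most n active entries per sentence position, which is where the O(n·s) bound comes from.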