Directed Graphs, Tensors, and Neural Networks in Rust

Taking Inventory of what we already have

So far we have built tensors from our WordUsage bitfields, and directed graphs for our RegularLanguage Grammar structures. The difference between a neural network and our directed graphs is that neural network edges also carry a weight. Rather than training the weights of our Neural Network, we will hard code them: each edge E leaving a node N gets a weight of 1/|E(N)|, where |E(N)| is the number of edges leaving N. This simple formula turns our graph into a fully functional Neural Network!
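The weighting rule above can be sketched in a few lines. The adjacency-list `Graph` type here is a stand-in for illustration, not the article's actual grammar graph structure:

```rust
// A minimal sketch of the 1/|E(N)| weighting rule. The `Graph`
// adjacency-list type is a hypothetical stand-in.
struct Graph {
    // edges[n] holds the indices of nodes reachable from node n
    edges: Vec<Vec<usize>>,
}

impl Graph {
    // Derive a weight for every edge: each edge leaving node n
    // receives 1 / |E(n)|, so a node's outgoing weights sum to 1.
    fn edge_weights(&self) -> Vec<Vec<f64>> {
        self.edges
            .iter()
            .map(|outgoing| {
                let w = 1.0 / outgoing.len() as f64;
                outgoing.iter().map(|_| w).collect()
            })
            .collect()
    }
}

fn main() {
    // Node 0 has two outgoing edges, node 1 has one, node 2 has none.
    let g = Graph { edges: vec![vec![1, 2], vec![2], vec![]] };
    let w = g.edge_weights();
    assert_eq!(w[0], vec![0.5, 0.5]);
    assert_eq!(w[1], vec![1.0]);
    println!("{:?}", w);
}
```

Because a node with no outgoing edges produces an empty weight list, the division by zero case never materializes as a weight.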

First, before we get too far ahead of ourselves, we'll need some persistence for our data structures. For now we can store everything in human readable text files. To start, let us implement load and save functions for word usages. For output we finally get to lean on the compiler: the bitflags! macro provides a Debug implementation whose output is amenable to our uses, so our Display implementation only needs to clean it up.

impl WordUsage {
   pub fn raise(&mut self, flag: &str) {
      match flag.to_uppercase().as_str() {
         "PUNCTUATION" => *self |= WordUsage::PUNCTUATION,
         "NUMERAL" => *self |= WordUsage::NUMERAL,
         "ARTICLE" => *self |= WordUsage::ARTICLE,
         "DETERMINER" => *self |= WordUsage::DETERMINER,
         "NOUN" => *self |= WordUsage::NOUN,
         "VERB" => *self |= WordUsage::VERB,
         "ADJECTIVE" => *self |= WordUsage::ADJECTIVE,
         "ADVERB" => *self |= WordUsage::ADVERB,
         "PRONOUN" => *self |= WordUsage::PRONOUN,
         "PREPOSITION" => *self |= WordUsage::PREPOSITION,
         "CONJUNCTION" => *self |= WordUsage::CONJUNCTION,
         "INTERJECTION" => *self |= WordUsage::INTERJECTION,
         other => panic!("unknown WordUsage flag: {}", other),
      }
   }
}

impl std::fmt::Display for WordUsage {
    fn fmt(&self, f: &mut std::fmt::Formatter) -> std::fmt::Result {
       let out = format!("{:?}", self).replace(" | ", ",").to_lowercase().replace("none", "");
       write!(f, "{}", out)
    }
}
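With raise() and Display in hand, save and load reduce to gluing them together. The sketch below assumes a one-entry-per-line text format of `word<TAB>usage,usage`, and uses a trimmed-down three-flag WordUsage stand-in so it compiles without the bitflags crate; the real type would reuse the article's raise() and Display directly:

```rust
// Sketch of text-file persistence for word usages. The format
// (word<TAB>comma-separated-flags) and the trimmed-down WordUsage
// stand-in are assumptions for illustration.
use std::collections::BTreeMap;

#[derive(Clone, Copy, PartialEq, Debug)]
struct WordUsage(u32);

impl WordUsage {
    const NOUN: u32 = 1;
    const VERB: u32 = 2;
    const ADJECTIVE: u32 = 4;

    fn empty() -> Self { WordUsage(0) }

    // Mirror of the article's raise(): set a flag by name.
    fn raise(&mut self, flag: &str) {
        self.0 |= match flag.to_uppercase().as_str() {
            "NOUN" => Self::NOUN,
            "VERB" => Self::VERB,
            "ADJECTIVE" => Self::ADJECTIVE,
            other => panic!("unknown WordUsage flag: {}", other),
        };
    }

    // Mirror of the Display impl: comma-separated lowercase names.
    fn to_line(&self) -> String {
        let mut names = Vec::new();
        if self.0 & Self::NOUN != 0 { names.push("noun"); }
        if self.0 & Self::VERB != 0 { names.push("verb"); }
        if self.0 & Self::ADJECTIVE != 0 { names.push("adjective"); }
        names.join(",")
    }
}

// Serialize a dictionary of word usages into the text format.
fn save(words: &BTreeMap<String, WordUsage>) -> String {
    words.iter()
        .map(|(w, u)| format!("{}\t{}", w, u.to_line()))
        .collect::<Vec<_>>()
        .join("\n")
}

// Parse the text format back, calling raise() for each flag name.
fn load(text: &str) -> BTreeMap<String, WordUsage> {
    let mut words = BTreeMap::new();
    for line in text.lines().filter(|l| !l.is_empty()) {
        let (word, flags) = line.split_once('\t').expect("missing tab");
        let mut usage = WordUsage::empty();
        for flag in flags.split(',').filter(|f| !f.is_empty()) {
            usage.raise(flag);
        }
        words.insert(word.to_string(), usage);
    }
    words
}

fn main() {
    let mut words = BTreeMap::new();
    let mut run = WordUsage::empty();
    run.raise("noun");
    run.raise("verb");
    words.insert("run".to_string(), run);

    let text = save(&words);
    assert_eq!(load(&text), words); // round-trip is lossless
    println!("{}", text);
}
```

Because load() routes every flag name back through raise(), the parser and the in-memory representation can never drift apart.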

Parsing a sentence with a Regular Language

To recognize a sentence in our language, which is a precursor to machine learning, we should first define the result of a parse. For this step we will record a ParseLine for each possible Grammar Vertex that is currently in use. If no grammar rule produces a given Grammar Vertex, we will mark it as inactive. Otherwise we will record the path of Word Usages that produced that active Grammar Vertex. We can also merge ParseLines that reach the same Grammar Vertex. This keeps the size of our Parse Result from growing beyond O(n·s), where n is the number of Grammar Vertices we have defined and s is the length of the parsed sentence.

pub struct ParseLine {
   pub passed: Vec<WordUsage>,
   pub at_node: usize,
}

pub struct ParseLines {
   pub lines: Vec<ParseLine>,
}
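The merging step described above might look like the sketch below. The first-wins policy and the `WordUsage` type alias are assumptions made so the example stands alone; the real merge would operate on the bitflags type:

```rust
// Sketch of ParseLines merging: when two lines arrive at the same
// grammar vertex we keep only one, bounding the number of lines by
// the number of vertices. `WordUsage` is aliased to u32 here so the
// example compiles on its own.
type WordUsage = u32;

#[derive(Clone, Debug, PartialEq)]
pub struct ParseLine {
    pub passed: Vec<WordUsage>,
    pub at_node: usize,
}

pub struct ParseLines {
    pub lines: Vec<ParseLine>,
}

impl ParseLines {
    // Add a line unless another line already occupies its vertex.
    // Which duplicate survives is arbitrary; here it is first-wins.
    pub fn merge(&mut self, line: ParseLine) {
        if !self.lines.iter().any(|l| l.at_node == line.at_node) {
            self.lines.push(line);
        }
    }
}

fn main() {
    let mut result = ParseLines { lines: Vec::new() };
    result.merge(ParseLine { passed: vec![1], at_node: 3 });
    result.merge(ParseLine { passed: vec![2], at_node: 3 }); // dropped
    result.merge(ParseLine { passed: vec![1, 2], at_node: 5 });
    assert_eq!(result.lines.len(), 2);
}
```

Since each Grammar Vertex appears at most once and each surviving path is at most sentence-length, the O(n·s) bound from the paragraph above follows directly.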