It's an interesting exploration of ideas, but there are some issues with this article. Worth noting that it does describe its approach as "simple and naive", so take my comments below as corrections and/or pointers toward the practical complexities of this topic.
- The article says adjacency matrices are "usually dense", but that's not true at all: most real-world graphs are sparse to very sparse. In a social network with billions of people, the average out-degree might be around 100. The internet is another example of a very sparse graph: billions of nodes, but most nodes have at most one or maybe two direct connections.
- Storing a dense matrix means this approach can only work with very small graphs. A graph with one million nodes would require one-million-squared (10^12) matrix entries, which is about a terabyte of memory even at one byte per entry. Not practical.
- Most of the elements in the matrix would be zero, but you're still storing them, and when you do a matrix multiplication (one step of a BFS across the graph) you're still wasting time and energy moving, caching, and multiplying/adding mostly zeros. It's very inefficient.
- Minor nit: it says the diagonal is empty because nodes are already connected to themselves. That isn't correct in theory; self-edges (loops) are definitely a thing, and they live on the main diagonal. In fact the identity matrix, with ones down the main diagonal, is exactly the graph where every node has a single self-loop and nothing else.
- Not every graph algebra uses the numeric zero as its "zero": in the tropical (min-plus / max-plus) semirings the additive identity is positive or negative infinity, and numeric zero is a perfectly valid edge weight in those algebras.
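To make the sparsity point concrete, here's a quick sketch of my own (not from the article), using numpy/scipy: the same one-step BFS as a matrix-vector product over a dense and a sparse adjacency matrix. The graph sizes and edge counts are made up for illustration.

```python
import numpy as np
from scipy.sparse import csr_matrix

n = 1_000
rng = np.random.default_rng(0)
# ~5 random out-edges per node: sparse, like most real graphs
rows = rng.integers(0, n, size=5 * n)
cols = rng.integers(0, n, size=5 * n)

# Dense adjacency: n*n bytes, even though almost everything is zero
A_dense = np.zeros((n, n), dtype=np.int8)
A_dense[rows, cols] = 1

# Sparse adjacency: only the ~5n nonzeros are stored
A_sparse = csr_matrix((np.ones(len(rows), dtype=np.int8), (rows, cols)),
                      shape=(n, n))

# One BFS step from node 0 is a matrix-vector product in both cases,
# but the sparse multiply never touches the stored zeros
frontier = np.zeros(n, dtype=np.int8)
frontier[0] = 1
next_dense = (A_dense.T @ frontier) > 0
next_sparse = (A_sparse.T @ frontier) > 0

print(A_dense.nbytes, A_sparse.data.nbytes)  # 1,000,000 vs ~5,000 bytes
```

Same answer both ways; the dense version just pays for a million entries to use a few thousand.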
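And here's a toy illustration of the tropical point, again my own sketch in plain numpy rather than a real semiring library: one min-plus "multiply" per step relaxes shortest-path distances, with +inf as the additive identity and numeric zero as an honest edge weight.

```python
import numpy as np

INF = np.inf  # the additive identity of the min-plus semiring
# Edge weights; INF means "no edge". Note the legitimate
# zero-weight edge 0 -> 1, which a numeric-zero convention would erase.
W = np.array([
    [INF, 0.0, 5.0],
    [INF, INF, 1.0],
    [INF, INF, INF],
])

d = np.array([0.0, INF, INF])  # tentative distances from node 0
# Min-plus step: d'[j] = min(d[j], min_i (d[i] + W[i, j]))
for _ in range(2):
    d = np.minimum(d, (d[:, None] + W).min(axis=0))
print(d)  # [0. 0. 1.] -- node 2 reached via the 0-weight edge
```

Treating 0 as "no edge" here would delete the 0 -> 1 edge and report 5.0 instead of 1.0 for node 2.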
I don't mean to diss the idea; it's a good way to dip a toe into the math and computer science behind algebraic graph theory. But in production, or for anything but the smallest (and densest) graphs, a sparse graph algebra library like SuiteSparse would be the most appropriate choice.
SuiteSparse is used in MATLAB (`A .* B` calls SuiteSparse), FalkorDB, python-graphblas, OneSparse (a Postgres extension), and many other libraries. Its author, Tim Davis of Texas A&M (TAMU), is a leading expert in this field of research.
(I'm a GraphBLAS contributor and author of OneSparse)