I keep reading articles claiming you need to fit mental models of code in your head, with analogies to spatial maps. That is not how your brain processes code. You have a spatial center that maps 3D objects to what are effectively miniature 3D models encoded in neurons. You can grasp some(!) code this way if its structure resembles what you write yourself, but most code in larger codebases is a ruleset, like your country's tax code: it will only fit in your language processing center and demands a lot of working memory.
Now some people might be able to hold more than Miller's 7±2 items there and juggle twenty interconnected concepts, but those are mostly people whose main work is exactly that business logic.
These articles mix up same-form dimensional mapping, like audio or visual input, with distinct data. It's similar to why it's easy to replicate audio and images, but not smell: your nose picks up millions of different molecules, and each receptor locks onto a specific one.
Thinking you can find general rules here is exactly why LLMs seem to work but can never be inductive: they map similarities in higher-dimensional space, they do not reason. And the same mix-up happens here: you map this code to a space that feels like home to you, but that mapping will not carry over to reading software built for another purpose, a different processing pipeline, language, or form.
If that assumption were correct, all humans would need to do is train on reading assembly, and then, magically, all bugs would resolve!
Maybe, if you want to understand code with both hemispheres, map it to a graph. But trying to make strategies from spatial recognition work for code is like trying to make sense of your traffic law by the length of its paragraphs.
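If you want a concrete starting point for that graph view, here is a minimal, purely illustrative sketch (the call_graph helper and the example source are mine, not from any of those articles): it uses Python's ast module to turn functions into nodes and their calls into edges, so the ruleset becomes something you can actually inspect rather than hold in working memory.

```python
# Minimal sketch: extract a crude call graph from Python source,
# so functions become nodes and calls become edges.
import ast
from collections import defaultdict

def call_graph(source: str) -> dict[str, set[str]]:
    """Map each function name to the plain-name calls inside its body.

    Deliberately simplistic: ignores methods, attribute calls, and
    nested scoping; enough to see the shape of the ruleset.
    """
    tree = ast.parse(source)
    graph: dict[str, set[str]] = defaultdict(set)
    for node in ast.walk(tree):
        if isinstance(node, ast.FunctionDef):
            for call in ast.walk(node):
                if isinstance(call, ast.Call) and isinstance(call.func, ast.Name):
                    graph[node.name].add(call.func.id)
    return dict(graph)

if __name__ == "__main__":
    example = '''
def parse(line):
    return line.split(",")

def load(path):
    with open(path) as f:
        return [parse(l) for l in f]
'''
    for caller, callees in call_graph(example).items():
        print(caller, "->", sorted(callees))   # e.g. load -> ['open', 'parse']
```

Feed the edges into any graph viewer and you get a picture you can reason about with both hemispheres, instead of a wall of paragraphs.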