In this latest study from Mesulam's lab, there's a pretty illustration of what goes on when we read. When we read a word, we see the group of letters in front of us, recognize the sound that group of letters is associated with, and recognize the meaning it carries.
The brain is quite a computer. It uses slightly different networks to 'see', 'hear', and attach 'meaning' to words, but the three networks also share a common area in the left side of the brain. In the figure below, the panels show, from top to bottom, brain activation patterns for anagrams, homonyms, and synonyms, and finally the area of the brain activated by all three.
What the pictures show is why it's possible to have distinct word-processing problems that selectively affect how words are heard (phonology), seen (orthography), or understood (semantics). Some of this news will be reassuring to dyslexic individuals who have been badgered into literacy programs that use the wrong approach for them (phonics when they are struggling with orthography, orthography when they are struggling with phonics, etc.).
In the next era of neuro-education, the challenge will be to see whether teaching strategies can be designed as specifically for students as some of these scans. We'll also have to see whether the cognitive pinpointing of the sources of reading and writing difficulties improves as teachers learn more about brain-based differences in their students' learning.
Language and fMRI