Let us start with an example: comparing conventional and digital photography.
Conventional photography requires a huge amount of physical material and records with physical tools. It requires later processing, but that processing remains available: magnification is possible because of the physical representation (given fine enough lenses and photographic material).
Digital photography digitizes the image at the input. It requires no physical material, only the infrastructure. Later processing is much more limited, because the granularity of the image depends on decisions made at the moment the image is created.
We hold the false belief that our brain works like conventional photography - and for some people it really does (gifted, savant-like autism). So we have proof that information can be stored in the human brain "as is"; but using that information in daily life still requires post-processing: pruning, separating the usable parts from the irrelevant ones. Such savants are excellent at recording, but completely lost in ordinary human life, where they would need processed, simplified, "softened" information: practical knowledge.
In ordinary human thinking, this pruning is done on the fly, while sorting out our daily experiences, most probably during sleep. When we wake up, the pruned patterns have been recorded in our long-term memory as small changes to it, while the unnecessary parts start fading away. In the end, we do not really "recall" memories: we rebuild them from our knowledge and from very faint, blurry fragments into how it should have been, and believe that it happened that way.
This is why a trained mind is so important: the quality of the initial "digitization", or rather "patternization", depends on it. The brain of an experienced viewer automatically decomposes the situation or task into mere parameters of existing patterns, instead of trying to hold as much raw information as possible for later processing. In other words, previous processing effort is reused in the preprocessing, while current actions refine, improve, or change the existing pattern set, which in turn raises the preprocessing quality yet again. In the same way, recall can be much more efficient: memories are rebuilt from the parametrized patterns.
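To put the difference between raw recording and "patternization" in digital terms, here is a tiny, purely analogical Python sketch (the names and structures are invented for illustration, not a model of the brain): instead of storing raw data for later processing, only the parameters of an already-known pattern are stored, and "recall" rebuilds the scene from those parameters.

```python
from dataclasses import dataclass

# Raw recording: keep as much data as possible and hope to process it later
# (the "conventional photograph" strategy).
raw_scene = [("pixel", x, y, "grey") for x in range(4) for y in range(4)]

# Patternized recording: store only the parameters of a pattern we already know.
@dataclass
class Circle:                      # an "existing pattern" in the viewer's repertoire
    x: float
    y: float
    radius: float
    colour: str

scene_as_patterns = [Circle(x=1.0, y=2.0, radius=0.5, colour="red")]

def recall(patterns):
    """Recall = rebuild the scene from pattern parameters, not from raw data."""
    return [f"{p.colour} circle at ({p.x}, {p.y}), radius {p.radius}" for p in patterns]

print(recall(scene_as_patterns))   # ['red circle at (1.0, 2.0), radius 0.5']
```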
What can we say about the quality of this recording and rebuilding? It depends on the quality of the pattern set. If the pattern set is good, the patternization is good; if it is wrong, the recall is worse than simply taking the picture.
What makes a pattern set good? The degree to which it can be improved and adapted to new experiences. That depends on its structure, or more precisely, on its granularity.
The initial model is always wrong, because it is itself built on memories that were filtered through initial, naturally wrong concepts. The original monolithic concepts break up into smaller parts under new, conflicting experiences, and the model has to be rebuilt from the fragments.
This is the key: how to find the proper separation lines between the components, how reliable those components become, and how reliably we can validate the separation and rebuild the whole structure from the new components.
This ability can be improved only by constant practice and by refusing to accept partial results, even while knowing that the results will always remain partial regardless of our best efforts.
Programming and architecting are the best training ground for this. First, we must learn not to trust our initial perception and output quality (source code errors), and to accept external, objective constraints (the language, components, and environment we work in).
Then we meet the constant need to refactor our code: although it works fine, its structure does not allow reuse because of the overly tight dependencies within it. Getting used to this unconditionally changes our whole approach to the world and to ourselves, and we apply it automatically to everything we touch.
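To make the tight-dependency problem concrete, here is a minimal, hypothetical Python sketch (the functions and file format are invented for the example, not taken from any real codebase): a routine that works fine but fuses parsing, aggregation, and output into one block, followed by a refactored version whose parts can be reused or replaced independently.

```python
# Tightly coupled: it works, but nothing in it can be reused on its own.
def monthly_report(path):
    rows = []
    with open(path) as f:
        for line in f:
            name, value = line.strip().split(",")
            rows.append((name, float(value)))
    total = sum(v for _, v in rows)
    print(f"Monthly total: {total:.2f}")       # output format is baked in


# Refactored: each step is a small component with a clear boundary.
def parse_rows(lines):
    """Turn 'name,value' lines into (name, float) pairs."""
    return [(name, float(value))
            for name, value in (line.strip().split(",") for line in lines)]

def total_value(rows):
    """Aggregate the numeric column."""
    return sum(value for _, value in rows)

def render_total(total):
    """Presentation is a separate, swappable concern."""
    return f"Monthly total: {total:.2f}"

def monthly_report_v2(path):
    with open(path) as f:
        rows = parse_rows(f)
    return render_total(total_value(rows))
```

The behaviour is unchanged, but the second version exposes the separation lines: parsing, aggregation, and presentation can now be recombined for other tasks without touching each other.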
We gradually understand that the initial organization of our code and components, the articulation of the initial requirement, is the key to good design. From refactoring existing components (post-processing) we move on to refactoring the requirements, the understanding of the task itself (preprocessing). At that point, as programmers, we no longer meet "hard tasks": if a requirement can be solved by computers at all, it can also be solved easily with existing components and patterns. The initial patternization of the task is fast, because we decompose it into known components and their network; and recalling the requirement, running the actual software, and applying our analysis to actual events all become very precise.
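As an illustration of this patternization of a requirement, here is a hypothetical Python sketch (all names and the data format are invented for the example): a new requirement, "import customer records, drop invalid ones, and summarize them", is not treated as a monolithic hard task but decomposed directly into a network of familiar, pre-existing patterns such as read, validate, filter, and aggregate.

```python
import csv
import io

# Familiar, pre-existing patterns: each is a known component we trust.
def read_csv(text):
    return list(csv.DictReader(io.StringIO(text)))

def is_valid(record):
    return record.get("email", "").count("@") == 1 and record.get("age", "").isdigit()

def summarize(records):
    ages = [int(r["age"]) for r in records]
    return {"count": len(records), "avg_age": sum(ages) / len(ages) if ages else 0}

# The "new" requirement is only the wiring of those known patterns.
def import_and_summarize(text):
    records = read_csv(text)
    valid = [r for r in records if is_valid(r)]
    return summarize(valid)

sample = "name,email,age\nAda,ada@example.com,36\nBob,broken-email,29\n"
print(import_and_summarize(sample))   # {'count': 1, 'avg_age': 36.0}
```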