> In 1956, the renowned cognitive psychologist George Miller published one of the field’s most widely cited papers, “The Magical Number Seven, Plus or Minus Two.” In it, he argued that although the brain can store a whole lifetime of knowledge in its trillions of connections, the number of items that humans can actively hold in their conscious awareness at once is limited, on average, to seven.
So your mind can hold 4-5 things at the same time, not 7, or it goes "out of sync".
...and you may not be able to do a handstand. But if you learn the technique, you can do one easily within a few weeks or less. Memory is no different: learn the techniques of recall and you will be able to remember more.
Do you have any evidence that this applies to short-term working memory? That's what the article is about. The limit seems to be 3-7, and it's not clear that it's malleable.
You are correct, and I can neither confirm nor deny your question, as I've been out of (mental) shape due to the time of day. I don't think I did, last time I tried.
I just wrote up a human memory section in my dissertation a few days ago.
There seem to be two limits on working memory that are more widely accepted than George Miller's 1950s figure: (A) 3-5 chunks [1], or (B) it depends greatly on the context and the type of chunk (e.g., visuospatial).
[1] N. Cowan. Metatheory of storage capacity limits. Behavioral and Brain Sciences, 24(1):154-176, 2001.
I'm curious how this might apply to your average tech worker/programmer type person... Are there common missteps we make that overtax working memory, and how can we adjust our working rituals to mitigate them?
Open plan offices are one such misstep. We could adjust our working habits to be more privacy focused, erring on the side of assuming that most hours of the day should be quiet, private hours spent on contemplative work in a private setting - especially when the work is ultimately collaborative.
Hours spent in meetings or group settings that require dynamic, real-time audio should be seen as rare and exceptional. That form of collaboration is needed much less often than asynchronous collaboration, built out of individuals taking a more contemplative, personally customizable approach to organizing thoughtful work habits that don't disrupt others.
(Note: I'm not saying I endorse this particular study on working memory, just that open plan offices are a problem in this area of productivity, whether or not the study turns out to be mostly clickbait.)
In my opinion unnecessary abstractions -- especially splitting code up into different files -- are a tax on working memory and make it harder to reach a flow state. Every layer of indirection takes up residence in working memory, so it had better be necessary; otherwise it should be omitted, even at the cost of some code duplication.
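A minimal sketch of what I mean, with made-up names (nothing from a real codebase): the indirect version makes you hold two extra class names in working memory just to follow one multiplication.

```python
# Hypothetical example: an "enterprise" way to add 20% tax...
class TaxStrategy:
    def apply(self, amount: float) -> float:
        return amount * 1.2  # add 20% VAT

class TaxStrategyFactory:
    def create(self) -> TaxStrategy:
        return TaxStrategy()

def total_with_tax(amount: float) -> float:
    # Two extra names to keep in your head for one multiplication.
    return TaxStrategyFactory().create().apply(amount)

# ...versus the version that fits in one glance.
def total_with_tax_flat(amount: float) -> float:
    return amount * 1.2  # add 20% VAT

print(total_with_tax(100.0), total_with_tax_flat(100.0))  # 120.0 120.0
```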
> In my opinion unnecessary abstractions -- especially splitting code up into different files
It gets especially bad when someone else wrote that code and you're just getting into it. Layers upon layers of abstraction scattered across classes, functions, and files. Each adds very little on its own, but you kinda have to keep it all in your head (or write it down, as I do) if you're planning to grok it and fix that bug.
It's kinda like reading a piece of code not in its final form, but as a pile of diffs being applied to it. Fun. :-)
I find older style procedural code with longer functions generally much easier to get into than any OOP.
As a counterpoint, I find breaking code up into separate classes and functions to help greatly with minimizing what I need to keep in my working memory.
I don't often need to know exactly what every single line does all at once, nor can I actually keep that much information in my head anyway. I'd rather be able to say "that's the function that frobbles the subductor, it's 5 lines long, and right now I don't care how it does it."
Sure, the bug might be in that function - but if it's only 5 lines long and does one simple thing, I can write a bunch of tests for that one thing and work it out. When the code is broken up like this, I can keep an even larger system in working memory all at once.
I think the "functions should be small and do only one thing" maxim is important but needs to be paired with "things should be small and done in only one function." The more you split logic across multiple functions, the more likely that the bug won't be in any one function but in the way that you composed them.
This is one of the most astute criticisms that I've seen of coding style, along with your earlier post regarding locality of reference and duplication of statements vs duplication of intent. I think these are really great principles for writing clear and beautiful code. Are these ideas something that you've developed from experience? Is there a wider context and conversation around this kind of thing, that you know of?
Locality of reference isn't just important for CPUs!
I worked with a guy once who loved to split everything into tiny classes made of tiny functions, each in its own tiny source file. You'd hop through four or five different files to do the simplest thing, and it was mindbending to try and debug.
Also with regards to code duplication, it's important to distinguish duplication of statements from duplication of intent. If the code's actually doing the same thing then sure, collapse it down into a function. But if the intent of each piece is different and they just happen to involve the same operations, then removing the duplication is adding a dependency to both places, and may very well not be a net win.
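A hedged illustration of that distinction, in a made-up domain: the two checks contain the same statements today, but their intents differ, so merging them couples two unrelated policies.

```python
# Two validations that happen, today, to contain the same statements...
def is_valid_username(name: str) -> bool:
    # Intent: must fit the legacy 20-character database column.
    return 0 < len(name) <= 20

def is_valid_display_name(name: str) -> bool:
    # Intent: keep it short so the profile header doesn't wrap.
    return 0 < len(name) <= 20

# Collapsing both into one shared helper couples the two intents:
# the day the UI allows 40-character display names, the "deduplicated"
# check either gets forked again or silently breaks username storage.
```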
Maybe this explains the appeal of single-file Vue components. Having the template, logic, and styles together in one file makes the overall component much easier to reason about.
We could get rid of git, which is a massive tax on our thought process.
Or rather, we could have a considerably simplified version - a very small set of commands that are consistent with one another and 'undoable', and a simpler conceptual model - and only allow admins/superusers to touch 'real' git.
I'm deadly serious. I don't care how brilliant a dev is; I find that git's conceptual model, the variety and inconsistency of its command patterns, and the variety of usage models add up to an unnecessary mental tax on devs.
I don't mean to turn this into a git discussion, but I suggest it as an example of one of many 'sneaky taxes' that introduce otherwise reducible complexity for all of us.
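For the sake of argument, here is a sketch of the kind of front-end I mean: a hypothetical tool (call it `sgit`) that just shells out to ordinary git and exposes a handful of consistent, reversible verbs. Not a real project, just a thought experiment.

```python
#!/usr/bin/env python3
"""sgit: a hypothetical, deliberately tiny front-end over real git.
Four consistent verbs, each reversible; anything fancier is for admins."""
import subprocess
import sys

def git(*args: str) -> None:
    subprocess.run(["git", *args], check=True)

def save(message: str) -> None:       # snapshot every change in one step
    git("add", "--all")
    git("commit", "-m", message)

def undo() -> None:                   # undo the last save, keep the edits
    git("reset", "--soft", "HEAD~1")

def sync() -> None:                   # fetch/rebase, then publish the branch
    git("pull", "--rebase")
    git("push")

def history(count: str = "10") -> None:  # short, linear view of recent saves
    git("log", "--oneline", f"-{count}")

if __name__ == "__main__":
    cmd, *rest = sys.argv[1:] or ["history"]
    {"save": save, "undo": undo, "sync": sync, "history": history}[cmd](*rest)
```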
"Miller thinks the brain is juggling the items being held in working memory one at a time, in alternation. “That means all the information has to fit into one brain wave,” he said. “When you exceed the capacity of that one brain wave, you’ve reached the limit on working memory.” "
File it under "path dependence," Pops. TIE (This is Evolution.) (a la "TIL")
It's interesting - we effectively have delay-line memory, but rather than bits, the individual chunks we can carry are these big fuzzy nebulous concepts, which possibly encode quite a lot of information in one chunk - albeit with many errors.
Consider that there are ways to keep semi-coherent, non-quantizing thoughts. You can synthesize such an abstract concept if you step through a list of 'named' concepts that overlap with it, to generate the combination of all the 'named' concepts. You can even associate one such abstract concept with another, like when you learn a single vocabulary item, but the repetition you need to enable long-term memory has a strong impedance mismatch with the way you generated that abstract concept.
Also, there are ways to form thought structures that you can't use a normal single-step debugger on, as the intermediate states have no useful interpretation. They are, however, much, much more capable - e.g., they can give you the ability to scan the aisles of a large store while walking through it reasonably fast, turning your head left and right. Your eyes scan roughly at first, then move to get more detail on those areas of an aisle that the thought process wants more detail on. Due to the inherent delay/reaction time, this needs an interleaving of about 3-6 steps between subsequent, sequential viewings of the same area, if all these viewings are to be decided with full knowledge of that area. The higher the interleaving, the higher the load on this working memory, but the fewer multi-viewings of the same area have to be done without intermediate deciding, as you can't handle more areas than this interleaving factor at the same time.
A nice aspect of this fuzzy nature of associations is that you can directly combine the fuzzy associations from two such abstract concepts you synthesize. Don't refine those concepts too much, though. Compare, e.g., the visual impressions from one historic city center with those of another. Don't try to list each and compare them one by one; try to get a fuzzy state of never-individually-enumerated visual impressions from one city center, place it to the side (you can imagine it on a side of a virtual space with rough directions relative to your brain, but don't try to relate it to the physical world, or you snap out of the coherence and possibly lose part of the memories), and then gather the same for the other city center on the other side.

Maybe switch between them a few times, like 2 or 3 times each, and then just dissolve the separation, e.g., drop the association of which side each concept was on, along with all the other mental-space location information. Just consider it no longer important; don't think about it in the moment. Prepare how to drop one of the roughly 4 or 6 low-complexity, semi-abstract thoughts (there should be no linguistic expression shorter than 7 syllables that accurately describes it - this holds for each thought separately) before you do the fusion.

If you then run through the things you get when you brainstorm small linguistic expressions (maximum 5 syllables, preferably fewer) or visual things (drawable to recognition in under 30 straight lines of finite length, at the visual recognition skill common in Pictionary), you get what both of these city centers have in common, with much of the sampling done on the combined probability distribution this essentially is. The reduction in noise/errors is related to what a quantum computer does, but the limits are sadly much lower.
Be careful: you might like to use this to get rid of (some) emotions, and that can hit feelings like hunger/thirst without you trying to. It usually takes years to get a good handle on those again after you lose them.
Manipulation of these lower-level/monolithic thought processes can be done by creating a self-feedback process that is trained to report back as a one-dimensional, non-quantized "feeling". You probe it much as you consciously probe a specific bodily sensation, but by asking for an abstract concept instead of a region of your body (naming it creates too much overhead, as you don't need to refer to it directly from linguistic communication). It's like how you can feel how dry your eyes are when they are dry, with less quantization than you'd use if you tried to put it in words, and with less fudging than if you tried to put it in numbers (even if those have decimals).
Is there evidence of a correlation between the size of working memory and brilliance/genius? Or is it a universal human mind limit?
Although, for any even moderately complex problem, abstractions or chunks of some kind are necessary, so proficiency in recognising, using, and creating them is more important than being able to hold one or two more - though being able to hold more likely helps, especially in managing them. Perhaps it helps a lot, and the development of some abstractions may actually be impossible without greater working memory.
The trend of turning untested hypotheses into clickbait headlines is getting really bad. There is little evidence so far that the headline is true, it's just one hypothesis.
Not nearly as bad as taking one experiment and extrapolating it as conclusive. To clarify: there isn't enough retesting of experiments to make good science from.
Or when an experiment's results have been challenged or discredited, yet people continue to recall the original results. For example, the Stanford prison experiment.
However, yes, poorly constructed headlines meant to attract a click rather than convey true information are still bad. Though that might be a fault of the collective being more interested in snipping headlines than reading the articles behind them.
I work in human memory neuroscience / linguistics.
1. The 7-plus-or-minus-two study is entirely bunk. Welsh digit words are longer, and Welsh speakers' spans come out below 7. This WM capacity is about verbal rehearsal, not a fixed number of items.
2. Here's background reading on the complexity and debates over oscillation functions: https://neurophysics.ucsd.edu/courses/physics_171/Buzsaki%20...