After much procrastination, I am finally posting the results of my research into how the frequent use of computer programming languages affects the brain.
As followers of the blog will remember, after reading theories about how learning languages affects the brain, I wanted to know whether computer programming languages also affected it. With a few notable exceptions (e.g. Murnane, 1993), most research into the cognitive effects of computer programming seems to have treated programming as a problem-solving activity rather than a linguistic one. If computer languages were indeed languages, I thought it would make sense for them to affect the brain in a similar way to other languages.
This is quite a big subject, so I homed in on bilingualism. If computer programming languages are languages, then people who speak one language and can program to a high standard should be bilingual. Research has suggested that bilingual people perform faster than monolingual people at tasks requiring executive control – that is, tasks involving the ability to pay attention to important information and ignore irrelevant information (for a review of the “robust” evidence for this, see Hilchey & Klein, 2011). So, I set out to discover whether computer programmers were better at these tasks too. The bilingual advantage is thought to result from the effort involved in keeping two languages separate in the brain and deciding which one to use. I had noticed that novice computer programmers have difficulty controlling “transfer” from English to programming languages (e.g. expecting the command “while” to imply continuous checking; see Soloway and Spohrer, 1989), so it seemed plausible that something similar might occur through the learning of programming languages.
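For readers who don’t program, the “while” misconception is easy to show in a few lines. Reading the keyword as English suggests the condition is monitored continuously; in fact it is only tested once at the top of each pass through the loop. A minimal Python sketch (my illustration, not material from the study):

```python
# A novice reading "while" as English might expect this loop to stop
# the instant done becomes True. In fact the condition is only tested
# at the top of each pass, so the rest of the body still runs.
done = False
log = []

while not done:
    log.append("step 1")   # runs
    done = True            # the condition is now satisfied...
    log.append("step 2")   # ...but this line still executes

# The loop only exits when control returns to the "while" line.
print(log)  # ['step 1', 'step 2']
```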
Obviously, lots of bilingual people learn two languages from birth, and – young as some of them might be – this is clearly not the case for computer programmers. However, the effect has also been found in bilingual people who gained a second language in adolescence (Tao et al., 2011). Therefore, I compared 10 adolescent programmers (aged 14–17) and 10 professional programmers who had been programming since adolescence (aged 21–25) to age-matched controls. All participants were monolingual English speakers.
I used two computer-based tests – the Stroop test and the attention network task – both of which required participants to pay attention to one thing and ignore others. Unfortunately, I made an error in setting up the Stroop test, so I can’t be sure those data are reliable. However, data from the attention network task showed that computer programmers performed it faster than controls, and the difference between the two groups was significant (note that “significant” means we can be more than 95% sure the results didn’t happen by chance – it doesn’t mean there was a big difference in reaction times). Error rates did not differ significantly.
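For anyone unfamiliar with it, the attention network task includes “flanker” trials: participants respond to a central arrow while ignoring surrounding arrows that either agree (congruent) or conflict (incongruent) with it, and the reaction-time cost of that conflict indexes executive control. Here is a toy scoring sketch with invented reaction times – purely illustrative, not the software or data from the study:

```python
# Toy flanker-style scoring: the "conflict effect" is the mean reaction
# time cost (in seconds) of ignoring incongruent flankers. Times invented.
trials = [
    ("<<<<<", 0.52),  # congruent: flankers point the same way
    (">>>>>", 0.49),
    ("<<><<", 0.61),  # incongruent: centre arrow conflicts with flankers
    (">><>>", 0.64),
]

def is_congruent(stimulus):
    # Congruent if every arrow matches the centre one.
    centre = stimulus[len(stimulus) // 2]
    return all(ch == centre for ch in stimulus)

congruent = [rt for s, rt in trials if is_congruent(s)]
incongruent = [rt for s, rt in trials if not is_congruent(s)]

conflict_effect = sum(incongruent) / len(incongruent) - sum(congruent) / len(congruent)
# A smaller conflict effect is usually read as better executive control.
print(round(conflict_effect, 3))  # 0.12
```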
This was only a very small study and I’d be reluctant to make any grand claims based on it; the results would need to be replicated by other studies. Even if other studies did support these findings, it doesn’t necessarily mean that computer programming experience causes better executive control; it might be that people with better executive control are more likely to persist with computer programming. If the latter is true, one way to help people learn computer programming might be to teach them a foreign language first. I did notice that a lot of programmers seemed to be bilingual – I think it would be interesting to do a survey to see if that is supported by hard facts.
Anyway, if you’re interested in picking apart the detail, you can download the full report here. If you do manage to struggle through, let me know what you think – random theories welcome! I’d love it if it gave people ideas for bigger studies. Huge thanks to everyone who took part and to all those who helped, especially the team at Young Rewired State. I couldn’t have done it without you all.
You have a little typo.
“(note that “significant” means that we are more than 95% sure that the results happened by chance – it doesn’t mean there was a big difference in reaction times)”
You are missing the negation, the results didn’t happen by chance.
What about the hypothesis that people who can pick up programming just have a stronger mental ability on average? It’s not that the programming helps with the task, it’s that those who can do the task well tend to also be able to pick up programming. It would be good to do this experiment on kids before and after they take their first programming class.
Thanks for pointing that out – quite an important typo! I’ve amended it in the text. And yes, it’s quite possible that people who succeed in programming have certain cognitive advantages. The problem is, no-one has successfully pinned down what these are (more on that here). I agree that the kind of experiment you suggest would be a good idea: at the moment it’s not possible to determine the direction of influence, because learning to program is a choice – unlike bilingualism, which is usually thrust upon children by circumstance, which is why it’s thought that bilingualism must be causally related to better cognitive control.
[…] Hannah Wright has an interesting study on cognitive advantages of programming, which she find are similar to those of bilinguals: […]
Hi Hannah, interesting stuff!
Another possible route to explore is the notion of code as mathematical narrative:
Thank you. I’ve got hold of a copy of your paper and I’m looking forward to reading it over Christmas 🙂
“If computer programming languages are languages…” = bad assumption. It’s unfortunate that the word “language” is used to refer to programming languages. It tends to lead one to believe there is a relationship between the two that doesn’t actually exist. Another example: “intelligence” and “artificial intelligence”. I’ve been programming since I was a child (40 years ago), and in spite of being top in my field, it never once made the study of foreign languages easier. My brain was already logical, so spoken-language syntax was easy enough to grasp, but that is only a small aspect of language. It wasn’t until I actually moved to another country at the age of 42, and put in a lot of hard work and time, that I learned a second language well enough to understand it and think in it. Years later, I was finally better able to grasp languages at a deeper level, and learning new ones is not as hard. On the flip side, I have worked (and continue to work) with many, many bilingual (and trilingual, and quadrilingual) programmers, and never once did I get the impression that being bilingual had any correlation with their programming abilities. Sorry, my personal experience denies any connection.
The article makes none of the claims you mention. Also, anecdotal evidence such as yours is totally different from a controlled study.
Programming languages ARE languages, they have rules of structure, syntax, and grammar. There are common conventions and idioms. All of these concepts have parallels in other languages. Just because they are man-made or derived doesn’t mean they are any less a language than one that arose from years of use.
Totally right – a programming language follows every rule a “normal” language follows. Both can be perfectly represented in the form of trees, and both have lexical, syntactic and semantic rules – very complex ones in each case. And just as “human” languages need verbs, adjectives and so on to be spoken correctly, programming languages have identifiers, keywords, actions, operations, types…
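The tree point is easy to demonstrate: every mainstream language implementation parses source text into a syntax tree. Python, for instance, exposes its own parser through the standard `ast` module (a quick illustration added here, not part of the original comment):

```python
import ast

# Parse a one-line program into an abstract syntax tree.
tree = ast.parse("total = price * quantity")

# The assignment statement is the root's first child...
stmt = tree.body[0]
print(type(stmt).__name__)          # Assign

# ...and its right-hand side is a binary-operation node, with the
# operands as leaves – exactly the kind of tree the comment describes.
expr = stmt.value
print(type(expr).__name__)          # BinOp
print(expr.left.id, expr.right.id)  # price quantity
```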
I agree with some comments below:
– it’s not that programming makes you able to learn other languages just like that, it’s more like, you already have something that makes it easier for you
– that something also makes you very attracted to programming (my personal theory is that it’s because of the logical structure and view of everything, from applying abstraction to the very different way programmers see the world compared to non-programming people)
– after learning to program, it gets even easier to learn new languages (again, I’d say it’s because of the logical structuring of everything that happens in your brain)
That’s the way I see it. I have a Computer Science bachelor’s degree from Venezuela. My mother tongue is Spanish; I also “mastered” English at around 12 years old, learned to program at 17 – in the imperative, functional and logical programming paradigms – and learned Portuguese a few years ago.
Your personal experience is irrelevant. The quoted fragment is a premise for research, not an assumption, and it is borne out by preliminary research.
The word “if” indicates a hypothesis that requires testing. You make an assumption that programming languages aren’t languages, yet your assumption is based on shaky ground.
You say that programming hasn’t made learning other languages easier. Well, knowing French doesn’t mean you’ll pick up Japanese faster, so what’s your point?
Let’s take some features of a language:
A language has syntax.
A language has grammar.
A language can make assertions.
A language can form definitions.
A language outlines procedures.
A language communicates.
Unless there’s a feature I’m missing that distinguishes a programming language from a natural language – besides the fact that natural languages aren’t deterministic – I think there’s a strong case for saying that programming could be a language, as far as the human brain is concerned. Hence… experiments. TO THE LAB!
Go ahead and state what you just did in a programming language.
word['if'] => testing('hypothesis');
while (assumption.You == shaky.ground()) { make.You(assumption('programming languages arent languages')) }
say.You('programming != easier(learn(language, other))');
if (knows_french($person) == true) { faster(learn(language, japanese)) = false }
$language_features = array('has syntax', 'has grammar', 'can make assertions', 'can form definitions', 'outlines procedures', 'communicates');
if (feature not in $language_features && distinguishes('feature') == true && distinguishes('feature') != 'non-deterministic') { die() } else {
think.I(strong_case(programming ~= language)) WHERE 'human brain' == true }
to_the_lab(experiments)
Reblogged this on a record of a developer and commented:
Very interesting
@Anthony, this wasn’t an assumption on behalf of the author; rather, it is the statement she is attempting to test.
Personally, I do find that better programmers are better at languages, from a syntactic and grammatical point of view. They appear able to recognise rules, and where those rules are broken, even without knowledge of the particular language at hand.
I feel that aptitude for solving proofs in Euclidean geometry is a good indicator of feeling comfortable with programming (and probably the converse as well – I programmed long before taking a geometry class).
They both follow the model of
Input -> Series of logical operations -> Output
Some studies that test proof solving in geometry with prior programming knowledge vs. no programming knowledge might be interesting.
Alternative 1: with no programming knowledge, successful proof solvers could be followed for a few years to see how they do when learning programming later.
Alternative 2: Start with a very young group that knows neither geometry proofs nor programming. Teach a group one of these items. Then test both groups on the other. Could work with spoken languages as well.
Boolean algebra proofs and expressions would also make a good testing platform. If you can add, subtract, and multiply, then you should be able to perform simple Boolean algebra with a few hours of instruction.
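To illustrate how little machinery simple Boolean algebra needs, an identity such as De Morgan’s law can be checked by brute force over every truth assignment – a throwaway Python sketch:

```python
from itertools import product

# Verify De Morgan's law, not (a and b) == (not a) or (not b),
# by checking every combination of truth values.
for a, b in product([False, True], repeat=2):
    assert (not (a and b)) == ((not a) or (not b))

print("De Morgan's law holds for all inputs")
```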
Good thoughts here. I would expect a lot of similarity between learning computer languages and human languages – although, to be sure, it takes much, much more time to learn another human language well. I’ve got an unusual background: an M.A. in Applied Linguistics and ten years teaching university-level English as a Second Language, then a programming degree and thirteen years doing that.
I think the way the brain processes the two types of languages is very similar. For example: if you already know Spanish you will learn Italian quickly, but the Spanish won’t help you with Japanese. Knowing Java will be more of a benefit when learning C than when learning COBOL. Also, if you know multiple languages, you are more likely to mix up your Spanish and Italian than you would be to mix up your Spanish and Japanese. Likewise, you are more likely to try to do a Java thing while writing C than while writing COBOL.
But, to get back to the point that it is harder to learn a human language well: I think if young kids (pre-school, grade school) learn two languages as they grow, they will have the ability to be good programmers later. On the other hand, if you start teaching your three-year-old to program, they may become a great programmer, but I don’t believe that will go as far towards helping them learn Spanish in high school. Learning a human language creates a far more robust “structure” in the brain, which computer languages can easily slide into. Learning a computer language first doesn’t create a cognitive structure with enough complexity to later overlay a human language on top of it.
So make sure your kids seriously study a human language first (or at least alongside programming).
I’m interested in how the brain treats computer languages versus how it treats natural languages. I’d like to see the MRI results on that!
Me too. Sadly I have yet to persuade anyone to hand me the keys to an fMRI scanner. I live in hope though.
Just a thought: programmers can be bilingual because they have their mother tongue and English is very much the lingua franca of programming.
Well, let me start this thing like this:
For people who have never heard spoken language, it is difficult to learn to speak. There are many cases of this in nature, where animals have raised human children. The thing is, those kids can’t develop good language skills.
My point is, if you don’t develop certain centres of your brain, you are left behind in that area, and without the means to learn that skill.
Yes, there ARE SOME other points of view, but this has been neglected for a long time. And yes, it looks like these things can pass into your genes, so for some time the kids of your kids may not have those genes at all…
Well, that one goes for education and so on.
A key theory/finding within the field of linguistics is the idea that, as children, we have a physiological capacity for language that disappears at around ages 7–9. It’s called the critical period hypothesis, and it is based largely on the anecdotal evidence of feral children: depending on the age at which they are integrated into society, they will either pick up language or not.
This also applies to true bilingualism, which is an even more interesting topic. A child raised with exposure to two languages develops two separate vocabularies at the same rate that another child might learn one. Again, this ability peters out with age.
I’d argue that this study should take note of when a child starts learning a programming language, as the age at which they start may well be an indicator of their later fluency and capacity for additional learning.
As for the discussion as to whether a programming language equates to a spoken language, I’d argue that in regards to learning it largely is. There may not be intonation to learn, but there are still meanings, and a great deal of syntax to learn.
At school I struggled to learn French and ultimately failed to pass my GCSE. Some 25 years and numerous programming languages later I needed to learn some German whilst working with a subcontractor on the German/Swiss border and was surprised how relatively easy I found it to get a basic functional vocabulary and ability to understand spoken German. I’m now wondering if having to learn a new language every 2-3 years was a factor in my perceived improved ability to learn a human language.
I think I’m just the person your study tries to analyze.
My native language is Portuguese, but my parents put me in English classes before my adolescence and I was a fluent English speaker by the age of 17.
I started programming by accident: I found a programming IDE (VB 5 at the time) while I was looking for drawing software. I was 15 years old and became instantly hooked. From there I taught myself, through books, how to program in several languages and frameworks.
Today I am 29 years old and consider myself someone who lacks the ability to multi-task.
If I’m performing a brain-intensive task, someone can speak to me for minutes and I won’t hear a word.
If I am parking my car, I also can’t hear anyone or see anything that is not relevant to the parking task.
People usually are impressed by my programming and parking skills 🙂
So maybe I’m not bad at multi-tasking, maybe I’m good at executive control :D.
I believe this. Personally, I speak a few languages, and it appears to me that programming languages are similar to spoken languages in the way they affect the brain. I’ve dreamed in Pascal and in C#, for example, and I’d say that’s a fairly strong indication that the concept is valid.
When I’m sitting down to write a program, it’s less about figuring out the problem and determining a solution than it is about explaining to the computer what I’m thinking about: that’s how it appears to me, cognitively – of course, in reality, I’m creating algorithms, thinking about efficiency and so on… I think… at least, that’s how it ends up… but in terms of the experience I think I’m having, it’s like sitting down and having a (fairly one-way) conversation with an old friend.
I don’t know too many people who think about programming this way… but the ones who do tend to be very, very good at it. I’ve been programming one device or another since I was 13 years old: I can state fairly unequivocally that learning to program at that age did restructure my brain in some way, such that either programmer-thought seems to apply itself to everything I do, or the universe appears to work in such a way that it can (largely) be thought of as a series of algorithms (or both). Appreciating randomness in reality comes later: without that, one ends up pretty dysfunctional… but I know just from how it feels to me that when I’m explaining something to a person (this post I’m writing now, for example) I’m using the same bits of brain that I use when I program.
In programming, I don’t see problems, algorithms, structures, methodology, conventions, or anything like that: those things are for people learning to program, just as remembering grammatical rules is for people who are still learning a spoken language and can’t yet speak it instinctively. Of course, my programs contain algorithms and structures and use certain methodologies and conventions… but that’s because those things tend to end up being very similar to the things I’d say to the machine anyway.
Do I break the rules? Probably – but in reality there are no rules: there are simply good ways to say a thing, and the rules for programming which have emerged over the decades are not much more than codifications of proper linguistic practice, just as “I before E except after C” may well be a rule in English, but someone good enough at the language doesn’t sit there thinking about it when he’s trying to spell “piece” — he can simply see a word and know whether it looks wrong.
The intelligence thing mentioned in another reply to this article definitely comes into it. Without trying to appear arrogant, I feel able to state at this point – having interacted with a lot of programmers over the thirty-something years I’ve been doing this – that people with an IQ under 150 suffer in terms of their ability to program as a natural and instinctive function of the way they think. At a base level, of course intelligence is involved: programming is, above all, a logic problem, and if you’re not too smart then you’re not going to be very good at it. You find such people fighting the computer rather than sitting there, chatting naturally to it, with the logical bits of what they do having, as it were, been taken care of automatically as a function of their ability simply to see the answer as obvious.
I’m probably going to get a lot of abuse for the previous paragraph, but I stand by it nevertheless: to reiterate: I’m not writing from some form of bias, but purely from experience in dealing with other programmers over the years. Younger programmers are going to hate everything I’ve just written: to them it will all appear to be nonsense… but it isn’t.
@Dan, you may want to check the range on that IQ thing. IQ is (by definition) centered around 100 with a Std Dev of 15.
http://en.wikipedia.org/wiki/Intelligence_quotient
You say: “people with an IQ under 150 suffer in terms of their ability to program as a natural and instinctive function of the way they think”.
What you’re saying is that more than 99.9% of the population basically has limited capacity for programming. While I agree that IQ is definitely a factor, I would place the bar much closer to 105 or 110.
As an aside, as we move into an age of self-automation I really want to push that bar even lower. We should be enabling people with IQs of 80 to perform some form of basic programming.
I know that 100 is the average. I put 150 at the lower limit on purpose: I think it’s true… from what I’ve seen, under that level, people can program but it’s always a forced interaction for them. Just experience talking… but there’s a difference in thinking which seems to split at about that point. I’m not saying that people with lower IQs can’t program (although I think 105 is amazingly generous) – I’m saying that it doesn’t work the same way.
…and yes: 99.9% of the population have limited capacity for programming. One in a thousand makes a great programmer. That sounds about right to me.
@Dan. Let me guess, you believe your IQ is 150+ ?
BTW. How do you know the IQ of all these inferior programmers?
Yes – my IQ is way above 150 (tested). As far as the IQ of others, it’s based on observation of those who did actually know their IQs as a result of their also having been tested. You may remember that, back in the ’70s and ’80s, there was a trend for parents to have their kids’ IQ scores measured…
Sigh… lousy programmer here. Only 127, dang!
@Dan, this rings a lot of bells with me! I came to programming “late” (in college back in 1977), but the mental stance you write about is very familiar to me, and I’ve experienced similar views to yours regarding other programmers. For whatever it’s worth, I supposedly have a very high IQ, too (I was told it’s in excess of 160).
I’m fluent in about a dozen programming languages, very experienced in about twenty others, and I can find my way around a lot more. On the flip side, other than English I seem to have no facility with human languages (this may be due to a serious congenital hearing defect).
I think 99.9% might be a bit high, but it’s definitely the case that not everyone can program at all, and very few can do it well.
It’s definitely an interesting thing. Over the years I’ve formed a hypothesis that you’re born either able to learn to program or not able to learn: something about some deep structure in the brain — it’s difficult to tell whether it’s a built-in thing or something environmental which gets installed before you reach five years old (at which point, apparently, one’s mental processes become somewhat less fluid and begin to fix themselves into forms which persist throughout one’s life: the “basic religion”, if you like, with which one views the universe).
From that comes an even more fascinating (if slightly silly) thing: if programming is indeed hard-wired into us (or not) then it implies that we’ve evolved to be able to program, on the understanding that at some point in our evolution as a species, computers would exist… and of course, from there, it becomes steadily more ridiculous!
I suspect the natural ability to program (and I agree completely that such a thing exists – when I encountered it for the first time, late in college, it was like I’d discovered sex + chocolate + rock and roll + drugs all at once; took to it like a duck to water, is my point) is based more on mental precision, the ability to handle detail, and non-linear thinking.
That last may be a key one. Programmers have to think about what the user (or other input) can and might do. We naturally explore the “phase space” of possible futures (within the constraint of the application). We then provide instructions for as many future contingencies as we can. We need to consider LMNOP when others are working through ABCDE.
Non-linear thinking is a special case of abstract thinking, which is a vital skill to programmers. As you say, we naturally “speak” our instructions in terms of data structures and class hierarchies, but those are just the reifications of our mental abstractions. It is the ability to think in these terms that marks a true programmer. For undergraduates, their first data structures class is often one place that separates the wheat from the chaff. Many drop out at that point, or find it their most challenging class. Naturals eat it up like candy.
As I write this, it occurs to me that programmers spend a lot of time naming things. I suspect we make up more names in a working day than most people do in a year (or more). I wonder if that’s a key ability as well. Some people have a knack for names. Some don’t (you know the one with that guy in it where they’re on that boat and they do that thing).
In the end I suspect our natural human drive for precision and understanding (aka science) and detail drives our programming. The genetic programming was long in place and getting here was inevitable.
Consider where we might be had we been more fascinated with cell biology and genetics than in clock mechanisms and electricity. Programs might be things we grew in a Petri dish.
Well, that is nothing new; it is all the same thing.
It is about the brain: it has many parts that are used to solve certain things, and that’s it. No magic or anything else… That is just the way the brain works, and it will take more time before humans understand how it works.
Things like brain development are very important when it comes to the education of young people – children and adults, even seniors.
We are all unique, and that distinguishes us from each other, but in the same way we are all limited by our genes, education, climatic conditions and so on…
One day we will reach our maximum point, and the only solution in that situation is very hard to know.
There are some other things, but this is very important and hard to understand. Sometimes I even ask myself whether it is OK to even try to solve problems like that. It is not like ax + b = 0; it is way more difficult to even understand what it could be…
I concur. I’ve long been a proponent of non-linear thinking – or “lateral thinking”, as I’ve always thought of it: the tests I give to programmers in interviews require it to find the simple solutions I’m looking for. To me, it’s part of the art form that programming is: you have to be able to do something more than plod away methodically. From lateral thinking come inspiration and insight, and to be a great programmer (or anything else) you need that ability. You may be very interested in the works of Edward de Bono, who coined the term “lateral thinking” and has spent most of his life promoting the concept and writing books about it.
Hi,
May I help with this project in some fashion?
Here is a bit of my background:
-I am 30 years old.
-I am a Computer Scientist.
-I began programming when I was about twelve.
-I started using computers at the age of three via a command line interface on the Commodore 64.
-My parents are both in psychology.
-My father claims he taught me German by the age of three, but I had forgotten it all by the age of four. I have no memory of this.
-My first programming position was at the age of 16.
I have spent a great deal of personal mental energy trying to discover what makes my thought process different. I’ve come to a few insights regarding this.
-The first is that you can teach anyone to program, but you cannot teach anyone to program effectively.
-Programming is less about the problem and more about the framing of the problem. People who cannot organize the data in their mind cannot solve the problem.
-Programming is never about having the answer. It’s about having the means to find the answer.
-Good programmers have good “minutia memory.” They remember the trees but forget where the forest is. Great programmers have good “gestalt memory.” They remember the forest, but ignore the trees. Legendary programmers have no memory whatsoever – at least not in a reliable sense. Their memory has turned to creativity. They invent the problem, the solution, the syntax, and the grammar on the fly.
-The worse someone’s memory is, the better they are at solving problems. The reason for this, I hypothesize, is because you can function in two states: rote and discovery. There is a bit of rote in discovery, and there is a bit of discovery in rote, but you must prefer one or the other. You either access solutions or calculate them. People who are bad at rote, for reasons of survival, become very good at discovery. This is not to say that you can’t be good at both, either. But in any one instance, you’re either rote or discovery. If you discover more than memorize, of course you’d get better at that.
-Your personality changes as you program. Significantly. If you program 8 hours a day for two weeks, your personality changes much in the same way as if you were writing a story 8 hours a day for two weeks. You take on the traits of your work. If you do this for long enough, I believe it becomes permanent.
-People see developers as argumentative, unforgiving, and cold. Developers see developers as argumentative, unforgiving, and cold. Developers see themselves as rational, just, and even. None of this is true. Developers are deluded into thinking they are superior by the nature of their work, so they require stronger proof when incorrect. They are not argumentative; they are seeking, just like everyone else. They are not rational; they’re just as flawed as everyone else. They are not unforgiving; they are just idealists… but they are not just, either, because they hold their ideals to a higher degree of confidence and value. And they’re not cold or even. They’re just calculated when seeking an answer.
-Introverted iNtuitives will stick with programming more than ENs, ESs, or ISs.
-Almost every single person who programs long enough will morph towards being an INTP or INTJ. There are some successful INFPs, but they are not understood by their peers and do not do quite as well in the long run. Extroverts exist, but they are found to be difficult to work with. Ss can’t program – sorry, I haven’t met one who could. Not a single one. They make great bug testers, though.
-Most programmers hate their jobs after a while. They start it to solve puzzles and build things. As you progress in the field, you get bored coding the same type of thing over and over. The number of “interesting” problems that come by your desk gets smaller and smaller. Yet expectations continue to be that you write programs. You’ll see younger programmers call themselves, “Developers” or even “Programmers” with a sense of pride. A few years later, they will change their own title to “Software Engineer” or “(Blank) Software Specialist”. A few years after that, they’ll start calling themselves something like “Computer Scientists”… even if they don’t have a degree. This is not a universal rule, just a trend. It’s an ego thing… they know their job is simple to them and they want to feel pride. (Note: I do this)
There are a ton of experiments I hope you can conduct. I want to know more about how a developer’s brain works. If you need anything (site construction, data analysis and storage, or even just a case study), let me know. I can’t emphasize enough how interested I am in your work.
–Collin
I’d like to help, too, if you require further subjects. I find this concept quite fascinating: it ties in very closely to what I’ve been saying for years.
I think it can be applied to any number of other things, too: for example, playing music – especially improvised music – is similar: if I program or if I play music, I can’t say exactly what my brain is doing while I’m doing it, but I can tell you that I’m not thinking either about what the code should look like or which notes are going to come out: in either case, it’s a lot more abstract than that, and 99% of it is taking place independently of either the computer or the musical instrument, as the case may be.
“Good programmers have good ‘minutiae memory’. They remember the trees but forget where the forest is. Great programmers have good ‘gestalt memory’. They remember the forest but ignore the trees. Legendary programmers have no memory whatsoever – at least not in a reliable sense. Their memory has turned to creativity. They invent the problem, the solution, the syntax, and the grammar on the fly.”
Yes. Yes! That’s it!
This reminds me of the way I program sometimes. Sometimes I simply write something without thinking about it, and I believe that if I had thought about it, I wouldn’t have been able to write what I just wrote.
It’s funny that sometimes I ask myself: “How did I just write this?”
And the funniest thing is that if I overthink it, I sometimes conclude that I did it wrong and end up rewriting it incorrectly, only to revert later to what I had initially written without thinking.
Yes – I’ve done that, too. The gestalt stuff is usually right: something about how one didn’t second-guess oneself while writing it, so all that annoying interference from the conscious mind doesn’t get in the way of one’s instinct…
Thank you Collin. I love the way you write about programming. When I put my brain back together after this project, I might well take up your offer of help for the next… whatever that might be!
I’ve given over a hundred job interviews for developers in the last 12 years, and have always asked at what age the candidate started coding. I’ve also given them the same test, which is basically a test of whether they can tell me what some pseudo code does.
There seems to be this magical age range between 8 and 12: if they started before 13, they can do the test in under 20 minutes. If they started after, it takes them over 30 minutes. In fact, in those 12 years, only two people have taken between 20 and 30 minutes, and no-one who started coding before 13 has taken over 20 minutes. The test is excellent at predicting raw coding skill. (Of course it doesn’t test for all the other facets that make up a good hire.)
There’s definitely something happening at this age range. It’s great to see some formal work being done on it.
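To give a flavour of what I mean (this is not my actual test – just an invented example, written here in C++ rather than pseudo code), a question of roughly the right shape might be: “without running it, what does this function return for n = 1234?”

```cpp
// Invented illustration only: trace the loop by hand.
// For n = 1234 it returns 4321 -- it reverses the decimal digits,
// peeling the last digit off n and appending it to r each time round.
int mystery(int n) {
    int r = 0;
    while (n > 0) {
        r = r * 10 + n % 10;  // append n's last digit to r
        n /= 10;              // drop n's last digit
    }
    return r;
}
```

Someone who started coding young tends to simulate a loop like this almost without effort; that difference in fluency seems to be what the timing picks up.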
Funnily enough, I started looking at the language aspect because I wondered whether there was any scientific proof for Emma Mulqueeny’s argument that “year 8 is too late” to start programming (her blog on that is here: http://mulqueeny.wordpress.com/2011/08/10/year-8-is-too-late/). Her focus was on the gender gap – I had read about “critical periods” in language and wondered whether that would apply to coding (as I read more, I became aware that the idea of critical periods is a bit shakier than I had realised – though the developing brain is more receptive to certain kinds of learning at certain times). I do think that there’s a good opportunity to research the optimum time to learn to code – hopefully someone will do that.
Ah, this old hokum again. I first heard it in high school, when some of my fellow nerds felt they should be given language credit for learning Pascal.
What a load of hogwash. Programming “languages”, and the algorithms created with them, are simply abstractions of mathematics. You’re no more bilingual-brained than your math professor. The fact that groups A and B exhibit elevated ability at task X says nothing whatsoever about the mechanism by which A and B elevated their abilities.
As a young adult programmer I can say: this is cool. I never even thought of learning a high-level programming language as being bilingual. Technically, young adult programmers might know more than two languages. You learn English, then a programming language, and if it’s your career you learn 5-6 more over very short spans of time. So young adult programmers might know 6 or 7 languages.
Computers don’t understand programming languages. They understand bits. The only reason programming languages exist is to make the creation and maintenance of programs easier for people. That is, they exist so that it is easier for one programmer to communicate the intent of his program to another programmer. Thus, they are “human” languages by definition.
I think there is an inherent limitation in this whole “language” / programming paradigm. I think this issue is connected to a limited understanding of computer programming.
Computer “Languages” are tools for describing the data transformations. The key element here is that the languages are “tools”. They are very complex, highly abstract tools, but they are just tools.
A computer language is a tool for transforming data in the same way that a Clarinet is a tool for making music. Playing from sheet music is technically the transformation of one stream of data (notes on paper) into another stream of data (sound waves). In fact, we long ago trained computers to perform exactly this type of transformation.
The primary difference here is that the Clarinet is a rather specific & concrete tool, while the computer is a very generic and abstract tool.
If we are trying to measure linguistic capability by attempting to learn “Java” or “Python”, then we should equally be able to measure these capabilities by learning “Sheet Music” or “Mathematical Proofs”.
In fact, I have heard multiple pieces of anecdotal evidence for “Martial Artists and Musicians” being “naturally skilled” at programming. Many of the earliest programmers were in fact math profs and math nerds (i.e.: engineers).
I would love to see more research into linguistic capacities, but I think the line between “computer languages” and “human languages” is far too narrow to really be of any use. There is something operating at a higher level here, it’s greater than just “speaking java” and “speaking french”.
I think we need to point the research at this much bigger target.
I think it works on two levels: on one, a programming language is a tool used to solve a problem. But on another, it’s a psychological paradigm which functions in much the same way as a natural language: lifelong programmers develop a tendency to look at everything in their lives as a series of algorithms, and so on.
Programming has little or nothing to do with computer languages (see Dijkstra: “A Method of Programming”) – it’s all to do with mental process and how problems organize themselves in a programmer’s mind: by the time you get to the language and actually typing in a program, the work is already done: working with the computer is merely a method of taking one’s pontifications and formalizing them (and, of course, making concessions to the machine).
But the “art form”, if you like, is utterly mental. The same, as you point out, can be said of martial arts: the true art is in the mind, and the martial art itself is simply the physical manifestation of it. And since we think in language, it’s reasonable to draw a parallel between any conscious thought process and the language which best describes it: if that should happen to be a computer language, then so be it, but I have a feeling it’s a lot closer than you think.
You might be interested in these articles
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3449320/
http://psycnet.apa.org/journals/xhp/35/2/565/
(Can’t find a free version of the second, sorry.)
Very cool — thanks! I find it excellent that people are researching things like this – not least because at some point, knowing the answers to some of these questions is going to help us tremendously when technology reaches a level at which we can begin to contemplate building self-aware machines…
I am pretty sure that if you ran this test with non-English speakers, you would find that most programmers speak, or at least know, two languages: their own language and English.
This is because you have to know English in order to program; there is no way around it.
Now you have a bigger problem 😀
Cheers
I do not program for a living; however, I have thought a lot about how language works.
Spoken Language uses verbally represented concepts.
Concepts are the link between Metaphysics: what exists, and Epistemology: our mental models of what exists.
Written language replaces the spoken sounds with images of characters which when combined and spoken duplicate the sounds of the words (concepts). Hence writing is sometimes referred to as speech made visible.
Computer “commands” are a series of concepts chained together.
This is similar to the way a sentence would be arranged in the spoken or written languages.
In spoken language the speaker is experiencing an internal stream of multi-sensory data, from which he is essentially reporting. His experiences are unique and one of a kind. The speech is serial and transitory, lasting as long as the listener can hold the words in his short-term memory.
Written language of course is longer lasting and can be reviewed. As an aside … This is the time binding effect of writing. Writing is responsible for the amazing exponential learning ability of mankind. We are no longer limited to our own learning opportunities offered in our limited and puny ability to experience life.
Back to the topic at hand… The listener, on the other hand, has a harder task. He is receiving the spoken stream and must hear, re-assemble, and decode each word, then guess/create the meaning intended by the speaker, and then he/she must decide how to convert the speech back into the sensory streams which the speaker was referring to as he spoke. Since the only clues he has as he re-assembles the words are the particular words chosen by the speaker, much of the task of assembling the raw meaning is guesswork. This is because many words have multiple and assorted meanings.
In any case, after this decoding, he still has to re-code them back into his own sensory stream of experiences, and decode the meanings as it refers to himself and his life time of experiences. All this has to happen before he can then assemble his own response. This all has to happen in a short period of time so the communication can continue.
Computer commands are very different, because the individual commands use carefully defined, unambiguous definitions, similar to numbers, which can only be understood in one way.
Spoken Languages on the other hand, require contextual positioning and may even need additional contextual information, in order to begin to understand the possible ranges of raw meanings.
So called higher level computer “languages”, were designed to talk to a compiler, which will in turn decode and re-write the commands for the machine software to understand, and then implement directly to the hardware.
So I guess you cannot really call the computer instructions a language. The computer instructions are special commands, which instruct the machines in such a precise way as to never be misunderstood.
If languages are a tool for communicating the details of existence – complex and ambiguous ideas which require contextual adjustment and supplementation – then computers cannot speak languages. Compared to the number of words, and the degree of ambiguity, which people can accept and deal with, computers can handle but a few commands, and those must be unambiguous in order for the computer to execute them.
Computer instructions, then, are similar to spoken languages, because the same words we use in our language can be used to direct the computer.
Spoken languages however, all originate from our worldly experiences which we have personally converted into conceptual knowledge. These concepts are remembered and integrated along with their personal meanings, the sum of which is integrated as our total knowledge base. This becomes the predictive wisdom we use to function in the world every day.
But the commands we send to the computer are composed of only those precise concepts, which the computer has been taught to execute, which define the precise actions we wish the computer to perform.
The degree of ambiguity allowed in a computer command is zero. There can be no misunderstanding.
Mankind’s spoken languages, however, contain a high degree of ambiguity. This useful quirk morphs into the richness of our language. Oddly enough, that ambiguity is caused by my inability to compare and match my life’s experiences and meanings with yours. The variations which result from the differences in my understanding of your speech become the differences which make a difference, and allow my understanding to both widen and narrow the meanings you intended. And so perhaps your lemons become my lemonade.
Does a rose by any other name ….
Useful knowledge is about prediction …. it’s what brains do …
That’s an interesting take on it: I’d like to comment on two bits:
Firstly: computer languages are time binding more than you’d think: this is because both hardware and software evolve over time. If we now look back at stuff which was written in the ’50s, it’s like reading Old English — but it’s useful to learn that stuff in the same way that for a linguist, it’s useful to learn dead or archaic languages. The concepts involved in archaic programming reflect the capacity of the machines around at the time, and they also (and this is where it’s important) reflect the mindsets of the programmers working at that time — to a linguist, working in terms of centuries, all this might seem arbitrary, but from the point of view of the evolution of computer languages, it’s just as relevant, since the evolution of programming paradigms has happened at a vastly accelerated rate in comparison to the evolution of spoken language: I can tell you that the concepts we use in programming today differ vastly from those used even twenty years ago: a thousand years in the evolution of the English language has been compressed, then, into fifty years in the evolution of programming.
Secondly: It’s important to distinguish between what gets typed into the computer and what goes on in the programmer’s mind. A computer system is a reflection of a real-world need, and that real-world need can be (and usually is) every bit as complex as any concept represented in spoken language. Programming is not about computer languages: it’s about conceptualizing any given real-world problem in terms of the processes which need to take place to solve it: the point at which one actually begins to generate a program is, in fact, the end-point of the process, not the start: by the time the program begins to be written, the programmer has already done all the work in terms of concept, algorithmic structure, design and so forth. Forget about logic, math, and all that: that’s merely a method of adjusting the real-world concepts with which the programmer has to deal into a representation which the computer can handle.
There is, in opposition to the way non-programmers seem to believe that it works, a great deal of ambiguity involved in this: there are many ways to solve a problem: to state it simplistically, the programmer will look for the most elegant and efficient solution, and sometimes, finding one requires a level of eloquence of thought not normally found in any thought process or language. The richness you mention is there in the programming environment: it simply takes a form which is unrecognizable to anyone except a programmer.
To state it rather less simplistically: programming is a true art form, but it has a limited audience (and it’s also an art form with right and wrong answers!): to appreciate the art and the beauty in a given algorithm, one must be able to generate such an algorithm oneself – in other words, the “true” language being used for the thought process is not the computer language: it’s a think-only language with no spoken or written equivalent, and it is stunning in its richness. If I were to try to describe it to a non-programmer, I would probably compare it to the architectural design for a cathedral in more than three dimensions, expressed purely in terms of the way the sound echoes within. There would be no way to write down something like that, or to describe it verbally: in order to understand the thoughts of a person conceptualizing it, you’d have to be able to think that way. The language used to write the program, then, is a way to instruct the computer to reproduce the structures within the programmer’s mind, but to say that this language is the language the programmer uses to think is the same as saying that the computer understands the purpose of what it’s doing (or that a printing press understands the book it’s printing): both are fallacious. Another programmer will be able to understand what the original programmer was thinking by working backwards from the code he wrote and imagining what it does: with a bit of luck, he’ll be able to re-form the original ten-dimensional think-only language in his own head.
As such, it would be a mistake to believe that the complexity of linguistic thought entertained by a programmer is limited by the number of commands a computer can understand: it’s not about that; it’s about the number and the structure of thoughts a programmer can have; in programming, as with any part of life, that number is unlimited — the actual computer language which eventually represents this is necessarily simple, but it’s not the commands which count: it’s the way the whole is phrased – the sentences and paragraphs, if you like.
People seem to be mistaken about the purpose of the study. There are a lot of “people who are good at coding are/are not also good at learning other spoken languages” comments. There are also a lot of links being drawn between the age at which a person starts coding, and how good a coder they will eventually be.
Both of these topics are interesting in the context of comparing spoken and programming languages.
However, for this study specifically, the relevant question is:
– Research has suggested that bilingual people perform faster than monolingual people at tasks requiring executive control.
– Does this hold true for monolingual programmers?
Hmmm… I don’t think you’ll find that programmers, bilingual or otherwise, are much interested in executive control: in fact, most of us spend a non-negligible amount of time trying to avoid it…!!
“…perform faster at…tasks requiring executive control – that is, tasks involving the ability to pay attention to important information and ignore irrelevant information”
Why would programmers (or anyone) want to avoid this? The answer to that question is largely irrelevant, though: my point above stands whether you think programmers should be able to parse important information or not.
The study is still based around a comparison of performance between monolinguals and bilinguals, and a hypothesis that the same would hold true in the case of a programming language.
To be clear, I’m not saying anyone in this thread is wrong. Just wanted to point out that a lot of people here are drawing conclusions about programming languages (and language in general) that have no basis in this study.
That’s what architecture and scale are for.
Hmmm… Perhaps I parsed “executive control” incorrectly: I must say, though, having worked on any number of very large systems, that to a programmer who’s had to find enough tiny little show-stopping bugs, it becomes apparent after a while that there’s no such thing as irrelevant information: everything is as important as everything else. Of course, the trick is learning to hold everything in your head without ending up assigning priorities to one thing over another based on what the bit of you which still thinks like a person believes is “an important bit of the system” or what have you… It’d be interesting to know whether this flattening of priorities increases or decreases one’s ability to make these types of “executive” decisions… it certainly makes it easier to appreciate the other side of an argument, but one must be careful that it doesn’t lead to a state in which one ends up being unable to decide what to do.
I have been developing since the age of 9. Started with BASIC, Pascal, C++, and the list goes on to today. I will say, the only language I learned was truth. That is what all computer-based languages have in common. The real key is applying that to the real world in some meaningful way.
When I applied to college back in 1983, one of the questions was “How many languages other than English are you fluent in?” I said “3” – which probably helped me get into college. What it did not ask was what those languages were. BASIC, 6502 Assembler, and Pascal would have been my answers.
I have read (here: http://www.cambridge.org/gb/knowledge/isbn/item1173982/?site_locale=en_GB) that during the 80s, some American universities did accept programming languages for their language requirement – but I haven’t found any more accounts of this. I’d like to though!
Well, this is my comment to you, and sorry, I don’t have much time right now to do other things, like read all of this blog, and so on.
In the winter I will have time to write a book about C++, so there is a short hint for you: project abecedarian.
And in my opinion, when you do something you are using your brain centres; if they don’t have the ability to process that thing, you are mostly unable to do what is required of you.
If you have done things before the development of your brain is completed, there is a way that you can do them; otherwise you are at a real disadvantage in that field…
Some things can help later, but nothing major…
It is an interesting field for discussion, and yes, it might even become genetic, so if a large number of people don’t do something, the kids of their kids’ generation might not be able to do it at all…
In high school I learned at least 3 programming languages and some human languages… and yes, chemistry, maths, and things like that have languages as well…
Hope you like it; it might be useful to people who have the time to conduct real experiments that will help people improve their ways of learning…
In that way it might be a similar case with me, but I spoke German and English before 18, and now I also speak some Spanish.
Synonyms and homonyms make a spoken language richer, and make a programming language confusing. I could see how people who are good at any one language (e.g. writers, public speakers) could be good at executive decisions, as they are good at managing lingual uncertainty (e.g. choosing between synonyms). Programmers are the opposite, as they use much more clearly defined terms, so there is much less ambiguity in the thought process.
You should look into adopting psycholinguistic methodologies if you want to pursue this line of research using previous work done on bilingualism. If you can find a way to adapt a masked-priming task or a visual-world paradigm task, you could test whether or not programmers proficient in a certain programming language experience inhibitory and/or facilitatory cross-linguistic effects (I use the term “cross-linguistic” with reservations). If found, it would have interesting implications for the way we conceptualize language. I don’t think it would speak for programming-language being language, per se (after all what is a language?), but rather it would suggest that we need to seriously reconsider our almost dogmatic assumption that language is something special and not just a cognitive ability like any other. That said, I have no idea how you might proceed to adapt these paradigms to investigate such a question. In any case, I think it would be worth considering.
Reblogged this on Laphiel.
I test computer software for a living and believe I can tell whether the code has been written merely to solve an issue or as an attempt to communicate. It usually shows in the success of the programming.
As a nearly 40 year old effectively monolingual man (I only know a smattering of words and phrases from a few other languages) I’m getting ready to learn GIS (masters degree), which will invariably entail learning some programming. Here’s to hoping I’m better at Python or other programming languages and “get it” compared to the years of French I took which never took hold, otherwise my plan and desire will not last long.
I am the furthest thing from a computer nerd, and I got through the entire entry and was interested 🙂 It would be great to see this replicated at a larger scale with more controls. As a parent, the results interest me.
Reblogged this on STEM – ROBOTICS EDUCATION.
Well, I would be careful about calling it a “bilingual brain.” Knowing programming languages may help one understand language structure and language rules better; that does not make one bilingual. It may help with conjugation and diagramming sentences in foreign languages, but you still need to learn the language first. How much it helps, I don’t know. Some of the best programmers I have seen had very poor grammatical skills. As for being bilingual, I think having a music background is best. Music affects listening, reading, comprehension, the ability to focus and be distracted at the same time, etc.
Personal opinion: this makes for a good project in statistics or learning, but doesn’t hold any water.
Interesting study. I have never thought of it that way! For the past 3 years, I have been working with the top developers from https://www.staff.com. I must say, though, these guys are some of the most intelligent people I know. As they say, learning a new language increases your IQ. I must consider doing that too.
..Then I should enter the gates of the university again soon to become a computer programmer. 🙂
Thank you for sharing the results of the study!
Great piece of research…so what next? Are you going to keep doing research in this field when you finish your MSc / now that you’ve finished your MSc?
WOW! is all I will say… I’m bilingual. English is my primary language; ASL is my second language. Oh! And tactile ASL signing… And, well, Braille (I’m in the process of learning it) is not a language but a channel for the blind to communicate in a language… None of this has anything to do with programming… I know a little about this, as we learned it in school: it is symbols and formulas, and I don’t consider it a language for communicating with the people around you, but with a computer. Just my view. Here’s why:
http://www.kodiakmylittlegrizzly.com
Reblogged this on Oyia Brown.
This is fascinating, and after my experience in my recent “Artificial Minds” class, I’m going to have to reblog this on my little blog. I actually think that it makes perfect sense, and my struggles with PROLOG seem to be a proof by failure… 😦 Hope for more like this from you in the future!
Reblogged this on Cowboys Don't Swim and commented:
The computer science nut in me loves this stuff.
Fascinating comments. I bombed Fortran back in the 80’s, dropping keypunched card bundles into a mainframe. Thank God (or whoever is responsible) for the GUI. English (in particular) is such an amalgam of languages, with mutable rules for spelling, syntax, grammar. Is programming like that?
This may sound like a stupid question, but would you know if reading something like a musical score has the same effect? Very interesting, nice post.
Very interesting topic. Although I started programming about 7 or 8 years ago, if it helps you I was fluent before in 3 languages 🙂
The activity of debugging closely resembles the description of “executive control” tasks, in that the developer is potentially awash in irrelevant data (the value of every variable on every line of the program is available) that needs to be heavily filtered to find the relevant nuggets of information (which variable has the wrong value, and why). Debugging could be throwing the results off by training programmers’ brains to be more effective at filtering; more experienced developers generally have better debugging (filtering) skills. If you go forward with this line of research, it might be interesting to determine whether that is indeed the case, so its effects can be filtered from the results.
Very nice project. This reminds me of a remark by Alan Turing about mathematical theorems. He says that the solution is thought of first, and in general that is true; afterwards, the problem is to show the solution. It’s possible that the same occurs in programming. Congratulations.
I would agree with you very much that learning a programming language engages the same brain areas as learning a new human language.
However, and perhaps more importantly, learning a PROGRAMMING language is a DISCIPLINE.
To write a decent program, you have to discipline your brain *far* more than you would need with any language (because, let’s face it, other people are forgiving but compilers or computers are not).
Do you have a measure for learning DISCIPLINE ?
That’s such a great point. I would argue (from experience) that learning to program slowly changes the way your brain works and instills the discipline that you required in order to learn to program… of course, there’s always that woolly bit in the first couple of years when you’re not quite there yet…!!
For that measurement thing, with older people, in my opinion it would be OK to use a CAT scan or something like that…
For people who still haven’t finished their brain development it would be different, but not too much…
I think there are scientific methods to do that nowadays!
For what it’s worth, I’ve seen some of Dan Sutton’s work and he can’t code for toffee.
It would be necessary to assess the activity of Broca’s region in programmers while they do their magic. Programming sounds like an unnatural language to me, so there should be no correlation between natural and unnatural language learning. Empirical evidence is required from EEG and fMRI.
I haven’t read everything on this site yet, having only just found it, but I am pleased to find that someone has looked into this subject. What fascinates me, as someone who programs in several languages (well, at least two at the moment), is how easily I switch between the two, even when doing similar or identical things in both languages almost simultaneously, and how seldom I make mistakes of the muddling-up-the-languages sort. Although I speak a bit of several languages other than my main language, I am certainly not bilingual – but programming languages feel instinctively as if they are processed in the brain in a very similar way to spoken and sign languages, even if I do not know exactly how that is. I often wonder whether, in the same way that computers use compilers and interpreters to convert programming languages into a common machine format, the brain doesn’t do the same with spoken/sign languages – possibly the ‘real’ language of thought.
There are many similarities between human languages and computer languages (both have syntax and semantics, both express concepts), but ultimately I think it may be a mistake to conflate them too much.
Human languages all evolved from spoken languages and often tend to be vague and ambiguous (“eats shoots and leaves”). Clear meaning can be communicated despite hugely flawed grammar or spelling, thanks to massive redundancy and cultural context.
None of those things are true with programming languages, which first of all are entirely textual or symbolic. Generally speaking, I’d say programming languages have much more in common with mathematical languages than with human languages.
OK, I don’t agree 100%. I have done some Visual Basic 6 and .NET, and Delphi and C++ as well, and I would say that in some cases they are also ambiguous. In spoken languages that is a good thing, but in programming languages it is not.
Can you cite an example of such ambiguity in programming languages? Generally, if the code is ambiguous, it won’t compile, because the compiler can’t handle ambiguity.
Yes, I can. In C++ before C++11 there was a problem when you created a matrix with vectors — now it is fixed — and there can be a problem with the ways of declaring things not being the same. When you work with Delphi it is :=, also known as the way they put the right side into the left one, but then you get the case of =:, which has the same meaning and is out there too. Visual Basic 6 — that is a story of its own. And yes, for C++ there are situations where you cannot be sure what the result is; I think it is called a sequence point or something. Sorry for my English — on one compiler version you get one result, on another version who knows. No way to tell for sure!
I’m not certain I would consider those ambiguities, although I think I take your point. For a human looking at a bit of code, the human might find it ambiguous if they didn’t know exactly what compiler and version the code was written for.
I think that, for me, for code to be ambiguous, the same compiler would have to treat the same code as generating two different outputs. Compiled on Monday, Wednesday and Friday, you get one output, compiled on Tuesday, Thursday and the weekends, you get another. (Or do whatever splits you prefer: day/night, before/after lunch, whatever.)
I can’t say I’ve ever encountered a compiler that would treat a given bit of code as ambiguous and NOT generate an error.
☆ ☸☃☞ Merry Christmas! ☜ ★☼☽