Then I changed fields. Long, mostly unrelated story.
But I got to wondering why I'm no longer particularly interested in cognition, AI, or related subfields ('cogsci' for short). I think it's a combination of several factors.
- I had lousy teachers for my introductory psychology and neuroscience classes, and a lousy research experience in cognitive linguistics.
This is not cogsci's fault, but it is a plain fact of my experience and a large part of the reason why I chose to change majors.
- The field is immature; it hasn't had its Watson and Crick.
I consider biology a mature field because so much of it comes down to simply "DNA makes RNA, RNA makes proteins, proteins build the cell's traits and perform its behaviors (and affect DNA)". This was all elucidated in its basic form many years ago. It provides a framework for understanding new experimental results and generating new questions. It is a Coherent Overarching Theory. Cogsci does not appear to have a Coherent Overarching Theory yet, though it seems to have plenty of contenders. I should note that I would expect the Theory Of Cogsci to be significantly more complex than the Theory Of Biology, and to take much longer to discover.
(I think this point actually applies to basically all of the brain-related fields, although you could argue that basic neuroscience had its Watson-and-Crick moment when people figured out the structure and function of the neuron. I suppose it's also hard to reconcile my assertion that brain science is immature with the current, legitimately amazing progress in brain-machine interfaces -- but I don't know how well we actually understand how they work, or how well we can distinguish them from magic. Anyone who actually follows the news in that area, care to enlighten me?)
- I'm too lazy to have philosophical opinions.
Perhaps in a few years I will have the energy and the intellectual horsepower to grapple with the philosophical side of cogsci. But as it is, I can only stomach it in the kind of small doses Douglas Hofstadter likes to sprinkle into his books about poetry. I have always had a hard time motivating myself to think hard about philosophical arguments, figure out their axioms, pick holes in them, and so on -- or, most of the time, to even figure out whether I actually agree with them or whether I'm just reading along. I'm much more comfortable playing with cells, or even with models of cells where the error is quantifiable.
There are enough cognitive/AI people among my acquaintances that I expect I'll hear some eloquent defenses -- and I'll be glad to hear them, I really will. I'm just happy the field is in such competent hands. If you put me in charge of AI, after all, we'd never discover anything.