Monday, September 29, 2008

Open Courseware is not the be-all and end-all

MIT's Open Courseware gets a lot of press. And a lot of it is well deserved -- it's fairly comprehensive, and does a good job managing the technical side of throwing vast amounts of material up on the internets (hooray for PDF). But it's not everything.

The past few days I've been stressing over various topics from my genetics class, mostly yeast tetrad analysis. Going to lecture hardly helps; for this class, it's been generally true that the more confusing topics are very poorly explained. Usually the textbook does a good job of explaining things, but it hardly covers yeast tetrads at all. And I have a test in two days.

(Yeast tetrad analysis is pretty interesting, by the way. I might write a post about it if I ever actually understand it.)

Turning to OCW didn't help, because more or less the same professors have been teaching the class for N years, so the lecture notes are exactly the same. I can get practice problems (old homeworks and exams), which are very useful once I actually understand something, but not in this case.

So I turned to Google. And lo and behold, googling {yeast tetrad analysis} brings up several universities' intro-genetics webpages about yeast tetrad analysis! With clear, coherent explanations and well-drawn diagrams! UC Berkeley (pdf), Indiana U (pdf), U of Saskatchewan, and U of Rochester, just to name a few.

(Even better, some of those are straight-up webpages. Better than everything being PDF -- though even all-PDF beats everything being .doc, which sometimes happens.)

I often hear, second or third hand, how people from other schools use MIT OCW all the time. And I'm not denying that OCW is awesome. But it's not perfect, and there are other places to go if the OCW explanation of a topic happens to be consistently lousy.

Thursday, August 21, 2008

To be intellectually uncomfortable

From one of Uncertain Principles' link dumps comes this piece from Inside Higher Ed: Tolerant Faculty, Intolerant Students. Apparently, despite the myth of the politically stubborn professor forcing his views on poor open-minded students, the truth is more like students haranguing each other and professors doing fairly well at not inappropriately expressing political views. (Well, at least in Georgia. The study might have turned out differently if done on CRAZY CALIFORNIAN COLLEGES!!!!1!11!one.)

(I should note that I have only been in actual political discourse once all year, and it was with a student on my hall who fervently supported Ron Paul. Perfectly friendly, civil discussion; I didn't enjoy it because I don't enjoy political discourse, period. I haven't seen any politics in classes, probably because I haven't taken any classes where it's even marginally relevant. Coulomb's Law? Politicize that, bitches.)

The thing that really struck me about this article, though, was one particular quote: universities are a place to go to feel uncomfortable intellectually. Obviously, the article means this with regard to one's political beliefs -- but it's applicable in a much larger sense.

When I encounter something I can't understand, something really confusing, I get anxious and upset. This happens a lot because I'm at a difficult school, and it's to be expected that I will encounter confusing things. But why should I get upset? This is an artifact of tying understanding to Success and Achievement (TM), rather than understanding for its own sake. Even though I went to one of those hippie elementary schools where there are no grades, I suppose I still have kind of a hang-up about 'succeeding' versus being way outside of my comfort zone.

I really admire people who respond to confusing things by first feeling humble, and then feeling happy that they have another puzzle to solve. People who get suspicious if it looks like they understand everything. It's not that I think such people never get frustrated, but that seems like a highly useful mindset for a scientist or intellectual, much more so than getting upset and frustrated. (Certainly, given the complexity of the real world, if you think you understand everything about X, you're almost certainly wrong.) This mindset should also make for a happier person overall. Apart from that, it feels like the right thing to do -- it appeals to the idealist in me.

Universities are a place to go to feel uncomfortable intellectually.

The world is a place to go to feel uncomfortable intellectually!

I'm going to try and embrace this mindset, embrace this mantra. A form of "intellectual asceticism", perhaps? Hopefully I can change my knee-jerk reaction to the difficult and the confusing.

Friday, August 15, 2008

New Look & Feel!

So I changed the blog template.

The main thing that bothered me about the old template was that the main text column, the most important part, was of fixed width. Which is annoying if you have a wide monitor. With this new template, go ahead and change the width of your browser window, and the Dendritic Arbor will change right along with you. We strive to provide a customizable reader experience. We strive to be adaptable, plastic, and dynamic, like actual dendritic arbors.

(It also bothered me that approximately 80,000,000,000 other people are using the old template. Then again, this is Blogger, what was I expecting, hmm?)

I chose this particular theme via the complex and deliberate heuristic of seeing which one looked the most like the one Larry Moran picked for Sandwalk. I've been reading his series on protein structure and admiring Sandwalk's look-and-feel. I guess this is the exact same theme as Sandwalk has, just with different colors.

By the way, the protein structure series is excellent and here are links to all its items. There isn't really a 'best' order to read them in, but I've put them in an order that sort of makes sense.

Evolution and Variation in Folded Proteins
Levels of Protein Structure
The Alpha Helix
Beta Strands and Beta Sheets
Loops and Turns
Examples of Protein Structure

Tuesday, August 12, 2008

Why I am a hard agnostic

I believe that the nature of deity is both unknown and unknowable (by humans).

As I understand it, "the nature of deity is unknown (and may or may not be knowable)" equals 'soft' or 'weak' agnosticism, and my position is 'hard' or 'strong' agnosticism. I was a soft agnostic up until relatively recently.

(Quick aside: that terminology bothers me because it's possible to be a strong weak agnostic -- i.e. you believe very strongly in the soft-agnostic position -- or a hard soft agnostic, or also a soft hard agnostic ("I'm pretty sure the nature of deity is unknowable as well as unknown"). Not sure what would be good alternative terms, though. Also, hard agnosticism != militant atheism.)

I've been fairly sure for many years that the nature of deity is unknown, what with implausible theologies and the power of science, etc etc, blah blah woof woof -- this argument has been made, and people have responded to it, a million times.

There are two possible arguments, I think, for why we cannot know the nature of deity (as opposed to merely "we don't know it, but we might figure it out"). One is the "perverse liar god" argument, and it's sort of tongue-in-cheek. If there's an omnipotent deity, then it could make itself unknowable, never mind why it would want to. It could even put on an understandable mask while leaving its true nature hidden.

The other possible argument is "fundamental limits of human understanding". The human brain may be the most amazingly complex awesome thing in the universe, capable of conceiving arbitrarily complex and abstract ideas... but is it capable of understanding something omnipotent, omnipresent, omni-everything, possibly vaster than the universe? I highly doubt it, given how hard it is to truly grasp (say) the sheer size of a galaxy. (And I do mean truly grasp. Saying "oh, it's so many frillion light years across" does not count.) A galaxy has got to be a lot smaller than an omni-deity.

You'll note I said "fundamental limits of human understanding", not "fundamental limits of any understanding". I don't doubt that an omnipotent deity could understand itself perfectly well. It's perfectly possible that there are forms of understanding out there that are orders of magnitude more powerful than our piddling little thoughts, or maybe even have a completely different basis -- call them "super-understanding". But given that humans are limited to human understanding, can we understand super-understanding? I doubt it.

(For a very readable, math-flavored analogy to the topic of super-understanding, see Scott Aaronson's fabulous essay Who Can Name the Bigger Number?, which you should read anyway. He discusses Turing machines and Super-(Super-Super-...)Turing machines, among many other fascinating topics.)

And by the way, I have high standards for what I'm willing to call understanding deity. I'm not willing to settle for a grammatical string of English words, or a bunch of math, or whatever, that describes deity perfectly. I am perfectly capable of 'understanding' (sneer quotes!) a sentence like "God is a spirit, infinite, eternal and unchangeable, in His being, wisdom, power, holiness, justice, goodness, and truth", in that I know what all its words mean and I can parse the sentence, because that is perfectly fine English. But can I fully appreciate the impact of a spirit, infinite, eternal and unchangeable, in its being, wisdom, power, etc? Can my body generate a sufficiently large visceral/emotional reaction to the import of that idea, or will it run out of neurotransmitters first? Can my humble heart and mind contain the infinite? I think not.

Now, since I'm feeling a bit Hofstadterian, try this one on for size: "God is a spirit, infinite, eternal and unchangeable, in His being, wisdom, power, holiness, justice, goodness, truth, abstraction, recursion, subtlety, and inscrutability." Can this new, improved sentence give us a true, visceral understanding of an omni-deity? It's certainly an improvement.

(God-sentence is from Chapter 7 of Anne of Green Gables. I love that book.)

Monday, August 11, 2008

The Right Way to Respond to Getting Hacked / Cracked

Wired: Boston Subway Officials Sue to Stop DefCon Talk on Fare Card Hacks

Three MIT students have figured out how to hack the fare card system used on the Boston-area public transit ("the T"), to ride for free, add money to a stored-value card, and other such things. They cracked both the RFID card and the magnetic-stripe card. Here is a PDF of their presentation.

They were planning to give a talk on their discoveries at DefCon, the big annual hacker/cracker convention. The Massachusetts Bay Transit Authority (MBTA), which runs the T, sued them to prevent them from giving the talk. The talk will not be given. The students' faculty advisor, Dr. Ron Rivest, is being given a hard time.

The MBTA is going about this exactly the wrong way! (Although its response is understandable.)

Security systems are only trustworthy if they are thoroughly tested in actual use -- by normal users, hackers, and crackers alike. No matter how hard the MBTA tries to hush up its security flaw, hushing it up will not make the flaw magically go away. The MBTA should fix it. Yes, that will cost money, but my sympathy is limited.

It is basically a given that many people will hear about this security flaw, whether or not there is a DefCon talk about it. Hello, Internet. But if people hear about this flaw only through underground/unofficial channels, what impression does that give them of the MBTA? It gives the impression that they either don't know about the flaw, or know about it and aren't doing anything about it. It gives the impression that the people who run all of Boston's public transit are a bunch of incompetents.

Moreover, by stage-whispering "Shh! Don't tell anyone about this!", they're also saying "Hey, this is significant! And leaves us very vulnerable!"

(By the way, if their goal is to decrease the number of people who hear about their security flaw, they have failed dramatically, because it's getting all over the news now. For example, I don't know anything about the presentations at DefCon, but now I know there's a security flaw in the subway system I use all the time, precisely because they're suing my classmates over it.)

What should the MBTA be doing? Shouting this as loudly as possible: "Yes, we have a security flaw! Thank you guys so much for pointing it out! We're working around the clock to fix it!" This would give the impression that the MBTA is run by intelligent people who can face reality instead of frantically trying to make it magically disappear. People would realize that the MBTA is serious about fare security.

The "Fare Security: SERIOUS BUSINESS" attitude would help decrease all kinds of subway sleaziness, including people who break fare security by the super-advanced hack of jumping over the fare gates. Not to mention things like littering, graffiti, and panhandling (none of which are huge problems on the T, I'm glad to say!). If I see someone fare-jumping, it makes me think the subway system is for shit anyway, so what does it matter if I stick gum to my seat?

And besides, the MBTA's anti-hacker attitude will just annoy crackers and make them more inclined to crack the T fares. Being realistic, and trumpeting increased security, will make crackers less inclined to attack the T.

Come on, MBTA. Have the grace to admit you've been hacked, instead of going into denial. Fix the vulnerability. Show us all you're serious about fare security. In fact, why don't you talk civilly to the people who hacked you? I'm sure they could help you build a better system.

...And, because I need some levity, here's a related episode in the adventures of Domo-kun. Hooray, cute pictures.

Sunday, August 10, 2008

Some programming resource reviews

Since I've been teaching myself to program again, and especially since I've been flirting with several different sources rather than settling into a steady relationship with any particular one, I thought I'd share my impressions.

So you know where I'm coming from: I took AP Computer Science AB (Java) in junior year of high school. Then in senior year we had a semi-experimental Scheme class based on the classic SICP (see below), but nowhere near as hardcore. Because Java and Scheme are so different, and because I forgot a lot of Java over the intervening summer, this amounted to me learning to program two separate times with very little cross-contamination.

After that, I didn't write so much as a line of code for my whole first year at MIT. (Yeah, I know -- how could I?!?, etc.) But somehow a lot of vague programming ideas seem to have stuck in my brain, making it much easier to (1) learn a new language, and (2) re-learn programming in general. So I'm somewhere between 'rank beginner' and 'person who wants to pick up a second language after their first CS class'. This will affect my assessment of resources.

>> Learning Python, Mark Lutz & David Ascher

...Comprehensive, I suppose. It's the O'Reilly book, which gives it lots of reputation points. It goes into a lot of detail about basically everything (I'm reading it online, but the dead-tree edition is quite hefty). Tables of built-in integer and string functions, that sort of thing. Even -- get this -- more than one chapter on getting Python to run on your system, choice of IDE, and how to run programs you write from a variety of situations. This makes it surprisingly useful as a reference, for a book with the word "Learning" in its title.

The thing that bothers me about this book is that it can't seem to decide where it's aiming. It's definitely a programming book, not a computer science book; but it's in between "dense comprehensive tutorial/reference for people who already know what they're doing" and "learn programming from scratch using Python". For example, it explains the ideas of iteration and recursion, gives very basic examples, and discusses the general sort of situation in which you might want to use them -- and then a minute later it hits you with a lot of jargon and dense stuff. Curious approach.
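
To give a sense of the level: the "very basic examples" are roughly like the following (my own sketch, not the book's) -- the same list-sum written first with iteration, then with recursion.

```python
# Iterative sum: a loop walks the list and accumulates the result.
def sum_iterative(numbers):
    total = 0
    for n in numbers:
        total += n
    return total

# Recursive sum: the function calls itself on a smaller piece of the problem.
def sum_recursive(numbers):
    if not numbers:          # base case: the empty list sums to 0
        return 0
    return numbers[0] + sum_recursive(numbers[1:])

print(sum_iterative([1, 2, 3, 4]))  # 10
print(sum_recursive([1, 2, 3, 4]))  # 10
```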

Perhaps some of the confusion is because the authors are trying to take a "quick survey first, then dig into the details" approach on each section, and their definition of "section" is too short, leading to a confusing rapid alternation between the high view and the details.

I'm only up to the section on classes and OOP so far, because I've started to run into stuff I never learned or have forgotten; so I can't comment on the GUI section or anything like that, but I'm sure everything else is quite comprehensive.

>> Structure and Interpretation of Computer Programs, Abelson & Sussman

Ahh, the classics. Especially the classics that are available online, for free, in full, under Creative Commons, and not in a really irritating interface!

This is a hardcore nerdy book. It doesn't teach you computer science, it ties you down and throws computer science at you thick and fast, but in a totally awesome, non-evil way. It's really not a programming book, either -- the authors go out of their way to teach you the absolute bare minimum of Scheme syntax to handle the concepts they're presenting. This is kind of irritating if your goal is to be able to do practical things quickly (ha). But if your goal is to achieve nirvana, you could do a lot worse than this book.

Read the footnotes. They are snarky, and competently so, unlike the lame attempts at humor you see in most textbooks.

>> Teach Yourself Scheme in Fixnum Days, Dorai Sitaram

This is a straight-up no-frills tutorial for people who already know how to program, more or less. Reading this after having taken a SICP class was pretty interesting: I kept going "Hey! That piece of syntax makes it really easy to do X thing we were always beating our heads over in class, because SICP HATES SYNTAX OR SOMETHING." Hooray for actually doing things with Scheme. I'd gotten the (not entirely wrong) impression that Scheme was good for nothing but theoretical interest and formalism with a dash of AI.

I'm not sure whether to recommend reading this along with SICP. On the one hand, I tried it and got really confused. Part of the charm of SICP's syntax starvation is that it actually helps you figure out how to do things; they simply refrain from teaching you the four other ways to do it. On the other hand, Scheme can actually do stuff that's not all abstract and theoretical and weird, and this teaches you how to do that.

>> [pdf] How To Think Like A Computer Scientist: Learning with Python, Downey, Elkner, and Myers.

This, I think, would be a good book for a (dedicated) beginner. It starts right at the beginning with the very fundamentals, and keeps a great balance between new concepts and new syntax. I haven't gotten very far at all in this book, mostly because I've spent so much time on the O'Reilly Learning Python book, but I like it very much so far.

Plus, how can you say no to a book that's written in LaTeX? Seriously, now.

>> Project Euler

Oddly enough, none of the above books have enough exercises in them. Teach Yourself Scheme has none at all. Learning Python only has a few, once every five chapters or so, and they're all about catching tiny nitpicks that trip you up. SICP has plenty but they're geared toward teaching CS principles, not coding practice. How To Think Like A Computer Scientist has good ones but not many of them. I googled around but the only thing I found that wasn't too advanced was FizzBuzz, which wasn't enough.

Enter Project Euler. It's a site full of math puzzles, varying in difficulty from "ludicrously easy" to "frackin' impossible", and set up such that you basically have to solve them by programming. Things like "find the ten-thousandth prime number", too big to be solved by hand. Excellent!
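
To give the flavor, here's a naive stab at the ten-thousandth-prime sort of problem -- my own trial-division sketch, nothing like the elegant solutions people post in the discussion threads:

```python
def nth_prime(n):
    """Return the n-th prime number (1-indexed) by trial division."""
    primes = []
    candidate = 2
    while len(primes) < n:
        # candidate is prime if no smaller prime up to sqrt(candidate) divides it
        if all(candidate % p != 0 for p in primes if p * p <= candidate):
            primes.append(candidate)
        candidate += 1
    return primes[-1]

print(nth_prime(6))  # the sixth prime is 13
```

It gets slow for big n, which is exactly where the threads full of neat optimizations come in.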

The site is obviously run by people who know what they're doing -- all the problems are very unambiguously stated, with examples, and the site keeps your score for you. Once you've solved a problem, you can look at the discussion thread where other people post their code and discuss alternative methods. The other people, by the way, are brilliant, and come up with all these elegant methods and neat optimizations for you to imitate.

Saturday, August 9, 2008

Look Mom, Tags! And a Coding Trance!

[Before I start, let me admit that I am still very much a programming n00b. Everything that excites me is probably old hat to the majority of the coding population.]

I like tags very much. They are a million times better than folders. 'nuff said.

I've been writing a tagging system. It's nothing special -- just something that pairs up items and tags, and can do some of the same kinds of lookup that the tagging systems on blogs can do. Have been wanting to do this for quite some time, so now I feel all accomplished :D

I haven't finished testing the thing yet, so there are probably a lot more mistakes hiding in there, but most of the ones I've caught so far have been pretty similar. I tend to confuse the actual tag object with the tag's name, and refer to one when I should refer to the other -- a use-mention mix-up, kinda like confusing the phrase "scientists are people too" with the quotation marks that let you mention it. Same thing with confusing tagged items and the objects they contain. These are pretty easy to fix.

There are a couple more functions I want to write -- eg, I haven't got a thing that lets you organize the tags in order of frequency of usage (as they are in the sidebar). But I've got the majority of what I wanted. (And no, that doesn't include a GUI. This is strictly text-based, Python-command-line stuff.)
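
For the curious, the core idea fits in a few lines of Python. This is a from-scratch sketch for the blog, not my actual code, and all the names are invented -- but it does the two-way lookup plus the sidebar-style frequency ordering:

```python
from collections import defaultdict

class Tagger:
    """A minimal many-to-many pairing of items and tag names."""

    def __init__(self):
        self._items_by_tag = defaultdict(set)
        self._tags_by_item = defaultdict(set)

    def tag(self, item, tag_name):
        """Attach a tag to an item."""
        self._items_by_tag[tag_name].add(item)
        self._tags_by_item[item].add(tag_name)

    def items_with(self, tag_name):
        """All items carrying the given tag."""
        return set(self._items_by_tag[tag_name])

    def tags_of(self, item):
        """All tags attached to the given item."""
        return set(self._tags_by_item[item])

    def tags_by_frequency(self):
        """Tag names sorted most-used first, like a blog sidebar."""
        return sorted(self._items_by_tag,
                      key=lambda t: len(self._items_by_tag[t]),
                      reverse=True)

t = Tagger()
t.tag("post1", "python")
t.tag("post2", "python")
t.tag("post2", "scheme")
print(t.tags_by_frequency())  # ['python', 'scheme']
```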

I guess I'm kind of excited because, back when I was thinking about teaching myself to program again, a tagging system seemed like kind of a big challenge -- one I could do relatively quickly, but a challenge nonetheless. And now I'm about 90% of the way there. It's astonishing how fast you can learn to program when you've already learned to program twice before! ^^

This is also the longest block of sitting-and-writing-code time that I've spent in quite a while. I seemed to fall into a sort of trance -- very focused, very losing-track-of-time. This is an interesting mindstate (not to mention a very productive one!), and I'd like to get into it more often. (It helped that some people over on the XKCD forum turned me on to the streaming lyricless music from Blue Mars, which bills itself as "music for space travelers". Despite the word "mars" in their title, I think they really mean "music for headspace travelers". I find it quite relaxing and focusing.)

Wednesday, July 30, 2008


I just watched the History Channel's program on the evolution of eyes. Overall it was pretty meh, but there were a couple of interesting parts.

ETA: I didn't liveblog it or take notes, so there are things I forgot. Memory is fallible. I'm human.

My favorite was in the first segment -- the experiments on the jellyfish in the tank. The researchers got a jellyfish with primitive eyespots and shone different colors of light into its tank to see how it reacted. Green light made it "relax", stop swimming, and sink to the bottom of the tank. Purple light made it start swimming really fast, and for some reason it shortened up its tentacles by a factor of 2 or 3. How do they do that? And why do they do that? Is it for speed (shorter tentacles = less drag)?

I was interrupted by a phone call and missed most of the segment on trilobite eyes. My brother, who was watching, informs me that trilobite eyes are made of calcite. Huh.

The segment on the tapetum lucidum (the shiny layer in the back of some nocturnal predators' eyes that makes them look all glowy and creepy) was quite good, relative to the other sections. Not so much with the ferocious dinosaur predation or the "T3H STRUGGLE FOR SURVIVAL ZOMGZ"; instead, lots of nice creepy glowy-eyed panther shots.

Dragonflies apparently have tens of thousands of lenses in their compound eyes, and have a visual "processing speed" (define please??) ~5x that of humans. Badass.

There was a weirdly long segment of dinosaur obsession. Whenever I see this kind of sensationalism it makes me sad, but I suppose it's only to be expected anymore. The epic, ceaseless struggle for survival! Eat or be eaten! And I imagine, sometime when this episode was being planned, some editor was all "we gotta have dinosaurs in all our nature programs!". Far too much time spent on dinosaurs in a program about eyes. They should have cut this by 90% and spliced in some material on cephalopod eyes, or the design flaws in the human eye, or more than perfunctory detail on intermediate stages between "patch of light-sensitive cells" and "fully evolved eyeball". All of which were either mentioned briefly or omitted altogether.

The other thing that really bothered me about this program was some of the things they felt necessary to explain: Humans are mammals. Vertebrates include reptiles, mammals, and birds. How can kids past kindergarten not know this stuff?

Also -- ok, I would really have liked to see a mention of how wildly different species use a lot of the same genes to control development, and how this interacts with (convergent) (eye) evolution, but this is a ridiculous thing to hope for, given the level of the program. Why are they showing kids-level programming at 10 pm?

Tuesday, July 22, 2008


Here's how I make good passwords:

Pick a short keyword that's easy to type -- say, 'left'. Replace some letters with numbers (l3ft) and, for good measure, capitalize some stuff (l3Ft). Now, every time you need to make a password for something, take the name of the thing and stick your keyword on to it somewhere: l3Ftgmail.

This part is fairly well-known (although, still, not enough people use it!!!). What's fun is to make like a linguist and treat your keyword as a real affix. You can affix it anywhere to the name of whatever you need a password for, not just at the beginning or end (prefix or suffix). You can also infix it (gl3Ftmail or gmail3Ftl), or circumfix it (l3gmailFt).

This method doesn't completely specify your password. Sometimes you need to use an acronym for the service instead of the full name (su, or supon, for StumbleUpon), to keep the password from being too long. And sometimes I can't remember whether my YouTube password uses yt, ytube, or youtube. But it doesn't matter, because there are only a few possibilities I need to guess. What's more, this makes it easy to change passwords and still remember them fairly easily -- just move your keyword from prefix to suffix to circumfix to infix...
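
In fact, the whole candidate set is easy to enumerate mechanically. Here's a little Python sketch (the function name is mine) that generates every prefix, infix, suffix, and circumfix splice of a keyword into a service name:

```python
def affix_candidates(keyword, name):
    """Every way to splice `keyword` into `name`:
    prefix (i=0), suffix (i=len(name)), every infix position in between,
    and every circumfix split of the keyword around the name."""
    variants = set()
    for i in range(len(name) + 1):      # prefix, infixes, suffix
        variants.add(name[:i] + keyword + name[i:])
    for i in range(1, len(keyword)):    # circumfixes
        variants.add(keyword[:i] + name + keyword[i:])
    return variants

candidates = affix_candidates("l3Ft", "gmail")
print("l3Ftgmail" in candidates, "l3gmailFt" in candidates)  # True True
```

For "l3Ft" and "gmail" that's only nine candidates total -- few enough to guess through even if you forget exactly which splice you picked.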

Sunday, July 20, 2008

Forest-path connectivity

There's a decent-sized blob of public forested space right behind my house. It's crisscrossed with unpaved bike trails -- and a good thing too, because it's difficult to walk through the forest off the trail without getting caught in dense coyote-scrub bushes or masses of poison oak. There's a little pond. From several points you can get a lovely panoramic view of the San Francisco Bay and/or the rest of the forest. All in all, it's quite the concentration of nature for someplace in the middle of a suburb.

One of my mini-projects for the summer is to learn my way around the forest trails. There are maps posted at the trailheads, but I'm challenging myself to use them as little as possible and to figure out the connectivity by myself, for two reasons. One, I'm too lazy to bother making myself a copy. Two, there are a number of "unofficial" trails that are perfectly walkable but aren't on the map. (I suppose the unofficial trails get made when people start riding their bikes down deer tracks or dry creek beds.) So I get to combine physical exercise, nature, and mental exercise. Three for the price of one!

Another point of interest: the forest is in a valley, so it's hard to find a piece of land that isn't steeply sloped (especially once you get away from the surrounding houses). Also, for some reason, the soil is such that trees fall over a lot, but often survive. So the whole place is full of trees with interesting geometries. There are horizontal trees. There are trees that form arches over creek beds -- they don't just sort of bend toward the creek, they actually start growing downward once they get to the other side, so you could almost climb up onto them from either end. Today, I saw a tree that, I kid you not, made four ninety-degree turns, each a foot or two apart.

Timprov, a friend of mine who's a pretty good photographer, is coming over tomorrow and we're going to poke around the woods and take photos of stuff. Should be fun.

Thursday, July 17, 2008

It was the Summer of the Library Books. San Mateo County glowed red...

[The title is an allusion to the pseudo-medieval text on the cover of one of the Redwall books: "It was the Summer of the Late Rose. Mossflower Country shimmered..."]

"A red sun rises. Ash has been spilled this night." -- all right, that was worse. I couldn't resist making another allusion, though. All the wildfires in Northern California have been sending so much smoke into the air that, many days, the sky has been silvered over and the sun has turned red long before setting. (I don't remember exactly how long before setting, but the sun's been red while still five or more diameters above the horizon.) It's actually a beautiful sight, for all it's a symptom of fire and destruction elsewhere, and for all the smoke is causing trouble for people with asthma (especially in cities closer to the actual fires).

Other than that, though, mostly the weather has been fantastic. There have only been two days so far that I'd call "uncomfortably hot". Take that, friends back in Boston. It doesn't even matter that my house doesn't have air conditioning! I don't need it!

For the first time in four years, I'm not doing any kind of science research internship thing over the summer. This is mostly because I was too lazy to get myself a lab job, although it's lovely to have all this spare time. And since I have almost nothing but spare time, I've been lavishing attention on my book heap. I've taken an inordinate number of books out of the library, bought a few from Barnes & Noble, and pulled several out of my family's rather large collection. I even have time to reread books! What a luxury.

(Because I've started to get a little tired of just reading, I'm also teaching myself Scheme in fixnum days.)

Here are some lists of books, with comments. I may write up more details on some of the sciency ones.
(Lists are not in any kind of order, and are not exhaustive (I'll probably remember more and add them later).)

  • The Hobbit, JRR Tolkien. This is a lot more fun than I remember it being when I read it as a kid. Tolkien's prose style is maybe a little difficult for kids -- but this time around I kept wanting to grab a bunch of random kids and read aloud the fun parts, really perform them. On a totally unrelated note, it's also fun to reread The Hobbit after knowing Lord of the Rings; Tolkien is constantly hinting at things. For example, it makes so much more sense that the Necromancer could cause Mirkwood to be nasty and gross if you know that the Necromancer is actually Sauron. And, there are the Dwarves charging into battle yelling "Moria!".

  • This is Your Brain on Music, Daniel J. Levitin. This one had me itching to make funny waveforms to screw around with my auditory perception, and to listen to the pieces of music he mentions with a totally different ear. Unfortunately, I was stuck on an airplane. Quite a ride. I took note of several small things Levitin said about his approach to his area of research, and am planning to write them up.

  • When You Catch an Adjective, Kill It: The Parts of Speech For Better And/Or Worse. Ben Yagoda. This one's good for dipping into because each of the chapters (Adjective, Adverb, Article, Conjunction, Interjection, Noun, Preposition, Pronoun, Verb) is relatively short and stands alone. Yagoda writes as a serious linguist, but not a dry one: he clearly enjoys language for its own sake, as any geek ought to. This is not a writing-advice or prescriptivist book (although it does contain some tidbits of writing advice).

  • How To Dunk a Donut: The Science of Everyday Life, Len Fisher. This is an interesting popular-science book. Rather than describing one small area of science for the public, Fisher describes applying a "scientific" approach to everyday questions like "Why do cookies crumble when you dunk them in milk?". I'm somewhat wary, because some of his projects seem to have been solicited by companies who want publicity for their cookies or whatever. Similarly, some of the "scientificness" of his approach seems to consist of wrapping things in numbers and graphs. Bringing science to everyday life is a very laudable goal, but in some places the writing is pretty dense for a book with that goal.

  • Proust Was A Neuroscientist, Jonah Lehrer. This was really well-written and engaging. Lehrer's idea is to show how various artists/musicians/writers (mostly of a certain Paris avant-garde club, it seems) anticipated recent developments in neuro / cognitive science. For example, Walt Whitman's "the body includes and is the soul" lines, and embodied cognition. I thoroughly enjoyed the arts-description parts and the science-description parts. And sometimes the connection between the two was strong and clear, as in the chapter on Proust; not so much in the chapter on, say, Stravinsky (that one felt forced). Still, eminently worth a read.

  • The Time Machine, H.G. Wells. Oh boy, did I ever get the wrong impression from the movie version of this. The book is a lot more realistic and a lot more... quietly horrifying? I don't think it's at all plausible that class separation will lead to humans splitting into two species, but it's thrillingly creepy to imagine humans evolving into either the Eloi or the Morlocks. I guess this one is halfway between dystopia and SF. (And I'm still a fan of the virtual-encyclopedia guy from the movie.)

  • Across the Wall, Garth Nix. I'm a big fan of the Abhorsen trilogy, and the first half of Across the Wall is a shorter story that's part of the canon. The rest of the book is a bunch of short stories outside the canon that I didn't find all that interesting. Nix writes young-adult stuff; Abhorsen aged well, IMO.

  • Autobiography of Benjamin Franklin. Wow! Franklin gets short shrift among the Founding Fathers because he didn't do "hero" stuff like command armies or ride at midnight through Massachusetts, because he doesn't look all that dignified, and because the things he did do were quietly behind-the-scenes and/or sounded silly (kite, anyone?). But his autobiography is full of all kinds of interesting things, like the brilliant way he basically tricked the people and the legislature into funding a new hospital. And it's packed to the gills with advice that's a little antiquated, but still good. This is a real self-help book, not some lame thing full of whining and words like "self-actualization". I would have liked to hear more about his experiments with electricity.

In Progress
  • Le Ton beau de Marot: In Praise of the Music of Language, Douglas Hofstadter. What can I say? This is a book about translation -- but it's also a book by Hofstadter, which means it's a book about the idea of translation stretched and twisted and abstracted and applied to all different kinds of things, and at the same time it's also a book about poetry and music and elegance and math and computer science and AI and love. The book describes itself as something of a memorial to Hofstadter's wife, Carol, and it's touching to read it as the world's longest, most elaborate love letter.

  • Gödel, Escher, Bach: An Eternal Golden Braid, Douglas Hofstadter. This one is also about music and elegance and poetry, but with a much larger dose of logic and math and AI and symmetry and abstraction. It's more work to get through, but it's so worth it.

  • On Food and Cooking: The Science and Lore of the Kitchen, Harold McGee. A really well-written guide to food, "scientific" in that it describes foods using scientific terms, not "scientific" in the sense of nutrition-reductionism. Sort of a Hacker's Guide to Culinary Experimentation.

On Deck
  • I Am A Strange Loop, Douglas Hofstadter

  • Diamond Age, or, A Young Lady's Illustrated Primer, Neal Stephenson

  • Vaccine, Arthur Allen

  • Jonathan Strange & Mr. Norrell, Susanna Clarke

  • On Intelligence, Jeff Hawkins

  • Dante's Inferno (anybody got recommendations about which translator to read?)

Wednesday, July 16, 2008

Perverse, Ugly, Terrible Beauty

Neurophilosophy writes about amyloid plaques and Alzheimer's, showcasing this really interesting 3D rendering of the plaque's constituent protein fibrils -- larger picture with the original article at Discover. The article and the post are really informative and you should go read them because I'm not going to address their content. (Gasp!)

My first reaction on seeing the image was, "How strangely beautiful". Even though this is a picture of a prime suspect in an absolutely horrific disease. Even though it's got a rather menacing fire-and-brimstone color scheme. Even though I shudder at the idea of these nasty little fibrils snaking their way through my brain, withering neurons like the Goo of Death from Princess Mononoke.

It's well established that good, hardworking, well-oiled biology is a joy to behold (if you have the right mindset). Listen to PZ Myers rhapsodize about the time he got a close-up look inside his hand. Read Dr. Sidney Schwab's eulogies to the body unmarred, and to the regal liver and warm, welcoming intestine. You've all seen The Inner Life of the Cell; watch it again and marvel.

It's also pretty well established that diseased, shattered, out-of-control biology is ugly, ugly, ugly. Hear Dr. Schwab, again, on how injuries and cancer ravage and ruin the anatomy that was so lovely. And who hasn't shuddered (inwardly) at the sight of scabs and puckered scars?

But somehow, I find there's a genuine (albeit perverse, ugly, terrible) beauty to diseases and such evil things. In the same way that it's interesting to watch flames blacken and consume a sheet of paper, it's interesting to imagine a cancer burning its way through a tissue. There's an elegance to the way viruses hijack and pervert cells to their own nefarious ends. And so on. I'm not saying that I think diseases are a good thing, or anything that causes pain/death is "nice" or "pretty"; far from it. But can't you see the grace, the sweeping lines, the eye-drawing colors, of those evil amyloid fibrils?

(Having thus far skirted the edge of hell, with these paragraphs I commit my soul to the inferno.)

I'm quite the Douglas Hofstadter fan, and I'm right in the middle of rereading his book Le Ton beau de Marot: In Praise of the Music of Language. In the introduction, Hofstadter dedicates the book to his wife, Carol, who was "hit from out of left field by a strange and eerie malady with the disgusting name of glioblastoma multiforme... vanishing from our midst almost as suddenly as if she had in fact been hit by a bus, with so much of life still left in her... all cut short by some cell gone wrong." Le Ton beau de Marot is in large part a commitment of their shared soul to paper. The book is stimulating, beautiful, and moving; I cried when I read of her death and his grief. I don't mean to minimize any of that. But I have to disagree with Hofstadter on one point. I don't think the name glioblastoma multiforme is "disgusting". "Strange and eerie", yes, and awful and dreadful (in the sense of inspiring awe and dread). But not disgusting. There's even some euphony, some beauty in the sound of the term.

Of course, I say this as someone who has never lost a close and treasured friend or family member to cancer (and I'm very grateful for that!); I certainly don't blame Hofstadter for describing as "disgusting" a name associated with so much pain and grief. His reaction is completely natural; in fact there would probably be something wrong with him if he didn't react that way. You could as well say that I am incapable of tasting all the bitterness as that Hofstadter is incapable of seeing any of the beauty. Nothing wrong with that.

Tuesday, July 15, 2008

The art of linking

For some reason I find it really irritating when I'm reading a blog post on some interesting topic, and every third word is a link. Especially when some of them are duplicates* or semi-duplicates (e.g. linking to Foo and also to the blog post that brought Foo to your attention, or to several other people's commentaries on Foo and on each other's commentaries!).

I am generally in favor of Wikipedia-style links as a means of optionally explaining terms that may need explaining. They're nice and inline, and they don't interrupt the flow unless the reader needs them to. (This is less generally true of blog links. On Wikipedia, you know any blue link will take you to a factual description of the linked term. A blog link could point to anything, and you can't always tell from the URL.) Surgeonsblog is very good at this, with links to informative pages that explain various jargon terms.

An alternative strategy is to write easier and harder versions of your post, and let the audience choose -- but I've only seen this well implemented once, in Tailsteak's retelling of a poignant D&D story. Since the story is about D&D, the options to offer the audience are fairly clear: "Never heard of it", "Some familiarity", and "Know it like the back of my hand". But people's understanding of some complicated science topic may not divide itself so neatly into levels**. And it only gets worse if you're discussing two or more topics interweavedly. What are you supposed to do then, write a separate blog post for each permutation?
(You'll notice, by the way, that the main difference between the three versions of Tailsteak's story is that the "Never heard of D&D" variant begins with a paragraph describing RPGs in general; the bodies of the three versions are basically identical. A simple link to Wikipedia might have sufficed, but I rather like the three-versions trick.)

* Some kinds of link-duplicating are good. Mind Hacks is in the habit of duplicating all the links in a post into a little pile at the bottom of the post. This can also be a subtle way to suggest reading the links in a certain order, possibly a different order from how they were presented in the post proper.

** I have another post brewing about being stuck between the level of popular-science writing and the level of professional scientists.


I strive to clarify what all my links are, in the text of the post.

Good: I recently read Lewis Thomas' essay Autonomy, which I thought was a really interesting take on the way our bodies operate independently of conscious control.
Bad: Here is a link to a really interesting essay I read, about the way our bodies operate independently of conscious control.

Notice also the use of Google-friendly link text -- helping Google index things properly. Googling things like "here", "this", "this page", etc. is an interesting exercise.


Here's a more coherent listing of the irritating links from the beginning of this post. They're a selection of random interesting things I've run across in the past week or so.

Friday, May 23, 2008

Sign of the times

I finished my last final exam today (woo!!!), so now instead of staying up late studying, I get to stay up late sorting through all my junk and packing some of it in boxes with copious amounts of duct tape -- aka 'packing'. I'm moving to a new dorm on the other side of campus next year, and I'm heading back home to California on Saturday, so I have to hurry up and get all my stuff boxed and hauled across campus before then.

Just now, I was making safety sheaths for a set of cooking knives I bought and never used (because they're mediocre quality, and I just used my pocket knife for everything anyway). Take several sheets of paper, wrap around, duct tape, fold the end over, duct tape again, et voilà. It works surprisingly well. For day-to-day storage, these sheaths work fine as is; for longer-term storage, duct tape the knife handles to the sheaths so they don't slip out.

Clearly, though, it is a sign of the times, of the era of online publishing, that I used printouts of PNAS papers to do this. It's the rare, rare paper that I'll print out to read, what with PDFs and having an institutional subscription to everything. These days, the only papers I print are ones that I need to read and reread in quite a bit of detail -- say, if I've got to write an essay that addresses the content of that paper specifically. (Intro Psych, I'm looking at you!)

Yes, kids, science saves lives! It keeps you safe! Were it not for science, how would I keep myself from being slashed up by kitchen knives??

Thursday, May 15, 2008

Gallery of nudibranchs!

Check this out: National Geographic nudibranch photo gallery. It's absolutely amazing.

I've seen a couple of nudibranchs underwater, scuba diving in Hawaii. It's neat to see them live and in context, but it being underwater, it's kind of dim and the colors are washed out. Seeing them in optimal photography conditions like this is really cool.

[h/t Pharyngula]

Friday, May 9, 2008

Grad students declared "security threats" by govt

You have got to be kidding me.

This article, Government Declares Some Grad Students Are ‘Security Threats’, appeared in today's issue of The Tech (MIT's student newspaper). A number of international students working with the Woods Hole Oceanographic Institution are being denied easy access to the ports they sail from because the government considers them, for no reason at all, "security threats".

To get in and out of the ports, you need this RFID card, the "Transportation Worker Identification Credential". Without the TWIC, it's very difficult (though not impossible) to get in and out. Difficult-but-not-impossible is a totally unreasonable restriction to impose on these researchers. It's hard enough not having key-card access to the building that contains the lab you're interning in -- *raises hand* -- and even though it's reasonable to expect a bit more difficulty when you're doing fieldwork, it's not Antarctica these students are requesting easy access to, it's a port. And, I might add, these students are only asking for the same access that their labmates and PIs already enjoy.

As the Dept. of Homeland Security wrote to one student (others received similar letters), “I have personally reviewed the Initial Determination of Threat Assessment, your reply, accompanying information, and all other information and materials available to the TSA. Based upon this review, I have determined that you pose a security threat and you do not meet the eligibility requirements to hold a Transportation Worker Identification Credential (TWIC).” This is what we say to students who come here to carry out government-funded research? We give them grant money and then call them "security threats"?

Two of the students being denied access are from Britain and Germany. Britain and Germany. I thought we were supposed to be all buddy-buddy with these countries? If this is what students from friggin' Britain and Germany have to deal with, how much worse must it be for students from, say, Syria?

My friend Raffi, who's from Canada, mentioned how the Office of International Students is always warning them about how "if you do this you'll get deported. If you do that you'll get deported." Apparently the definition of "security threat" bears this out: you're a security threat if you threaten national or transportation security, if you pose a threat of terrorism, if you have "lacking mental capacity"... or if you simply have the wrong kind of visa.

I'm ashamed to live in a country that funds scientists and then treats them this way.

[Crossposted to LiveJournal]

Monday, April 28, 2008

[Buddhism|Hinduism|Catholicism|*ism] is the new black

Cosmo: "Choosing a religion is like choosing a MySpace wallpaper"


In what sense can religion meaningfully be chosen? Ideally, everyone discovers what they believe when they look at the world around them and come to an incontrovertible conclusion. Of course this is not going to happen. Conclusions change, and it's impossible to be 100% sure of anything, given the fallibility of human minds. And for a lot of people it's going to be too much work to really figure out what they think, rather than just hopping onto the nearest appealing ready-made philosophy.

[...Ok, I'll bite. I am a hard agnostic: I believe that the nature of deity is both unknown and unknowable. (I used to be a soft agnostic -- I wasn't sure whether the nature of deity was knowable or unknowable.) I believe in questioning but remembering the limits of our understanding, and retaining a sense of wonder.]

I'm honestly not sure what to think of this article (other than the obvious "oh, look, more dreck from Cosmo").

I loathe the aspect that promotes religiosity for appearances' sake. If you're going to take "a shot of Catholicism, a sprinkle of Buddhism, a pinch of Hindu teachings — or whatever else you're in the mood for that day", why even bother? All you're doing is quote-mining to justify what you already (want to) believe. Practicing confirmation bias. Enough of that: just believe what you believe, and stop gilding it.

I'm all for the aspect that promotes having your own worldview, instead of blindly accepting whatever some robed old guys with books say, or whatever your parents said. Anything that promotes seeking and questioning is a good idea.

In the same vein, I'm all for anything that diminishes the absolute blind fanaticism with which a lot of people follow their religions. Even if you simply accept a pre-made worldview, there's nothing that says you need to accept it to the extent that you need to go kill people or spread malicious lies and hatred because of it. But I hate to like the Cosmo article simply for this reason. There is something wrong with the world if Cosmo's viewpoint is the lesser of two evils.

Friday, April 25, 2008

Trivia-heap syndrome

Part of the reason I think a lot of hard scientists look down on biology is that introductory biology is so often poorly taught in a particular way. Intro physics is almost entirely problem-solving, and intro chemistry is similar, with maybe a bit more memorization (Quick! How many valence electrons does aluminum have?). But biology frequently ends up being taught as a large heap of random terms and facts, almost entirely without any unifying themes or methods of thought -- just a disconnected jumble.

I've dubbed this "trivia-heap syndrome".

Any biologist (or, hell, anyone who's gotten past the required intro course) can tell you that biology is not about memorizing terms and facts, any more than physics is about blocks sliding down inclined planes. But as Chad Orzel says in that post: "To some degree, this is inescapable-- those repeated exercises are used to establish a pattern of thought that is a necessary prerequisite for moving on to more interesting material." A similar thing could be said of biology: to some degree, it's necessary to memorize a lot of terms and random facts and unconnected processes and so on, before you can get to the interesting work of studying how they interact and how they can be manipulated.

But this is actually true of every field, not just biology (or other fields prone to trivia-heap syndrome); it's just not as apparent. Consider intro mechanics again: all about things falling, hitting each other, rotating, etc etc. In order to do interesting things, you first have to know what balls, rods, strings, pulleys, blocks, inclined planes, and gravity are. You also have to understand the basic types of things they can do: move, rotate, accelerate, come into contact, break, exert forces on each other. Of course these are trivial things to know, because we've all been exposed to simple objects and their motions since we were born -- and this is why intro mechanics courses don't begin with a couple weeks of definitions and memorization. The only difference between that and biology is that we're not exposed from birth to genes and proteins and cells and their interactions. The world teaches us the trivia-heap for physics, but we have to be taught about biology's trivia-heap.

All right, so the trivia-heap is an unavoidable evil whenever you're starting in a new field. Fine. But never fear, there are still ways to get around trivia-heap syndrome. As soon as you know a very few things, you can start thinking about them in terms of experiments to be done and puzzles to be solved, instead of facts and descriptions. What would happen if this particular thing were mutated in this way? What effect would that have on the cell? How could you (the experimenter) tell that this was in fact the case? What if you got the opposite observation -- what might have gone wrong? Here's a simple system you're interested in; outline an experiment to find out whether this particular part of it works this way or that way. This is both interesting and a lot more like what actual biologists do; certainly much more so than the typical dreck of "Define a gene", "Outline how a gene gets translated to protein" that most high-schoolers get shoved down their throats. That's lazy teaching for you (or the creeping horror of bad standardized curricula for things like AP tests, which reduces to the same thing).

I learned a decent bit of biology just by reading random things. I can practically recite The Cartoon Guide to Genetics, by Gonick & Wheelis (that link is to the updated edition, not the old-school edition I read). I also took an introductory class at one of those academic summer camps. The net result was that I came into my high school biology class already knowing about half the stuff we would cover, which just made the trivia-heap syndrome that much more painful. A lot of my classmates struggled with memorizing things. Very few people did well on the "lab practical", in which we had to plan and carry out experiments to identify a mystery substance -- precisely because we spent so much time on trivia-heaping and so little time on problem solving / sensible experiment design. MIT's intro biology course (7.013), by contrast, is like a breath of fresh air. It's not entirely free of trivia-heap syndrome, but there's a gigantic emphasis on solving puzzles, considering what-ifs, proposing experiments, and interpreting results. (`Gigantic' emphasis relative to other intro biologies, of course.)

Granted, even MIT's intro biology problem-solving is simplistic, and occasionally feels like doing an inclined-plane problem in physics (especially when I know a decent bit about a particular system, and I can tell the problem vastly oversimplifies the situation, even if I don't know exactly how). But now we're back to Chad Orzel, inclined planes, and "establishing patterns of thought". There is a certain intuition of biological ways and means, of how cells/genes/proteins work in broad strokes, that is immensely valuable but hard to obtain. This intuition is something like a toolkit of abstractions over specific examples, but I'd venture to say it's not something that can be taught explicitly in the abstract, the way math can. The analogous thing in physics, again, is something the world teaches us from birth: unsupported objects fall, if you push something its speed changes; that sort of general idea about how things operate. I've developed my biological intuition somewhat, through reading and studying (and sometimes working directly with) boatloads of examples, and I think that's just about the only way it can be developed. But it's something every experimenter (or bioengineer!) needs in spades, and it's something the general public could also really use.

So, to sum up: the trivia-heap is sometimes a necessary evil, but trivia-heap syndrome is eminently avoidable. Emphasize puzzle-solving and experiments, instead of facts and definitions, and it'll do everyone a heap of good.

Sunday, March 16, 2008

Ah, greenery

If you had ten (soon to be eleven) plants to name after scientists, who would you name them after?

[I've counted the twin bamboo shoots as one plant because I've got them tied together like the ones you see in Chinese kitsch stores -- and I'm thinking of naming them Ramón y Cajal.]

Wednesday, February 6, 2008

But is conlangery useful?? [Linguists and Conlangers, part 2]

Yes, it's been quite a while since the first installment of this series. I wrote a whole big long draft of this post and then realized it was mostly wrong, and it took a while to get back on track.

First off, I am not saying that conlangers are oppressed or shunned or anything by academic linguists. Far from it. It's often said that conlangers have a "persecution complex" in this arena -- i.e. we're whiny little geeks. If anything, conlangers get more friendship from linguists than from the general public. That said, most linguists don't exactly brim over with warmth and welcome for practitioners of the secret vice. Conlanging is typically regarded as a kind of silly little hobby or diversion: nobody cares that much as long as you don't let it interfere with your work.

I will admit that there is one anti-conlanging argument that stings me and makes me feel guilty: "Why aren't you out in the field, saving dying languages? You're squandering your talent, skills, interest, and time!"

This argument seems a lot more valid than its generalized counterpart, "Why aren't you working against Great World Evil #42373?" Linguistics interest and expertise are relatively thin on the ground, so it's harder for the cause of saving dying languages to get enough resources/people behind it. Conlangers are often so interested in linguistics that they pursue their interest even when it harms other aspects of their life. Not for nothing do people call it addictive.

But seriously? Firstly, most conlangers just don't have as much spare time as people seem to imagine. Secondly, and more important, most conlangers don't have the expertise to do linguistic fieldwork. A goodly number pursue linguistics in college, but you need a lot more than a bachelor's to do fieldwork. I only know of two conlangers who are linguistics grad students (undoubtedly there are more, but not many more). Most of us have day jobs. We're hobbyists. Would you want model-airplane hobbyists fixing fighter jets?

The other main arguments against conlanging are that it's frivolous, pointless, a waste of time, etc. And for this I can't do better than to point you to The Conlanger's Manifesto, by David Peterson, which eloquently defends conlangery as an art form:
...Looking only at the utilitarian end of it, if the creator isn't going to use his/her language for communication, and since language can be viewed only as a means of communication, language creation is pretty useless.
But is this all language is: A method of communication? If so, what is poetry? what is literature? What possible use could James Joyce's Ulysses have? I suppose if you were on a desert island and needed to smash crabs, it would do the trick—it's pretty thick, after all. But beyond that? According to them, it would have no use. And why stop there? What good do paintings do anyone?...Pretty soon what you're left with is a world without art.
At this point, the argument should come to an end. The rigor and usefulness of art is an argument that has been argued many times by many people much more articulate than I, and by now (I certainly hope), the whole world should have figured out that art really does pull its weight on Earth.

Amen, brother. Conlangery is the art of linguistics, and it should need no more defense than that.

Occasionally, you'll run across linguists actually using conlangs. If you're teaching a linguistics course, you may sometimes want to illustrate a particular concept by using a conlang. That way, you can design the conlang to show off the feature to its best advantage, and avoid all the irrelevant noise/irregularities that you'd get if you used a natlang. It's the same principle as medical illustration: realistic, but with all the fat trimmed away. Just try and find an illustration that shows the pancreas without trimming away a whole load of other viscera! It's damn near impossible. Conlang examples serve the same purpose in a linguistics class. The goal is transmission of key information, not pinpoint-accurate naturalism.

Another, less-used, possibility is to have students create a miniature conlang that exemplifies a particular feature. I've seen a worksheet that taught the basics of ergativity very effectively this way. Unfortunately, it wasn't from an actual linguistics class. It was from a presentation at the 2nd Language Creation Conference. Er, I mean, a presentation that never took place (it was sacrificed in favor of another panel discussion due to time constraints): "Applications of Conlanging in Pedagogy". You can see the worksheet on page 13 of the PDF of the program. I can't genuinely speak from the perspective of a student, because I already knew what ergativity was at the time, but I really think it would have been very effective as a homework assignment, because it gets you actually working with an ergative system.

But the most interesting intersection of conlangs and linguistics, in my opinion, is in the domain of research. A lot of studies have come out recently in which people are exposed to a small conlang and their learning is then measured. This is a really exciting paradigm, because you can engineer your language to have certain traits in the area you're interested in, and to be `normal' or `easy' in all other areas. If you're testing acquisition of (say) verb-subject-object word order (English is subject-verb-object), you can make your language have perfectly consistent VSO order and be completely unremarkable in everything else. Good luck finding a natlang that fits that bill. It's the same cutting-out-the-noise principle we saw earlier.

Or, if you want to put in noise and study how people will deal with it, you can manipulate the type and amount of noise to a very high resolution. Likewise with (say) syllable-transition probabilities in a word-segmentation study (if ba is always followed by ka, but ka is followed by a whole lot of different things, you can deduce that fooBAKAbar is the words fooBAKA and bar, not fooBA and KAbar.)
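That transition-probability trick is concrete enough to sketch in code. Here's a minimal toy in Python -- entirely my own construction, not taken from any of the studies I'm alluding to; the syllable stream, the 0.5 boundary threshold, and the function names are all illustrative assumptions. It estimates P(next syllable | current syllable) from a stream, then posits a word boundary wherever that probability dips:

```python
from collections import Counter

def transition_probs(syllables):
    """Estimate P(next | current) from adjacent syllable pairs."""
    pair_counts = Counter(zip(syllables, syllables[1:]))
    first_counts = Counter(syllables[:-1])  # times each syllable starts a pair
    return {(a, b): n / first_counts[a] for (a, b), n in pair_counts.items()}

def segment(syllables, probs, threshold=0.5):
    """Insert a word boundary wherever the transition probability dips."""
    words, current = [], [syllables[0]]
    for a, b in zip(syllables, syllables[1:]):
        if probs.get((a, b), 0) < threshold:
            words.append("".join(current))
            current = []
        current.append(b)
    words.append("".join(current))
    return words

# A toy stream: "ba" is always followed by "ka", but "ka" is
# followed by several different syllables (bar, zip, qux).
stream = ["foo", "ba", "ka", "bar", "foo", "ba", "ka", "zip",
          "foo", "ba", "ka", "qux"]
probs = transition_probs(stream)
print(segment(["foo", "ba", "ka", "bar"], probs))  # → ['foobaka', 'bar']
```

Within-word transitions (foo→ba, ba→ka) come out at probability 1, while ka→bar comes out at 1/3, so the boundary falls after "ka" -- fooBAKA and bar, just as above. Real studies obviously use more sophisticated statistics, but this is the core of the idea.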

I'm currently involved in a research project using this paradigm, and I think it's an incredible tool. Look out for a lit review in the near future!

There are some conlangers who would like to see descriptive research done on fully-fleshed-out conlangs. And I hate to say this, especially because I'm good friends with this crowd, but I disagree. I don't think that investigating and describing conlangs can tell you much about the range and properties of human natural language. (Most academic linguists, I think, would agree with me.) To find out anything about human natural language, you ought to study human languages that have evolved naturally, instead of ones that were consciously invented. You can make a conlang with any conceivable twisted logical structure; the structure may be possible, but that doesn't mean it would ever evolve naturally, or that it's reflective of anything in natural language.

But since conlangery is the art form associated with linguistics, you could do the same sort of humanistic or aesthetic studies that you do on paintings (the art form associated with vision science). What do people find elegant or appealing? What conlang properties elicit what subjective reactions? What bizarre structures have what bizarre effects on people trying to use the language (the linguistic analogue of optical illusions)? And this could have implications in the study of natlangs.

Next up: the learn-a-conlang paradigm in language acquisition research.

Thursday, January 31, 2008

My Erdős number is 5

Back in high school, when we got around to it (about once a year) the Neighborhood would publish an issue of the Menlo Math Magazine. It was sort of a random collection of real-world-relatedness, whimsy, and interesting problems. Not an academic publication by any standard. But that didn't stop us all from facetiously claiming an Erdős number of 6, when we heard that Mr. T's was 5.

I just found out that mine is actually 5, and this time it's damn near official, not tenuously based on a high school pamphlet. Over the summer of 2005 I worked on the Stanford ALL project. Mostly as unpaid labor, admittedly, but I did sit in on the meetings and I did point us to a couple of valuable data sources. This meant I got to interact with some big names, however briefly.

According to Language Log, the lowest known Erdős number for a linguist is 2, and Geoff Pullum's is 3. A quick Google Scholaring shows that Arnold Zwicky and Thomas Wasow have both coauthored with Pullum, so their Erdős numbers are at most 4. And they were both involved in the ALL project, as was I, which makes my Erdős number at most 5.

...All right, none of us are actually authors of the paper in question. I appear in the acknowledgement footer on the first page, along with the other students involved in the project. But Zwicky and Wasow appear in the same footer, as does John Rickford (who doesn't have any papers coauthored with Pullum, at least on Google Scholar). (Well, OK, all those big names appear in the references, which I don't.) Their contributions to the project were certainly substantive enough to meet the criteria for what counts as `collaboration'. My contributions were far less substantive, but I still think they counted for something, and if I appear in the same acknowledgements (the same parenthesis in the acknowledgements, even), that's got to be worth something.

(Naturally, this all comes with the caveat that we've been counting non-strictly-mathematical publications, but that has plenty of precedent.)

Wednesday, January 23, 2008

It makes a fella proud to be a scientist

Pharyngula linked to this amazing video that "compares creationism and science". Really, it's no comparison. [3:54]

You could watch it without the music -- none of the strictly visual beauty would be lost -- but instead, I highly recommend turning your speakers up to eleven, putting the video on fullscreen, and inviting all your officemates to come watch. Better yet, invade a conference room or a lecture hall for five minutes. It'll be worth it.

I found it incredibly affecting. Some parts had me cheering and yelling "SCIENCE: IT WORKS, BITCHES!". And some had me tearing up. I felt like a little kid again, all "when I grow up I wanna be a scientist!" I'm damn proud to be a part of it (or at least to be on track to become part of it).

Monday, January 21, 2008

Math lagging (or leading) physics

Senior year of high school, I took AP Physics C and AP Calculus BC concurrently: allowable, but not advisable, the admins said. For a good part of the year, I struggled with physics because I hadn't `really' learned the math yet. Reading the textbook, I'd just skip over derivations that I didn't understand, and hope that the end result was something I could memorize, if not comprehend. Mr. T held a session or two of `physics math', which helped a little, but didn't make me feel less intimidated.

Last semester, I was in pretty much the same situation, only with line and surface integrals instead of integrals in general. Oh, we never had to evaluate any -- they were always of the simple kind that reduce to multiplication problems -- but it took several tries before I could recognize those situations and remember how to reduce them. It didn't solve the intimidation problem, either. On the final exam, even though I knew what to do and how the problems worked, I still felt like a trespasser writing down those fancy-schmancy closed-surface double integral signs, knowing I didn't really understand how they worked.

At one point Prof. Hudson said something (can't recall the exact words) that gave the impression this math-lagging-physics business was a dreadful intractable plague, oh me oh my, whatever shall we do. And at the time, struggling to understand everything, I was rather inclined to agree with him. But since then I've somewhat reversed position, or at least become neutral.

Back in high school, after I'd learned to fudge my way through integration, I had a much, much easier time with it when we `officially' learned it in math class. This was especially noticeable in the case of integrating (still in a single variable, though!) over 3D objects. We'd been doing this routinely in physics, to find the moments of inertia of various shapes. So while the rest of the calculus class was struggling to put together the relatively new concept of integration with the relatively new concept of bizarre-shaped objects in 3-space, I could relax.

Recently (within the past two weeks), the same thing happened for me in multivariable calculus. I'm in 18.01A/18.02A, which is a blend of two courses: a quick six-week review of single-variable, and then the regular multivariable curriculum, finishing over the January term. Because of the way things are timed, I never saw a line integral in 18.02A until just now, well after the end of my physics class. And the same thing happened: I'd learned to fudge my way through easy line integrals, and that gave me a leg up on people who'd never seen them before. I had a well-developed intuition about what line integrals meant in terms of the real world, which kept me from getting confused. That in turn allowed me to focus on the complicated aspects that we hadn't worked with in physics.

Given these experiences, I don't think it's a very bad thing at all that math education tends to lag physics education. It can often be a good thing: physics is leading math, to frame it positively. When you tackle something complicated and abstract, having `live-fire' experience with simple cases and workarounds can hardly do anything but help.

This comes with some caveats. It's a classic example of delayed gratification: I sure didn't feel happy about math lagging physics when I was actually struggling with the physics, and it was only a month later that I was able to grin to myself in calculus recitation, watching the people around me struggle with a problem I'd found obvious. (Schadenfreude, too, is not necessarily a good thing. In my defense, I do help my peers if I can, and if they're not already being helped.) And naturally, it's a tradeoff; who can say if knowing the math beforehand would have brought my physics grade up a little?

But most importantly, if physics is going to lead math, physics has to be taught right. I approve very strongly of Prof. Hudson's philosophy here: conceptually difficult problems with easy math = win. If you absolutely must be bashing your head against the wall at midnight the night before the problem set is due, it shouldn't be because you can't solve a differential equation. It should be because you can't figure out which way the induced current goes. Physics can be taught adequately without using too much complicated math (though of course the definition of `complicated' will vary with the physics). You learn just as much integrating (read: multiplying) over a sphere as you do integrating (read: flailing) over the surface of some crazy-shaped thing.
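To make "integrating (read: multiplying) over a sphere" concrete -- this is a standard textbook example, not necessarily the exact one from Prof. Hudson's class -- take a point charge Q at the center of a sphere of radius r. By symmetry the field magnitude is the same everywhere on the surface, so the closed-surface integral collapses to a multiplication:

```latex
\oint_S \vec{E} \cdot d\vec{A}
  = |\vec{E}| \oint_S dA          % field magnitude constant on the sphere
  = |\vec{E}| \, 4\pi r^2         % times the sphere's surface area
  = \frac{Q}{\epsilon_0}          % Gauss's law
\quad \Longrightarrow \quad
|\vec{E}| = \frac{Q}{4\pi \epsilon_0 r^2}.
```

Over a crazy-shaped surface, by contrast, |E| varies from point to point and the integral has to be done honestly -- which is exactly the flailing the easy cases let you skip.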

You wouldn't expect it, but knowing too much math can be a problem. Last semester's physics class was taught in a rather unusual format, about which more later. The relevant thing is that I spent the whole time in a group of three: me, S, and L. S had excellent intuition, and tended to approach problems by thinking them through qualitatively first, and then seeing if the math bore him out. This was a very effective strategy -- a lot of the time the math looked right, but his intuition cried foul, and then we found the sign error twenty steps back. L, on the other hand, was advanced in math, already pretty far beyond what the physics course used. She approached problems very mathematically, instead of with intuition like S. Even when something made complete physical sense, she wasn't satisfied until she had justified it mathematically -- even if it was a qualitative problem with no equations in it to begin with. And because of how every other thing is backwards in E&M, she tied herself into an awful lot of knots. (Not that she wasn't brilliant; I think she outscored both S and me on every test.)

L and S complemented each other, and they were a great group to work with. But I have to say that S's intuitive method was the more effective. To the same question, S might answer "because the charge is over here, so the potential is higher here, and then this happens"; L might answer "because the potential is [insert equation here] and the electric field is the negative gradient of the potential."

I admit that this is based on my own experience, and what's better for different students will vary. I said that it's easier to learn complicated abstract math if you've first grounded it in physics; you could just as well say that it's easier to learn physics if you're well practiced with the mathematical tools already. But I think the first argument is more valid than the second. It's easier to abstract from the concrete than it is to learn the abstract first and only then get concrete examples. You see the same thing in computer science classes: pick up a copy of the blue wizard book and on every other page you'll find a discussion of how to abstract some instance, not how to instantiate some abstraction. Even though physics was made genuinely more difficult by not knowing the math, I'd still rather have done it this way around.

Briefly on Sitemeter and privacy

For the sake of full disclosure: yes, I have a Sitemeter on this blog. It's the free version, which means I can't see anyone's full IP address (just the first three numbers). I can, however, see your location, your ISP, a crappy estimate of your latitude & longitude, what page you came from, and what page you exited to (sometimes; this last one seems to be borked).

I have it on the "medium" privacy settings, which means that all those details are not visible to members of the public, but that certain stats may be given out to indexing sites -- the textbook example is giving out the hit count to a service that ranks sites by number of visits.

I'm not all that interested in most of the information that Sitemeter collects. I mostly have it as a hit counter, not anything fancier. I do get a kick out of seeing what random cities people are visiting from, or what random Google search terms bring me up in the results. It's rather annoying that Sitemeter's privacy options are pretty much all-or-nothing: I can't allow members of the public to see, e.g., just the Google search terms or just the geographic locations, without giving them everything, including partial IP addresses. Grr.

But at some point I think I'll compile a list of the more interesting Google search terms that bring this blog up. Might be amusing.

Friday, January 18, 2008

Feeling a teacher's love

In high school, we had a teacher, Mr. T, who taught computer science and some of the advanced math courses. And he was wonderful. Everybody loved him. I've never encountered so much infectious enthusiasm, even at the ungodly hour of 8am on a Monday morning. He could light us on fire by drawing box-and-pointer diagrams. We had a party the day we learned the Fundamental Theorem of Calculus. Mr. T was a mentor, a friend, a father, a rock; his room was a haven for geeklings. Everybody loved him, and he loved us too.

I understood his love in a rather abstract sense back then, and up until this morning. But this afternoon, I understood it from the inside.

The research I'm doing at the moment involves teaching pairs of people a constructed language by immersion, and then having them take a test. (More details on this (very interesting) paradigm in an upcoming post!) Today was only the second time this experiment has ever been run, so I'm also happy just because it worked. We ran two pairs of people.

I had an awfully hard time teaching the first pair. They just seemed to fundamentally not get a lot of the grammar. It was very difficult to keep myself from grabbing one guy and telling him, loudly and in English, that "the way to turn a sentence into a question is NOT just to say the sentence with a rising intonation at the end!!!". And so on, and so on. Teaching them was frustrating, occasionally painful.

And then the second pair came in, a pair of undergrads, and they more than made up for the first pair. It only took them twenty minutes to get to more than adequate proficiency, where it had taken the first pair almost an hour to get to less than adequate proficiency. They seemed to pick everything up right away. It was like watching a rose bloom in time-lapse. My favorite moment was when I taught the question form, using only one verb in my examples. When I prompted one of the participants, she immediately came up with several correct question-form sentences, and she generalized to all the verbs we'd learned, not just the one I demonstrated with. If we hadn't been in an experiment, I might have proposed to her on the spot.

I graded the four tests just now. The first pair had decidedly mediocre scores, no surprise there. But the second pair did extremely well, and they both did just about perfectly on the part we're most interested in. I was overjoyed to see them using linguistic terminology correctly to explain their answers (even though their knowledge of terms had nothing to do with how well they acquired the language). When I was done grading, I spontaneously picked up the last test and kissed it.

Tonight, I think I experienced a little of what made Mr. T such a good teacher, what made him love his students and his work. It's not an easy feeling to articulate. Pride and joy in students, enthusiasm for the subject, and a little bit of self-satisfaction that, yeah, I taught them that. The feeling was very powerful, even though the teaching I did today wasn't very real. What if I were teaching material that was beautiful and meaningful, instead of an arbitrary constructed language? What if I could teach for an entire semester and witness long-term progress and synthesis, instead of bidding goodbye after an hour? What if I were teaching students who learned for love of the material, not because they were paid to be subjects in a study?

For the record, this is not the first time I've taught, nor the first time I've found it satisfying. For a couple of years I tutored 6th graders in math and Japanese, and it was always great when they'd get a flash of insight after a long hard slog. (They were remedial students, so it wasn't often.) And in the Karate club, it's traditional that you help teach the people who rank below you. I was co-leading brown belt my senior year, so that meant I taught just about everybody. I always enjoyed seeing some yellow-belts perform a technique really well, and thinking "Yay, they remembered X subtle point I taught them about!". But I'd never felt a teacher's love as strongly as I did today; certainly never strongly enough to merit the word `love'.

If I follow the track I'm planning to follow, I'll end up a professor. I understand that teaching is mostly tedious, frustrating, and difficult, not full of brilliant-student-love. But it's the possibility, the hope that springs eternal, and when it's fulfilled it makes up for everything else.

Tuesday, January 8, 2008

Problems with the LJ feed

I recently made some minor edits to "my experience with lab mice", and everything seemed normal. But for some reason the LiveJournal feed saw fit to repost the entire post, clogging up people's friends pages. Meep, sorry!

Funny thing is, this didn't happen anywhere else. The native feed is normal, the blog itself is normal. Only on LJ did the post show up again. And apparently there's no way to go into an admin panel and fix this; it's fully automated. Grr.

Monday, January 7, 2008


You might have noticed the shiny new section in the sidebar. Yes, I finally got around to making a blogroll.

...Well, more like a catalog of the RSS feeds I follow. Everything in that list is something I read religiously (which, given Google Homepage, is a much weaker statement than it used to be). I've seen blogrolls with hundreds of links to blogs whose names start to seem awfully similar, and which sound like they cover mostly the same content; those read more like a "recommended reading" list than a reflection of what the blogger actually reads on a regular basis.

Eventually, I'll flesh it out into a proper blogroll, but for now, I'm not experienced enough to evaluate a blog without following it for quite a while.

Edit: Also, I would have included my brother over at Mage's Plane, but he's just gotten started, so I'll give him a little time to build up steam. Right now what he gets is a tongue-in-cheek jab for using the same Blogger template as me, right down to the color choice. *shrug* Yes, it's a very common template.

Black vs. white backgrounds (also, chalk)

(No, this post is not about race.)

The other night, my friend 4pq1injbok and I were discussing the compelling, overarching, enormously important issue of black text on white ground vs. white text on black ground. (Woo, having spare time!)

Of course, each option is appropriate in different contexts. If you're printing something out, you'd better have a good reason to do it in white on black, because that wastes so much ink/toner. But for viewing on screens (including projection screens), I vastly prefer white on black because it reduces glare and seems to tire my eyes less. Mostly the former. Glare is a problem especially on very large projection screens: you can dazzle (in a bad way) an entire lecture class with a white-backgrounded Powerpoint, especially if the projector is having an off day. Not fun early in the morning.

What troubles me, then, is that there seems to be some kind of unspoken rule in academia that you can't use black backgrounds, and I'm not entirely sure why. I watched a dry run of a job talk a couple summers ago, and in the midst of everybody commenting on every facet of every slide, my mentor explained to me that there's just a certain way scientific presentations are done, which includes black text on white ground. Google is a telling example, in one sense: it's renowned for the "clean" look of its pages, which I'm sure has a lot to do with their white backgrounds. Perhaps there's a lot of pressure from older academics who are more comfortable with print on paper. Perhaps, also, white backgrounds make pictures easier to see, but you can fix that with a relatively narrow white border. I think we should take a hint from the movie industry; when was the last time you saw end credits in black on white?

(Also, using black backgrounds probably saves energy -- I don't know any numbers, but the savings must be considerable for large screens. There's even Blackle, an almost entirely black version of the Google homepage. Supposedly if everyone used it, we'd save ~750 MWh a year (hat tip for the calculation to ecoIron).)

Chalkboards are the only common medium I could think of where the default is light text on dark ground. (Not to say there aren't others; it's just that I couldn't think of them.) Nothing profound about it; chalkboards are this way because CaCO3 is white, and slate is dark gray. But it does raise an interesting question. Suppose you draw two circles on the board, and fill one in with chalk while leaving the other one empty. Which is black, and which is white? Do most people just seem to naturally agree, or is there some kind of consistent convention, or is there no consistency at all?

In my music theory class, and anywhere I've seen musical notation on a chalkboard, the convention is that quarter notes are filled/white while half and whole notes are empty/black. In print, it's the other way around, at least with regard to absolute colors. But in this case, you can define it in terms of filling, not color, so it's a bad example. I'm rather tempted to crash a game theory class on the day they discuss chess/checkers, or some kind of visual arts/design class on the day they discuss positive vs. negative space, or take a poll of professors, or some such.

Related linguistic issue: on the Improbable Research blog, a prof rants about the suckiness of Crayola's new chalk, complaining that "the new pieces are thinner, shorter, and don't write as dark" -- using "dark" to denote degree of whiteness. But "don't write as well" lacks specificity, and "don't write as lightly" would be interpreted the wrong way around. Sounds like "heavily" (or perhaps "cleanly") might be the way to go.

And finally (and randomly): hooray for profs who know how to use colored chalk well, to highlight information and obscure noise, without overusing it. More on this subject if I ever get around to reading Tufte's books.