Monday, December 17, 2007

Finals

I know it's been a while since I last posted. It's finals week, which means I'm basically studying all the time. I have a bunch of posts in the works, though, so there'll be more to read once finals are over.

I just took my first final, physics (8.02, intro electricity & magnetism). I felt really good about that one; everything seems to have come together clearly in my head in the past several weeks. I give the credit to the professor's very intuitive way of teaching and explaining everything, and to his fondness for conceptually difficult questions with easy math. That is how to teach physics, IMHO. (I have a lot more to say about the course format, TEAL ("technologically enabled active learning"), but that's a whole 'nother post.)

I'm not really so worried about tomorrow, when I have my midterm in calculus (18.02A). I'm in a kind of funnily-put-together course, where you spend the first six weeks reviewing single-variable calculus at high speed, and then follow the regular curriculum for multivariable calculus, finishing over the January independent activities period. It's primarily intended for people who took AP Calculus AB. I took AP Calculus BC and passed the AP exam with a high score, which would qualify me to jump straight into 18.02 (straight multivariable calculus), but at the beginning of the year I felt like I needed the review. I could have done without it, but it was nice to have. The only problem was that it put me farther behind in terms of math lagging physics (a perennial problem), but the math in my physics course is not actually that hard (e.g. all the line or surface integrals we ever have to do are simple cases that reduce to multiplication problems).
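
(To show what I mean -- a generic textbook example of mine, not a problem from either course -- take Gauss's law applied to a point charge. Symmetry makes the field strength constant over a sphere and everywhere parallel to the area element, so the flux integral collapses into a multiplication:

$$ \oint \vec{E} \cdot d\vec{A} \;=\; E \cdot 4\pi r^2 \;=\; \frac{q_{\text{enc}}}{\epsilon_0} \quad\Longrightarrow\quad E = \frac{q_{\text{enc}}}{4\pi \epsilon_0 r^2} $$

No actual integration required.)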

The one I'm worried about is chemistry (5.112), on Wednesday. Most of the content on the test will be drawn from the last several weeks of the course, and that's the material I understand the least. The course is taught by two different professors -- one does the first half and the other does the second half -- and the second guy is very difficult to understand, for various reasons which I will elaborate on in the future. It's a blessing that it comes last, really; I have about twelve hours this afternoon/evening for studying, and I don't intend to devote all of them to calculus.

MIT really treats its students very well during finals week. There are free breakfasts in all the dorms, as well as one in the lobby of the building where the biggest finals are held. (And not cheapo lousy free breakfasts, either -- good ones. This morning, for the first time all semester, I had eggs and pancakes! It was awesome!) There are a few free lunches and dinners scattered around, and all the food vendors in the Student Center are giving discounts. Alpha Phi Omega holds "Finals Coffeehouse", with snacks in a room of the Student Center that used to be a 24-hour coffeehouse but is now just a sort of miscellany room with tables. MIT Medical holds relaxation seminars, even. This is characteristic of MIT, really: they pound you into the ground academically, and then they reach down and lift you back up...so they can pound you again.

More content to come over winter break. I've got a lot of saved-up commentary on various aspects of the teaching here.

Sunday, December 9, 2007

What's This `Conlangery' Business? [Linguists and Conlangers, part 1]

[This will be part of a series of posts about conlanging and its interaction with academic linguistics.]

So, what is conlangery? It's not a popularly known art, so here's a primer for y'all who haven't ever heard of it.

The word conlang is short for "constructed language". Well-known conlangers include Ludwig Zamenhof (Esperanto), Marc Okrand (Klingon), and the revered JRR Tolkien (who once wrote that he invented Middle-Earth largely in order to give his beloved Elvish languages a place to live). We invent languages: spoken languages, signed languages, artificial siblings and alternative scripts for natural languages, artistic languages, logical languages, international auxiliary languages, mind-extending languages, you name it.

(I should warn you that conlangers, being naturally fond of playing with words, habitually `overgeneralize' and apply all kinds of grammatical forms to the word `conlang'. Just for starters, it's both a noun and a verb. A conlanger is someone who conlangs, what they create is a conlang, and the art in general is conlanging or conlangery. On top of that, the con- prefix has become productive, meaning it can apply to all kinds of invented/imagined things: conlangs, conworlds, conscripts (meaning writing systems, not military draftees), conreligions, consolarsystems, conbiology...the list goes on.)

The earliest known conlanger is Hildegard von Bingen, a 12th-century German abbess. She's best known for her gorgeous music, but she also invented a language called Lingua Ignota ("unknown language"), which supposedly came to her by divine inspiration.

The best known conlanger...well, it depends who you talk to, but most people will name either Zamenhof or Tolkien, and here we come to a major split in the world of conlangery: that between auxlangers and everybody else. Auxlang is short for "international auxiliary language", and not to make any sweeping generalizations here, but the sort of people who make auxlangs are also the sort of people who have fiery and/or highly unconventional political agendas, and a lot of them are also shamelessly self-promoting. Unfortunately, if you're auxlanging in earnest, this isn't really fixable. You have to want the entire world unified, in some sense, by your language. And you have to think your language is not only `good' enough for everyone in the world to speak, it's `better' than the dozens and dozens of old failed auxlangs. A certain amount of crankery is inherent to the practice of serious auxlanging.

(Quite apart from the above, the vast majority of linguists and conlangers have a horrible visceral reaction to the thought of losing most -- if not all -- of the world's linguistic diversity. Who wants to replace all that culture, knowledge, and sheer beauty with something necessarily bland?)

(Yeah, and, in case you couldn't tell, I'm totally biased, and I freely admit it. Auxlangs are not all bad; any conlangery effort bears some useful fruit.)

So what about the `rest' of the conlangers? People conlang for all sorts of reasons, but the one that really unites us, trite as it sounds, is love: love of language, its beauties, its intricacies, its elegances; and love of playing around with that in systems of our own creation.

This is not to say that there aren't divisions within the body of non-auxlangers. Probably the biggest group is the artlangers, like Tolkien, who conlang for aesthetics and elegance. (Not that all artlangs end up looking like Tolkien's Elvish languages, all full of L and R and vowels everywhere with nary a `guttural' consonant. Conlangers have as much variation in taste as the general population. Plus, for example, there's a lot to be said for a language in which you can swear effectively. Imagine trying to shit-talk someone in Quenya or Sindarin.) Many hundreds of artlangs have been made, or at least started. My own language Tlharithad is a young artlang, though I sadly haven't had time to work on it since the beginning of the semester. You can find quite a few well-developed artlangs, associated with the fantastic conworld of Verduria, at Virtual Verduria, made by Zompist. I have to apologize to the many, many, very worthy artlangers that I haven't linked to, but for non-conlangers, Zompist's work makes a good place to start.

On the flip side of the same coin, you have the engelangers -- engelang is short for "engineered language" -- who design their languages to achieve a particular goal. There are logical languages, like Lojban, which are designed to eliminate ambiguity. And there are other engelangs, whose design goals don't really fall naturally into groups. A seminal example is Ithkuil, which has about five times the information content per syllable of natural languages. In the words of its creator, John Quijada, Ithkuil is "systematically designed to blend a high degree of communication of cognitive intent and meaning with a high degree of efficiency, i.e., to allow speakers to say a lot in as few syllables as possible." While I'm not intimately familiar with the language, I know the making of Ithkuil involved a lot of mindbending reorganization of cognitive concepts. For example, if you're indoors, the spatial axes around which you organize your speech are placed with respect to the long axis of the room! (Of course, JohnQ freely admits that Ithkuil is extraordinarily difficult to learn, and has in fact created a somewhat simplified version, called Ilaksh, for those of us without superhuman vocal tracts.)

It can be argued that artlangers are just engelangers whose design goal is beauty, rather than something more conventionally associated with the words "design goal". It's a little like the distinction between architects and sculptors: you have people who are clearly one or the other, people who are mostly one with a little of the other, and people like Michelangelo or Maya Lin who straddle the boundary so well that no one ever finishes arguing about which category they fall into. (And, as good linguists, we have no problem with this, knowing that strict definitions are artificial constructions, and everything is better described in terms of generalizations from prototypes.)

Next up: the enmity, such as it is, between conlangs/conlangers and professional linguists / linguistics research.

Thursday, December 6, 2007

Another essay on killing lab mice

Ran across this essay, which aired on NPR's All Things Considered five years ago. I especially liked this bit:
When I started, sometimes I had to go walk around the lab building afterwards to take a breath and gather myself. Now it still upsets me whenever I have to kill mice, but I'm used to it. I think that probably some scientists do get over it completely; the mice become tools. Other scientists hire technicians to handle and kill the mice. The researcher works with the cells of the mouse once it's killed, but never encounters the living mouse.

But that's not how we do things where I work. In fact, the people I get along with best in my lab sometimes even talk to their mice. While we're herding a mouse towards one side of a cage or another we might say, "Come on, sweetie." Maybe if we're injecting a mouse with something and it squirms, we say, "OK, OK. It's going to be just a second." When we put it back in its cage, we might say, "There you go." And while the mouse is sniffing and inspecting its cage mates, we say, "There's your buddies." I was in the mouse room with a colleague of mine and I noticed her doing this. And I said, "You talk to your mice, too." She said, "Doesn't everybody?"

It really captures the tension between mouse-as-tool and mouse-as-animal. And the -- well, I don't want to call it a happy medium, but the medium that most good scientists find. I dare say that working exclusively with cells, and never encountering the organisms they come from, has got to be dissociative. Seems like it'd be a good idea to keep that perspective in the back of your head, that these little spots in a dish came from a living animal, and serve multifarious purposes in that animal, and do things besides sit in culture and express your marker protein or what have you. They grow, they thrive, and above all, they are more complicated than they seem, even when you take that last into account.

A scientist who gets too debilitatingly upset over the death of a mouse will never get anything done, and a scientist who doesn't care about the mice as living creatures will have less perspective and, in all likelihood, get worse results for not being personally involved with their care.

I don't recall ever talking to the mice, but I didn't handle them all that much. Pretty much all I did was scruff and snip, and then they couldn't listen anymore. If I have to do more involved mouse procedures in the future, I probably will get in the habit of talking to the mice, especially given that I already talk to inanimate objects when I'm working on them.

Stir-fried wikipedia, anyone?

The highly estimable Language Log points out yet another instance of Chinese menu translator ingenuity. Wikipedia: now with even more uses!

For more on this phenomenon, see Engrish.com (marginally NSFW), which is devoted to collecting this sort of unintentionally hilarious bad English, mostly from Japan. Aside from just giggling at the bad translations, it's interesting to occasionally catch a nugget of linguistic insight. Well, I mean, the fine folks at Language Log can catch them all the time, but I'm an amateur, so I have to take what I can get.

A while ago, Engrish.com featured a shirt (which I unfortunately can't seem to find now), bearing the sentence "We are dumb and haven't intelligence apes." And I jumped out of my seat, because it made perfect sense to me. In English, that sentence means "we are dumb and do not possess apes of intelligence." Presumably the shirt-writers meant "we are apes who are dumb and do not possess intelligence." (Perhaps dumb here means mute rather than unintelligent -- a relatively subtle distinction, but I've seen Japanese<==>English dictionaries that do very well at this. Or, y'know, maybe it's just redundant.)

What English handles as a relative clause, "apes who do not possess intelligence", Japanese handles by effectively turning the verb "not possess intelligence" into an adjective. To say "we are apes who are dumb and do not possess intelligence" in Japanese, you say something along the lines of "we are dumb and non-intelligence-possessing apes", which is clearly the origin of the T-shirt.

Of course, there's also the complete nonsense, e.g. "I smell the smelly smell of something that smells smell", and the grammatically-correct-but-thematically-inappropriate text, a la this classic example.

Monday, December 3, 2007

So, where do animal protocols come from?

My dad asked, in a comment to "My Experience with Lab Mice", why the accepted protocol is to gas them with CO2 instead of nitrogen. He writes:
Anoxia through O2 starvation is demonstrably painless. Many research pilots go through it, to the point of unconsciousness, and the general comment on recovery is "Did something happen?"
CO2 overdose triggers the breath reflex, which O2 starvation does not, at least in humans.

I did a fair bit of searching, but all I could find were protocols describing various euthanasia methods; I couldn't find anything that motivated them. Sure, I found some discussion of why toe clipping is discouraged, but that's because toe clipping isn't The Method for identifying mice (anymore). CO2 gassing appears to be one of The Methods, if not The Method, for euthanizing a bunch of mice -- hence, not much debate or discussion.

I don't know how many readers I've got, but I'm throwing the question open: who decides what the `official' animal protocols are? (Does it differ if you're a university vs. a company?) How do new methods get invented, approved, and adopted? What happens when an alternative method gets officially discouraged/banned? Where can one go to find out all this information about a specific protocol? And if it's the case that O2 starvation by N2 surfeit is painless, why isn't it The Method?

Gmail, how I love thee

Forgive me for taking a moment to extol the virtues of Gmail. I appreciate it a lot more since coming to MIT, because I get about ten times as much email as I did in high school. And no, I'm not getting paid for this. But people here complain a lot about the volume of email, and if more people used Gmail, the amount of sniping would drop fivefold.

Gmail has tags instead of folders. Tags are so much better than folders! I have two major ways I categorize my email: by where it was sent to (me directly, my MIT email account, certain mailing lists) and by why I should save it (it points to a resource, it contains someone's contact info, it's one of those "please retain this email for your records" messages). So, to a first approximation, pretty much all the messages I save fall somewhere on a two-dimensional grid. A lot of messages fall in more than one place, and there are miscellaneous tags, and all kinds of stuff you just can't do with folders.
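
For the programmers in the audience, here's a toy sketch of the difference (my own illustration -- nothing to do with how Gmail is actually implemented): a folder assigns each message to exactly one bin, while tags give each message a whole set of labels.

```python
# Toy model: a folder system forces one bin per message; tags allow many.
# (My own illustration -- not how Gmail actually stores anything.)

folder_of = {}   # message -> exactly one folder
tags_of = {}     # message -> a set of tags

msg = "housing-confirmation-email"

# With folders you must pick ONE place for it:
folder_of[msg] = "MITmail"               # ...so it can't also live in "records"

# With tags it sits at a point on my two-dimensional grid:
tags_of[msg] = {"MITmail", "records"}    # where it was sent, and why I'm saving it

# And pulling up everything on either axis is just a filter:
print([m for m, tags in tags_of.items() if "records" in tags])
```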

(I'm always a little annoyed by blog platforms that say things like "This entry filed in [list of tags]". They're just trying to pretend tags are the same as folders, no idea why. Inertia perhaps? Bah! Embrace the tags for what they are!)

Since Gmail has tags instead of folders, to get things out of your inbox you just hit "Archive". This puts the conversation in a big all-purpose bin, and you can find it later by searching for text, tags, or sender.

There's also a really nice filtering setup. For instance, I have all email sent to my MIT address forwarded to my Gmail, and I have Gmail tag it `MITmail' so I know what was sent where. I subscribe to a lot of mailing lists, and I have many of those automatically tagged as well. One list in particular, the Reuse list (for recycling / handing off old computers, furniture, unneeded coupons, etc etc etc...), gets a lot of traffic, and I don't generally want to see Reuse messages in my inbox unless I've got free time to go pick something up. So I have Gmail do two things: tag the messages `reuse', and archive them directly, so they don't appear in my inbox and get in the way of more important things.
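
Written out as code, the logic of that Reuse filter is something like this (my own sketch; Gmail filters are actually set up through a web form, and the list address here is just my stand-in):

```python
# Sketch of the Reuse filter's logic. Hypothetical: Gmail configures
# filters through a form, and the list address below is made up.

def apply_reuse_filter(message):
    if message["to"] == "reuse@mit.edu":
        message["tags"].add("reuse")     # tag it...
        message["in_inbox"] = False      # ...and archive it (skip the inbox)
    return message

msg = {"to": "reuse@mit.edu", "tags": set(), "in_inbox": True}
print(apply_reuse_filter(msg))
# {'to': 'reuse@mit.edu', 'tags': {'reuse'}, 'in_inbox': False}
```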

On top of tags, you can also mark conversations (or individual messages) with a little yellow star that shows up by the subject line. This is really helpful because I keep several types of messages in my inbox (instead of archiving them): reminders for events, reminders for things I need to do, and messages I'll need to refer to within the next two weeks or so. Reminders get starred, so they stick out visually and I actually get reminded of them. References don't get starred because I only need them when I'm looking for them specifically.

Probably the best feature, though, is this: Back-and-forth emails with the same subject line are organized into conversations. This is just fantastic. It keeps everything related to one topic in one place, instead of having individual messages in several threads scattered randomly throughout your inbox. You can read an entire thread on one page. If a new message appears while you're backreading the thread, Gmail pops up a little "Update Conversation" box so you don't reply redundantly. And since the whole thread is on one page, Gmail hides each message's quoted text, all the lines that begin with piles of >>>s. And you can delete an entire flamewar with one click. Can't beat that.
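
As far as I can tell from the outside, the grouping is keyed (at least partly) on the subject line with the reply prefixes stripped off. A toy version of that idea -- my guess at the mechanism, not Gmail's actual algorithm, which is surely cleverer:

```python
import re
from collections import defaultdict

# Toy threading: group messages whose subjects match after stripping
# any pile of "Re:"/"Fwd:" prefixes. (My sketch of the idea only --
# not Gmail's actual algorithm.)

def normalize(subject):
    return re.sub(r"^(?:\s*(?:re|fwd?)\s*:\s*)+", "", subject,
                  flags=re.IGNORECASE).strip().lower()

threads = defaultdict(list)
for subject, sender in [("Pset 9 question", "me"),
                        ("Re: Pset 9 question", "friend"),
                        ("RE: re: Pset 9 question", "me")]:
    threads[normalize(subject)].append(sender)

print(threads["pset 9 question"])   # ['me', 'friend', 'me'] -- one conversation
```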

I know a lot of people are concerned about the direction Google is headed, or that they will end up "owning all the information in the world". Yes, there are legitimate concerns, but I think the danger is way overblown, and Gmail seriously saves me a lot of time and aggravation. (Are you listening, stupid Yahoo email account that I keep for signing up for potentially spammy things? Grr.)

Ping!...Pong!...

Many, many thanks to Coturnix, of A Blog Around The Clock, for linking here!

I realize this happened several days ago. I plead guilty to falling out of the habit of checking my Technorati page, because nothing ever happened on it.

And, I may as well take this opportunity to apologize for the recent lack of activity. Homework is no excuse; there are lots of people out there who are busier than MIT undergrads (though you'd have a hard time getting some of my friends to agree to that).

Monday, November 26, 2007

[Links Dump] Braille and lolpostmodernism

Alpha Phi Omega goes to the Braille Press, sticking transparent braille-letter stickers on children's books so that sighted and blind readers can read together. What a simple, elegant, and effective method!
(I've harbored dreams of becoming a Braille transcriber, but that takes a lot of practice and training, and the Braille Press volunteer work is something anyone can do right away.)
(Also, yes, I am planning to pledge APO next semester, which is now very soon. It's kind of a stupid story why I didn't pledge this semester.)

Meanwhile, Jeph, author of the fabulous webcomic Questionable Content, offers up a great selection of postmodern cat macros and other-noun macros.

Tuesday, November 20, 2007

Who Pays For My Food

One of the great things about being at a major research institution is that you can always find work as a guinea pig. There are flyers all over the place, advertising this or that experiment, usually for $10 an hour, which isn't half bad for one-time easy unskilled work. I've been a subject in a bunch of different neuro/psych experiments, and it actually does a decent job of paying for my food. It's not steady work, but there are benefits to that -- I don't have to commit to anything major, it's not very many hours a week, I can fit it around my own schedule, and I get to see what lots of different labs are doing.

Don't worry, I'm not imbibing dangerous substances while guys in lab coats watch to see if I break out in a rash or something. Most of what I've done involves looking at a screen and pressing buttons. The most biologically involved thing I've done is get fMRI'd.

I like to think of guinea pigging as "learning about experimental methodology from the inside". Reading the Materials & Methods section of a paper is really boring, and it's often hard to get an idea of what an experiment was actually like. Now that I've been through a bunch of experiments, I'm developing a sense for what a protocol will feel like from the subject's point of view, and I know something about how to design an experiment so it isn't agonizingly boring. I've also learned a bit about recruiting subjects. All of this, I hope, will come in useful when I start doing my own research, if I'm using human subjects. Or maybe even if I'm using monkey subjects -- they deserve a workable interface too, after all.

I have done a negotiation roleplay while thinking I was on caffeine. I have sat and read in a chair for hours while specialized earplugs play sounds in my ears and measure the echoes of my ear canals. I have identified grey shapes on colored backgrounds as quickly as I possibly can. I have described upwards of a hundred superballs. I have lain in an fMRI for two hours watching sentences full of made-up words. I have had my auditory and tactile thresholds tested and retested and re-retested.

(By the way, getting fMRI'd was actually quite nice. You're in the machine for two hours, and they want you to move as little as possible, so they take pains to make you comfortable, with padded head restraints and a foam block under your knees and a blanket. It was so comfortable that, after a while, it felt like my body was disappearing because it wasn't sending my brain any discomfort signals. Which was incredibly relaxing, but also trippy.)

The most exciting thing, though, has been when I'm actually interested in what the experiment is about, and when I can make contacts this way. A couple weeks ago, I spent a Saturday doing loads of different tests in a psycholinguistics lab. One of the experiments had me listening to ten minutes of sentences in a simplistic constructed language, trying to decipher where the word boundaries were. (Sound easy? Listen to ten minutes of sentences in a language you're not familiar with, then come back and tell me with a straight face that it didn't sound like a continuous stream of speech.) I was surprised to find a conlang being used in a serious research context, since I'd been given to understand that most linguists are dismissive of conlangs and conlanging in general. But I asked about it, and we got into a neat discussion, and then I asked if they were looking for undergrads, and they said yes! Exciting! More on this in a subsequent post.
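
Back to that word-boundary task for a second: in case you're wondering how anyone could find boundaries in an unbroken stream at all, I assume the experiment was in the spirit of the classic statistical-learning studies, where boundaries hide at dips in syllable-to-syllable predictability. Here's a toy version of that computation -- my reconstruction of the general idea, not this lab's actual stimuli or analysis:

```python
from collections import Counter

# Toy transitional-probability segmentation. In this made-up stream,
# each syllable inside a word fully predicts the next one, while the
# syllable after a word boundary varies -- so any dip in predictability
# marks a boundary. (My reconstruction of the general idea only.)

words = ["bidaku", "padoti", "bidaku", "golabu",
         "padoti", "golabu", "bidaku", "padoti"]
syllables = [w[i:i+2] for w in words for i in range(0, len(w), 2)]

pair_counts = Counter(zip(syllables, syllables[1:]))
first_counts = Counter(syllables[:-1])

def tp(a, b):
    """P(next syllable is b | current syllable is a)."""
    return pair_counts[(a, b)] / first_counts[a]

segmented, current = [], syllables[0]
for a, b in zip(syllables, syllables[1:]):
    if tp(a, b) < 1.0:               # predictability dips -> posit a boundary
        segmented.append(current)
        current = ""
    current += b
segmented.append(current)
print(segmented)   # recovers ['bidaku', 'padoti', 'bidaku', ...]
```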

Sunday, November 18, 2007

Followup: Toe clipping and humane treatment

I was looking for a diagram of the toe clipping code I described in the previous post. Apparently, though, toe clipping is no longer considered an acceptable method of marking animals, except in unusual circumstances that preclude using other methods. The 2004 edition of Current Protocols in Neuroscience says:
"Lifetime identification of rodents has traditionally been accomplished by coded digital amputation (“toe clipping”); however, this procedure is considered by many to be inhumane and ethically unjustifiable except under special circumstances."

And according to Cornell's document on the subject, you need to get the approval of the Institutional Animal Care and Use Committee (IACUC) before you can use toe clipping, and there are all kinds of restrictions -- age of the animal, total number of toes clipped, etc etc.

I don't know why it didn't occur to me, back in 2005, that there must be other methods of identifying animals. Now that I've looked at a bunch of documents, it seems so obvious. Ear tagging, tattooing, metal rings, reusable microchips...there's all kinds of stuff out there. It's kind of nice to see that new methods are being invented and popularized, and (hopefully) driving out the less humane methods that used to be the norm. Especially for something as routine and universally necessary as numbering your animals.

I still wish it were possible to keep mice in less crowded conditions, in cages that actually let them have a life. Would you like to have lived your entire life in a featureless environment the size of a couple of handicapped bathroom stalls, eating one kind of food and scrapping with your siblings? But keeping animals is already expensive, and if there's only so much money to be spent on improving the daily-life conditions of lab animals, I'd rather it were spent on chimpanzees than mice.

[LJ Repost] My experience with lab mice

Recently, I visited my friend Rachael. We were talking about our respective summer activities, and since I was working in a neurobiology lab at the time, the topic of animal research came up. Rachael is many things, including (1) highly empathetic; (2) hyperenthusiastic about animals and the environment; (3) very protective of, and caring for, the same. The Rachael I knew back in elementary school probably would have had a fit at the thought that her friend could be involved in something as barbaric as animal research. But we're not eight years old anymore, and instead of going on a long animal-rights diatribe, she just asked me how I dealt with it. I couldn't really give a good answer in realtime, without a chance to think through it. I said something along the lines of "I just push it aside. Yes, it's disturbing, but we try to be as humane as possible, and I push it aside."

It is difficult to deal with the suffering and death of research animals.

I've spent two summers under grad students in a certain neurobiology lab at Stanford. Luckily, the only organisms we worked with were mice and bacteria. (And do I ever feel sorry for the people who work with monkeys.) This post will be about mice. Bacteria are not worth discussing.

Actually, the first time I worked with mice was the summer/fall of 2005, when I interned in a different neurobiology lab, under a scientist named Helen. I only went in once a week, and I hadn't taken biology yet so I hardly knew anything about what we were studying, couldn't really make a substantive contribution...blah blah blah. I'd had pet hamsters before, but that was the first time I had to deal with mice in a research setting.

I remember how freaked out I was the first time I watched Helen handling the mice. To pick them up, you pull them up by the tails and then scruff them. To obtain a very small tissue sample for genotyping, you cut off a bit less than a millimeter of their tail, with a razor blade. To give them each a unique identifying number, you use a predefined code that involves cutting off toes (and sometimes punching holes in ears, though we didn't use that). The mice would get frightened, squeak, writhe, urinate, try to escape, try to bite, and eventually, bleed. Each mouse didn't bleed much, but I was quite surprised at the amount left on the workspace when we finished numbering and tailcutting a couple cages' worth of mice.

Thankfully, that was as much as I had to do while working with Helen. I did do some sectioning and staining of brains, but they had already been dissected out, and I was able to think of them as just tissue. Meat. I suppose I knew, academically, that someone had had to kill mice and dissect them to obtain these brains, but I was one step removed from that and it didn't really hit me. They were just excised brains, just grey lumps in a dish; they didn't look at me with cute little button eyes and squeal, demanding to be put back in their cage with all their paws intact.

At one point, Helen and I walked past a guy who was doing a more elaborate procedure. The mouse was on its back on a styrofoam block, pinned by the paws -- essentially crucified flat. Its chest was cut wide open, and I could see its heart beating. There was blood everywhere, and there were lots of tubes going back and forth from the mouse to this little whirring apparatus in a corner. It would have thoroughly blown my mind, but I didn't catch more than a brief glimpse. Anyway, except for that, my mouse involvement was relatively pain-free in that lab.

I got a bit of a shock when I started in the second lab, in summer 2006 (working with a grad student named John). We were doing a project that involved acutely isolating cortical astrocytes. Translation: killing young mice, dissecting out their cerebral cortices, and processing these in various complicated ways to finally end up with an isolate of single cells of a certain type. That procedure (`prep' for short) takes pretty much all day.

Hah, I'd thought tailcutting and toecutting were bad? We were working with very young mice, recently weaned or not yet weaned. I think the age range was roughly from 1 to 10 days. They were still pink and didn't even have their eyes open yet. With adult mice, you gas them first, but to kill a young mouse, you behead it with scissors. (I think the protocols are similar for rats, except sometimes you use this guillotine-looking thing. I never used that, or saw it used.)

You behead the mouse with scissors whose blades are about an inch long. The body twitches, and the head falls onto the table, and it twitches too. Blood wells and drips out of the body, reddening about a square inch of the absorbent pad you do dissections on. (You get quite a bit more blood from an adult mouse.) You discard the body in a biohazard bag. Meanwhile, the head is sitting on the table, looking for all the world like a live mouse, except there's empty space behind its neck. The jaw opens and shuts, which makes the head rock back and forth. It takes about ten seconds to go quiescent, and then you unroof the skull and dissect out the cortex.

Sometimes, when you put the scissors up to their neck, they squeal and put their little hands up, and grasp the blade, and you can't behead them without cutting off their fingers.

The first time I watched John do it, and the first time I did it myself...I'd like to say I felt faint, or nearly threw up, or something overt like that. But I just felt a deep sort of quiet horror that didn't lend itself to being expressed that way. I didn't feel a physiological effect, like faintness or nausea; just quiet horror and mental revulsion. But I wondered, what was wrong with me? Why wasn't I more upset? As unpleasant as the experience was, I wanted it to feel more unpleasant. I didn't want to feel the beginnings of numbness and desensitization.

I did an awful lot of preps that summer, about two per week on the average. And don't get me wrong, I enjoyed most of the tissue culture work, the part where you're working with bits of tissue or cells in a tube, instead of with (mostly-)entire animals. The summer's work as a whole was fun and challenging and rewarding and all sorts of other awesome adjectives. But I was doing two preps a week, often under tight time constraints, so I didn't spend much time hyperventilating over dead mouse pups. I didn't actually become numb, but I grew much better able to put the horror aside quickly and move on to the next thing that needed to be done. And then after five minutes, the disturbing part was over, and there were just tissue bits in a dish. Meat.

One thing that helped was perverse, macabre humor. It made me realize anew what awful things we had to do, while helping me cope and smile. I guess everybody got into the black humor to some extent -- I sure wasn't the one who put up the "Dr. Kevorkian wants YOU to keep the euthanasia area clean" poster. A lot of people were in the habit of referring to older pups as `pupcorn', because they would jump around enthusiastically when you opened the cage, and they could easily jump high enough to escape if you weren't careful. But my favorite was a random thing that happened to me. Immediately after beheading the pup, you cut away the skin on top of its head using small scissors. John's small scissors squeaked. The first time I used them, I jumped about a mile all "OMG OMG OMG it's still alive it squeaked WTF OMG AAAAAAAACCCCKK", until I realized that the mouse head had no vocal tract, so it couldn't possibly be squeaking. After that, I could always laugh at myself during a dissection.

This summer (2007), I was back in the same lab, working with a different grad student (Lynette). I didn't have much trouble re-acclimating to the lab, the preps, and the mouse dissections. The deep breathing, the numbing, and the humor -- all that kept working, thankfully.

I watched another student do a perfusion, which involves pumping out all a mouse's blood and replacing it with saline solution, then pumping that out and replacing it with preservative. This turned out to be the same procedure that I saw back when I was with Helen, with the mouse crucified, heart beating, all that. I watched a good part of it. Got a little nauseous, but not really that much. I knew the mouse was unconscious and completely unaware of what was happening to it; everyone takes great care to make sure that the animals are well and completely anaesthetized. Sure, it bothered me, but not overly much; and pretty soon the spinal cord was being extracted, and it was back to the "it's just tissue" stage.

I watched Lynette euthanize four cages of mice. (I participated obliquely, by carrying cages and suchlike; I passed up the offer to participate directly.) We put a cage into a gas chamber and turned on the CO2. The mice gradually slowed down and got quiescent, and eventually they went from asleep to dead. You could tell they were dead when their eyes turned green. After gassing the mice, you had to verify that they were dead. This meant holding the tail with one hand, pinching the neck with the other, and breaking the neck. I declined the offer to try my hand at this, though I had the feeling I probably should have gone for it.

While I watched the euthanasia, I tried not to block it out or push it aside. I wanted to feel horror and revulsion. I wanted to be upset. I made a point of looking at their faces when Lynette said she always tried to avoid it. Things like that. I wanted to make sure I hadn't gone numb.

The next time we had to euthanize mice, we might have had as many as 50, all piled in a cage. Normally, when you're CO2ing them, they just sort of get sleepy, move slower, and then go unconscious. This time, about 3/4 of the way there, a lot of them suddenly moved/twitched at the same time. I don't know if it was coincidental, but it was scary. I participated concretely this time, by breaking the necks of the last six or so. They're so small and soft and fragile, and their necks break so easily. It's easier than breaking a toothpick.

It's not arbitrary. These mice are killed because they are not useful. They don't carry enough copies of the mutation we need. Some of the mothers get in the habit of having litters and then eating them. (By the way, that's a natural mechanism: when the environment is lousy for raising pups, the mother will eat them to conserve protein, protein being not exactly abundant in a mouse's natural diet.) Litters are the wrong age at inconvenient times. One of the females was pregnant, but the pups would be too old by the time anyone could do anything useful with them, just because of everyone's schedules. No one keeps extra mice around for their own sake. It's a hassle to take care of them, and it wastes funding better spent on expensive experimental apparatus and reagents. I accept all of that.

(I would have offered to adopt them, but I'd asked about that at Helen's lab. For one thing, mice with poorly-understood mutations make dangerous pets. For another, it's a bad idea for mouse researchers to keep pet mice of any kind, in case the pet mice get a disease and the researcher carries it into the mouse colonies at the lab. Apart from all of that, the sheer volume of mice involved would be prohibitive.)

Relatively speaking, I felt OK killing mice for a prep, because they were dying for a reason, and their cells were being put to good use. Even part of them lived on; though we killed the mice, we spent every effort coaxing their glial cells to grow and thrive. They were contributing to science, and might one day contribute to human medicine. But the euthanasia was useless and pointless, and that was what got me. These mice were not being used for anything. They were just extra, and there was no room for keeping extra mice.

Fortunately, there isn't any kind of macho culture around this in the lab. You don't get ridiculed for being upset about the animals suffering and dying. At most, you get a few odd looks, and a suddenly solicitous mentor. We spent half an hour discussing how badly we felt for the people in another lab who had to kill a monkey. They had known this monkey, had worked with it for months, had trained it to do things, had even named it. And now they had to kill it and extract its brain. We were very glad we were not them.

This post is a combination of two LJ posts. Here are the comments to the first and second original posts.

[LJ Repost] War on The Cult of Genius, The Cult of Theory, and The Cult of Not Biology

I started working on this post a loooooong time ago -- back in February of this year, when dinosaurs roamed the earth. I wrote up about 4/5 of it, was called away, and forgot to ever return to it. So yes, the blog posts I'm linking to are several months out of date, and I'm sure the discussion progressed quite nicely without me. But it's not like this issue will go away anytime soon.

First salvo fired by Julianne Dalcanton of Cosmic Variance. She attacks a misconception in the physics community: if physics is actually difficult for you, if you're not Feynman-Einstein-Hawking smart, you are pretty much worthless as a physicist. You are only fit to do low-energy, experimental, or otherwise `lowly' work. You would be better off spending your time teaching more sections of freshman mechanics.

Wrong.

Apparently this misconception is, unfortunately, very widespread among physics people at all levels, and it leads to talent drainage as people decide they just don't have what it takes and head off to some easier field. The vast majority of useful physics work is done by people who aren't off-the-charts geniuses (and this is true, to a lesser extent, even of the revolutions that individual geniuses catalyze; Einstein was by no means solely responsible for the theory of relativity). Physics is hard, and if it's difficult for you, that doesn't mean you're stupid or unworthy. Welcome to scientific inquiry.

Score one for sanity. (Do yourself a service and go read the original entry; my summary doesn't nearly do it justice.)

Second salvo fired by Chad Orzel over at Uncertain Principles. He attacks the misconception that there is a Great Chain of Being in the physics department, and the more theoretical your work, the higher you rank. Low-energy experimentalists are right down there with biologists (gasp!). You're stupid if you have a hard time with algebraic topology, or if you spend a lot of time fine-tuning apparatus instead of grandly theorizing about the universe.

Wrong.

This misconception is also widespread. Supposedly, the farther removed your work is from `reality', the harder it is. As Orzel points out, a lot of the most difficult work is in experiment, where you HAVE to pay attention to reality. None of this "setting inconvenient constants equal to 1". A lot more of the most difficult work is in integrating theory with reality. The level of abstraction or mathematical content (which does correlate with incomprehensibility) does not determine value; not even close. Also, what theorist could come up with this clever use of post-it notes?

[I'll add the following to Orzel's points: One of the things that a prof I've worked under likes about biology is that Experience Matters. This is true at all levels. It takes a fine hand, lots of practice, and an acquired intuition for how reagents/cells/tissues/animals behave, to do complicated procedures properly and get good results. And when you get curious results, it takes experience (and a good mental database of papers) to think of good reasons why that result happened, and especially to think of what followup experiments to do. It seems like this should be true of most experimental work, though I only have direct experience in cell-bio and linguistics.]

Score two for sanity. (Again, do yourself a service and go read the original entry.)

I will attempt to fire a third salvo, though (a) I'm not third, more like twentieth, especially if you count comments discussion and (b) I don't have enough experience for my contribution to be worthy of the title "salvo".

*cracks knuckles*

I declare that there is no Great Ladder of Scientists, going biologist <<< chemist <<< physicist <<< mathematician. Further, there is also no Great Ladder of Biologists, going ecology-level <<< organism-level <<< cell-level <<< molecular. Generally, height on these ladders is associated with abstraction, level/volume of math involved, and smallness of what you study. That, too, is plain wrong.

Sure, it's harder to visualize molecules bouncing around and reacting than it is to visualize zebras bouncing around and getting eaten by lions. That's not the point. Molecules may be smaller, but zebras, by virtue of being made of zillions of the most complex molecules in existence, are complicated. The bigger and more biological the entities you study, the more processes are going on at once. Molecular interactions are hard to model because we don't have an intuition for how things behave at that microscopic level (which is to say, weirdly). Zebra interactions are hard to model because there are so many variables, and the same is true of organs, and tissues, and cells, and so on.

Also, it's often said that microscopic work is difficult because gut instincts are wrong. That's a fair point. Instincts aren't supposed to be `right'; they're supposed to keep you alive and breeding. And yes, it is difficult to imagine quantum particles going around doing their quantum thing, because "their quantum thing" is so at odds with our daily experience. But the same thing is true of molecular biology. If you think about proteins going around in a cell and reacting with each other, it's likely to play out like a stately dance in your mental theater. In reality, there's an awful lot of aimless random wandering, mistakes, and awkwardness between proteins. It's less of a symphony and more of an enthusiastic but unprofessional pub-session. In ecology or meteorology, gut instinct fails just because of the huge surfeit of variables and random factors. In cognitive science, gut instinct fails because we're not optimized to understand ourselves, and especially because everyone's used to computers, which don't work very much like brains. Etc., etc. -- gut instinct fails for different reasons in different fields.

It's worth admitting that some sciences are younger than others, and the easy problems get solved first. The difficulty/scale of a field's "big problems" will be proportional to some function of how many scientist-hours have been poured into that field. But that doesn't mean new fields are inherently easier (or harder); it just means they're new. In several years they'll be at about the same level of difficulty as the fields people have been pursuing since they could walk.

The fourth salvo, "humanities people do not rank just below worms", is left as an exercise for the reader.

Here are the comments from the original LJ post.

Saturday, November 17, 2007

Let's get down to business

What can you expect to find here? I intend this blog to be a serious science blog, one that'll challenge me and have me writing at the top of my game, and hopefully one that will interest and inform you as well. Like I said earlier, take a spin through ScienceBlogs to see who I'm emulating.

I plan to mostly talk about science. This includes interesting new papers, random neat things I encounter in class, what I'm doing research-wise. I don't mean that this blog is supposed to be entirely Serious Business (tm), but I'm aiming for `mostly'.

The first several posts will probably just be reposts of the best content from my LiveJournal, the ones I feel deserve to be in a more serious context and not rubbing shoulders with "OMG this homework is so stressful, eek".

F1RST P0ST!!!

Welcome to the Dendritic Arbor! I'm Kelly, aka Alioth, currently a (relatively) bright-eyed and bushy-tailed freshman at MIT. I'm interested in all sorts of fields, primarily the ones you see in the blog description. I haven't declared a major yet, but I know I want to go on to grad school and then academic research, and I have a fairly good idea of what sort of research I want to do. So at the moment I'm debating whether to officially major in Bioengineering, Brain&Cognitive Sciences, or (most likely) some combination of the two.

And why, you ask, am I blogging? There are several reasons: an excuse to procrastinate, because it's neat to be able to subscribe to my own RSS feed, because I think it's deplorable that there isn't already a blog called "The Dendritic Arbor", because I read so many blogs that I'm starting to think this blogging lark is something I can actually do (ha)...lots of bad reasons.

But what's the good reason? I've been maintaining a LiveJournal since early in high school, so I have some little experience at blogging. It's pretty thorough, but the vast majority of the content is just me blathering about my personal life, which is not at all interesting to people who don't know me. Occasionally, though, I've made a post that I thought was really worth reading. Perhaps this is arrogant, but I've even regretted that those posts get little traffic, and have guilt-by-association with LiveJournal, the forum of stereotypical pretentious emo teenagers. I like to think I grew out of that when I graduated high school, lo these many months ago.

On top of that, my daily readings page is largely populated by the good folk over at ScienceBlogs. I greatly admire their work, and for a while now I've had an urge to emulate them. Of course, I'm just a first-term undergrad and I naturally don't expect to do nearly as well as grad students, tenured full professors, and big shots from PLoS. We'll see what comes of this little venture of mine.