Forty-nine years ago. This tribute is reposted from last year:
I have blogged earlier about the book by neuroscientist Sandra Aamodt and have discussed there in passing the pioneering work by Weizmann Institute scientists Eran Segal and Eran Elinav on the individual microbiome (our “gut bacteria population”) and how it affects blood sugar levels. Now the duo has teamed up with editor Eve Adamson, and together they have put out a popularized book:
I am familiar with some of the original papers in top scientific journals—the book is of course much more readable, and the authors and editors have done a good job of presenting their work in lay language while preserving the broad strokes of their work.
The bottom line of their research is this: each of us carries a whole ecosystem of bacteria in our intestines, which help us digest and absorb food. The specific mix of bacteria varies between individuals, and hence so do our responses to different foods. While weight gain/loss is best seen as an outcome—one aspect of overall health—glycemic response, the change in blood sugar levels after a meal (“postprandial glucose response”), is sufficiently rapid that it can be monitored in real time (e.g. with a continuous glucose monitor) and correlated with what the person ate (logged in a smartphone app). Doing this for thousands of people is a big-data project par excellence, and this is how computer scientist Segal teamed up with gastroenterologist Elinav.
But this isn’t where it ends. Gut bacteria populations can of course be obtained from stool samples, and subjected to analysis—another aspect of the massive big-data puzzle. Moreover, some of what they infer from the data can be checked in an animal model—for instance, certain gut bacteria can be administered to sterile mice and their weight gain (or lack thereof) in response to certain food mixtures tested on a much shorter time scale than would be possible in slow, relatively large, and long-lived mammals like us.
The duo brought different, complementary perspectives to the problem, not just scientifically but personally. Elinav always loved to take machines apart and see how they fit together (fittingly, he did his military service aboard a submarine), then became fascinated with living organisms. He ended up studying medicine, then specializing in internal medicine. During his residency, he was exposed to the human suffering caused by “metabolic syndrome” (the term given to the combination of severe obesity, adult-onset diabetes, fatty liver, hyperlipidemia, and the complications thereof). He realized that they spent all their time as doctors dealing with the consequences and complications rather than with the root cause.
Segal, on the other hand, was an avid long-distance runner in his spare time. He started experimenting with different nutritional approaches to improve his endurance as a runner, assisted in this pursuit by his wife, a clinical dietitian. As he dove deeper into this and observed the diets of fellow runners, it became increasingly clear to him that there was no one-size-fits-all, and that recommendations held to be gospel truth (or Torah from Sinai, in our case) were, in fact, counterproductive for some. Why do some runners who eat dates before a run become energized and others exhausted? Why do some do best with carb-loading, and indeed thrive on high-carb diets, while others quickly pack on the pounds and suffer from low energy?
Segal was already involved in the computational study of the human genome at the time and then started reading about the emergent field of study of the microbiome. One thing led to another, a mutual acquaintance put Segal and Elinav in touch with each other, and together they embarked on the collaboration that eventually morphed into the personalized nutrition project.
One factor that facilitated their research was that rapid, reliable, and minimally invasive blood glucose monitoring technology had become relatively inexpensive. And here came some of their first surprises. Anybody who has followed a Gary Taubes-type diet, or who is trying to manage diabetes, is aware of the ‘glycemic index’ (GI) of foods—the increase in blood sugar levels caused by eating a given amount of the food, compared to the same amount of pure glucose (for which GI=100 by definition). But how uniform are these values really?
Segal and Elinav found that the GI for some foods (e.g., bananas) differed very little between their test subjects (say, 60-65), while others (e.g., apples) were all over the place (40-90). Moreover, the variation was not random but correlated with the person.
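To make the notion of glycemic response concrete, here is a minimal sketch of how such a number can be computed from continuous glucose monitor readings, as an incremental area under the glucose curve relative to a pure-glucose reference. The traces, baseline, and 15-minute sampling interval are entirely made up for illustration; none of this is data from the study.

```python
# Illustrative sketch only: the readings and timings below are invented,
# not data from Segal and Elinav's study.

def incremental_auc(readings, baseline, dt_minutes=15):
    """Incremental area under the glucose curve (trapezoidal rule),
    counting only the excursion above the pre-meal baseline."""
    excess = [max(r - baseline, 0.0) for r in readings]
    area = 0.0
    for a, b in zip(excess, excess[1:]):
        area += (a + b) / 2.0 * dt_minutes
    return area

def glycemic_index(food_auc, glucose_auc):
    """Response to a food relative to pure glucose (GI = 100 by definition)."""
    return 100.0 * food_auc / glucose_auc

# Hypothetical two-hour traces, sampled every 15 minutes (mg/dL):
glucose_trace = [90, 130, 160, 150, 130, 110, 100, 95, 90]
apple_trace = [90, 110, 125, 120, 110, 100, 95, 92, 90]

g_auc = incremental_auc(glucose_trace, baseline=90)
a_auc = incremental_auc(apple_trace, baseline=90)
print(round(glycemic_index(a_auc, g_auc)))  # prints 50 for these made-up numbers
```

The person-to-person variation described above corresponds to the same food trace producing very different curves, and hence very different GI values, in different people.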
One would expect glycemic response to go up more or less linearly with the amount of food consumed. They found that this is indeed true for smaller amounts, but at some point saturation sets in as the body manufactures more insulin, and the glucose response levels off. (This, of course, does not mean you can just eat ten times as much: the insulin will cause the excess energy to be stored as fat!)
More surprising, however, was that higher fat content in the meal on average caused a minor decrease in glycemic response. For a nontrivial number of their participants, eating toast with butter or olive oil actually did less glycemic harm than eating the toast on its own.
Now, trying to keep blood sugar levels on a more even keel has two major benefits. In the short term, yo-yoing blood sugar levels sap energy: the body pumps out insulin in response to a sugar spike, blood sugar then dips, and a feeling of exhaustion sets in. As for the long term: Segal and Elinav found across their sample that glycemic response after habitual meals is strongly correlated with BMI. Keeping blood sugar levels on an even keel thus turns out to be a win-win.
And here’s the catch—”thanks” to our microbiome, glycemic response is highly individual. Segal himself ‘spikes’ after eating rice, while Elinav does not. One person spikes after ice cream, while another does not—and the same person who spikes after an evening snack of ice cream can safely have chocolate instead, go figure.
This addresses a seeming paradox. It’s not that diets don’t work—in fact, many do for some people, though long-term compliance can be an issue—it’s that there is no diet that will work for everyone, or even for most people.
So the next step was to have a computer analyze the data for some of the participants in depth, and have it plan out a personalized diet that would keep blood sugar levels as steady as possible for that person. Guess what? Yup, you guessed it: it worked.
Now some people might be discouraged by the idea of carrying around a blood sugar monitor for two weeks and carefully logging every meal (and physical activity). But once a large enough dataset has been established, and correlated to analyses of the gut flora composition in all the test persons, it becomes possible to predict glycemic responses to different foods with reasonable accuracy based on a bacterial population analysis of stool samples. A startup company named DayTwo is offering to do exactly that. [Full disclosure: I have no financial interest in DayTwo or in any of Drs. Segal and Elinav’s ventures.]
We are at the dawn of a major revolution in healthcare—a shift away from a paradigm of statistical averages to one of detailed monitoring of individual patients. Call it ‘personalized medicine’ or any other buzzword: it does seem poised to radically change healthcare and individual health outcomes for the better.
Sandra Aamodt, the former editor of Nature Neuroscience, presents a TED talk where she explains something counterintuitive: not only do most diets fail to achieve permanent weight loss, but in some cases the rebound actually overshoots, and the diet actually causes a weight gain in the long run.
As she describes it: the hypothalamus of the brain acts as a kind of ‘weight thermostat’ (that would be a barostat? :)) that tries to adjust body weight to within about 10-15 lb of a set weight by sending chemical signals that up- or down-regulate appetite, that speed up or slow down metabolism, etc. If weight drops “too” far below the set point, signals to increase food intake are sent out, and if no food intake ensues (because no food is available, or because the person is dieting), then metabolism is slowed down to reduce the basal metabolic rate (i.e., the number of calories your body needs to keep basic functions going at rest). Unfortunately, the “set point” can be ratcheted up but not trivially ratcheted down.
People who think it is all about the pounds (or about the BMI) will find this a depressing message. But this is a classic example of the “reductionist fallacy”: weight or BMI are but one metric of health among many. There are many others that matter, such as percentage muscle mass, blood sugar at rest, blood pressure, cholesterol, blood oxygen levels,… A person who is technically overweight (i.e., BMI between 25 and 30) but eats healthily, exercises at least 3 times a week, does not smoke, and only drinks in moderation actually has a better health prognosis than somebody who has an “ideal” weight (BMI around 20) but smokes and drinks heavily and never does any exercise.
To be sure, she shows that among people who do not have any of these four healthy habits, an obese person (BMI=30 or higher) has seven times the mortality risk of somebody with an ideal BMI=20. However, for those who do observe all four healthy habits, the mortality risks of normal, overweight, and obese patients differ only by statistical uncertainty.
Does that mean that a morbidly obese person who cannot fit in an airplane seat does not need to go on a diet? Of course it doesn’t — that is a straw man, and set points normally don’t go that high unless pushed there by unhealthy habits or regular binge eating.
But somebody who, well, has a naturally zaftig build is probably better off making a fixed habit of exercise and eating ‘smart’ than going on some extreme low-carb diet. (Full disclosure: I do restrict my carbohydrate intake, but not all the way down to “ketogenic”.)
There is an additional factor here: in recent years we have become increasingly aware of the role the microbiome (“gut bacteria”) plays in food absorption, and particularly in sugar absorption. See, for instance, this very recent paper: http://dx.doi.org/10.1016/j.cmet.2017.05.002
ABSTRACT: Bread is consumed daily by billions of people, yet evidence regarding its clinical effects is contradicting. Here, we performed a randomized crossover trial of two 1-week-long dietary interventions comprising consumption of either traditionally made sourdough-leavened whole-grain bread or industrially made white bread. We found no significant differential effects of bread type on multiple clinical parameters. The gut microbiota composition remained person specific throughout this trial and was generally resilient to the intervention. We demonstrate statistically significant interpersonal variability in the glycemic response to different bread types, suggesting that the lack of phenotypic difference between the bread types stems from a person-specific effect. We further show that the type of bread that induces the lower glycemic response in each person can be predicted based solely on microbiome data prior to the intervention. Together, we present marked personalization in both bread metabolism and the gut microbiome, suggesting that understanding dietary effects requires integration of person-specific factors.
We are only beginning to understand how human digestion, food absorption, metabolism, and the microbiome interact. Eventually, genome analysis combined with microbiomics will bring us into the personalized nutrition era.
UPDATE: from the same team, a 2014 paper showing that artificial sweeteners induce glucose intolerance by altering the microbiome. NATURE’s editorial summary in lay language:
We have been using non-caloric artificial sweeteners for more than a century. Today the food industry is using them in ever-greater quantities in ‘diet’ foodstuffs and they are recommended for weight loss and for individuals with glucose intolerance and type 2 diabetes mellitus. Eran Elinav and colleagues show that consumption of the three most commonly used non-caloric artificial sweeteners saccharin, sucralose and aspartame directly induces a propensity for obesity and glucose intolerance in mice. These effects are mediated by changes in the composition and function of the intestinal microbiota; deleterious metabolic effects can be transferred to germ-free mice by faecal transplantation and can be abrogated by antibiotic treatment. The authors demonstrate that artificial sweeteners can induce dysbiosis and glucose intolerance in healthy human subjects, and suggest that it may be necessary to develop new nutritional strategies tailored to the individual and to variations in the gut microbiota.
In honor of the holiday (Christmas if you’re a Western Communion Christian, Isaac Newton Day for everyone else), our Beautiful but Evil Space Mistress has a post up about “living in the light”. She mentions some of the more tasteful and tacky Christmas decorations in her neighborhood, but particularly the abundance of light. (Note that all major winter festivals involve light — be it the pagan Julfest, Christian Christmas, or the Jewish Chanukah/Festival of Lights.)
Our BbESM grew up outside Porto, Portugal, with a single 60W incandescent bulb hanging off the ceiling of her room, plus a 30W lampshade — and even that was a luxury by historical standards. In fact, her editor notes that, adjusted for inflation, a given amount of luminosity has gotten a whopping 500,000 times cheaper in the past few centuries. Just in the past few decades alone, we’ve gone from 60W incandescent to 8 W LED for the same luminosity.
Sarah also notes that she suffered from mild SAD (seasonal affective disorder) and hence appreciated the light. Now actually, while incandescents (with their very “reddish” light — not to mention most of their energy output being infrared, i.e., heat) are probably still better than darkness, they do not help a whole lot with SAD except at very high luminosities. Why?
We actually have three types of photoreceptors: rod cells, cone cells in three colors, and ipRGCs (intrinsically photosensitive retinal ganglion cells). The absorption maxima of rod cells (night vision) and cone cells (daytime color vision) are illustrated below:
(Fish and birds have a fourth “color” of cones in the near-ultraviolet region, with an absorption maximum around 370 nm.)
The ipRGCs’ task, on the other hand, is not vision per se but the regulation of the circadian rhythm. Their pigment, melanopsin, has an absorption maximum around 480 nm, in the bluish region. (Mutations in the gene that expresses melanopsin are one cause of SAD.) SAD is a major issue in arctic countries (it affects close to 10% of the population in Finland, for example). The traditional treatment (review article here) involves full-spectrum lamps at high intensity (10,000 lux and more). However, it was recently found that blue-enriched light sources at more modest luminosities of 750 lux — or even narrow-band blue light at just 100 lux — yield equally good results, as they selectively stimulate the ipRGCs.
Merry Christmas, happy belated Chanukah/Festival of Lights, or happy Isaac Newton Day, as applicable!
The classic Beatles song, “A Hard Day’s Night”, opens with a complex ringing chord that has had songbooks (and musicians) arguing among themselves for decades. Complicating the answer is that even Paul McCartney can’t exactly remember what was done.
Full disclosure: I relate to the Beatles much the way I relate to Mozart: I recognize their musical genius, but much of their most popular music does not ‘move’ me either intellectually or emotionally. But I love a good musical puzzle as much as anybody.
In principle, given modern computer technology, the problem of transcribing a piece of music should be simple: digitize the audio, carry out a Fourier analysis, and convert the resulting frequencies to note names. Right?
Well… Feed in unaccompanied flute and this will work fine. (As anybody who’s owned an analog synth knows, a triangle wave is a pretty decent starting point for a flute sound—and since a triangle wave contains only odd harmonics with a very strong fundamental, you can tell the fundamental apart from the rest of the Fourier spectrum pretty easily.) Feed in a Hammond organ with just a single drawbar open: ditto. Feed in a more complex sound but with restricted harmony (e.g., a violin playing only single notes), no problem. Feed in a complex chord played by multiple instruments on top of each other, and things get hairier. Have some of the multiple instruments not be quite in tune, or let some be in equal temperament and others in just intonation, and things get even worse.
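The naive pipeline for the easy case (digitize, Fourier-transform, map the strongest peak to a note name) fits in a few lines. This is only a toy sketch: the "flute" is a synthesized tone with a weak third harmonic, not a recording, and the note-naming helper assumes standard 12-tone equal temperament with A4 = 440 Hz.

```python
# Toy transcription-by-FFT: synthesize a flute-like tone, find the strongest
# spectral peak, and map it to the nearest equal-tempered note name.
import numpy as np

NOTE_NAMES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]

def freq_to_note(freq, a4=440.0):
    """Nearest equal-tempered note name (12-TET, A4 = 440 Hz)."""
    semitones = int(round(12 * np.log2(freq / a4)))  # distance from A4
    name = NOTE_NAMES[(semitones + 9) % 12]          # A sits at index 9
    octave = 4 + (semitones + 9) // 12
    return f"{name}{octave}"

rate = 44100
t = np.arange(rate) / rate  # one second of samples
# "Flute-like": strong fundamental at 440 Hz plus a weak odd (3rd) harmonic.
signal = np.sin(2 * np.pi * 440 * t) + 0.2 * np.sin(2 * np.pi * 3 * 440 * t)

spectrum = np.abs(np.fft.rfft(signal))
freqs = np.fft.rfftfreq(len(signal), d=1 / rate)
peak = freqs[np.argmax(spectrum)]
print(freq_to_note(peak))  # prints A4: the fundamental dominates
```

For a dense multi-instrument chord, of course, the spectrum has dozens of competing peaks and this single-peak approach falls apart, which is exactly the problem described above.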
An applied mathematician at Dalhousie University did a Fourier analysis on the opening chord some time ago and turned that into a paper. Does this sound like an academic with too much time on his hands, “partially supported by a grant from the Natural Sciences and Engineering Research Council of Canada,” no less? Well, to me it sounds like a good “torture test” for the robustness of a musical transcription code. And when it comes to science popularization, this definitely hits the spot with the musically minded: only yesterday I saw another popular article about the now decade-old analysis linked on Instapundit.
Just retaining all frequencies with relative amplitudes above 0.02 still gave him 48 frequencies, from which he squeezed a solution that looks good in theory but just doesn’t sound “quite right”.
A musical transcription site run by somebody with the delightful pseudonym “Waynus of Uranus” points out a fly in the ointment that people who grew up with digital recording wouldn’t even have thought of. Back in the day, loud bass tones meant pushing against the limitations of vinyl singles and lo-fi audio equipment alike, so the deep end of the bass (about 80 Hz and lower) was routinely rolled off with an equalizer or a highpass filter during mixing or mastering. What this means, for example: if Paul were to strike an open D string on his bass guitar (or an A string at the fifth fret), his fundamental would be below the filter cutoff, and the Fourier spectrum would instead have the second harmonic much stronger — leading to claims like “Paul played a D3 and a soft D2 at the same time”. I know bass players like Geddy Lee of Rush or Steve Harris of Iron Maiden play lots of double-stops, but this really is a progressive rock or metal thing to do, not a pop thing.
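A quick way to see how that mastering-era rolloff fools naive peak-picking: synthesize a bass-like tone on D2 (about 73.4 Hz), then repeat the peak search while ignoring everything below 80 Hz, crudely mimicking the highpass filter. The harmonic amplitudes and the 80 Hz cutoff are illustrative assumptions, not measurements from the record.

```python
# Why a rolled-off fundamental fools naive peak-picking. Amplitudes and the
# 80 Hz cutoff are illustrative assumptions, not measurements from the record.
import numpy as np

rate = 44100
t = np.arange(rate) / rate
d2 = 73.42  # open D string on a bass guitar (D2), in Hz

# Bass-like tone: strong fundamental plus weaker upper harmonics.
signal = (1.0 * np.sin(2 * np.pi * d2 * t)
          + 0.6 * np.sin(2 * np.pi * 2 * d2 * t)
          + 0.3 * np.sin(2 * np.pi * 3 * d2 * t))

def peak_frequency(x, rate, cutoff_hz=0.0):
    """Strongest spectral peak, optionally ignoring bins below a cutoff
    (mimicking the mastering-era highpass filter)."""
    spectrum = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(len(x), d=1 / rate)
    spectrum[freqs < cutoff_hz] = 0.0
    return freqs[np.argmax(spectrum)]

print(round(peak_frequency(signal, rate)))                # 73: correctly D2
print(round(peak_frequency(signal, rate, cutoff_hz=80)))  # 147: now reads as D3
```

With the bass rolled off, the strongest surviving peak sits an octave up, which is precisely how a D2 note can end up transcribed as a D3.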
Applied mathematician Kevin Houston takes it from there and digs further in a very geekish way. While the original record was mono, it turns out there is a stereo mix made for the movie—and in the early days of stereo, it was not unusual for recording engineers to just put some instruments all left and others all right, with the vocal in the center. (This is, pretty much, how I used to jam along with Deep Purple records: Jon Lord’s organ and Ritchie Blackmore’s guitar were usually at opposite ends of the stereo image, so you could single out their parts by listening to one stereo channel at a time.)
In the stereo mix of AHDN, Paul (bass) and George (12-string guitar) are off to one side, and John (acoustic guitar) off to the other, together with producer George Martin on piano. Better still: after subtracting the left channel from the right (i.e., “phase-inverting”), it becomes clear that the acoustic is playing an Fadd9 chord. (That means: an F major chord with an added ninth, a.k.a. a “Steely Dan chord“. It differs from a major ninth chord F9 in that the seventh is omitted.)
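The left-minus-right trick is easy to demonstrate: anything mixed identically into both channels cancels in the difference, leaving only the side-panned parts. The "vocal" and "guitar" below are stand-in sine waves, not actual stems from the recording.

```python
# The left-minus-right trick: anything panned dead-center (identical in both
# channels) cancels in the difference, leaving the side-panned parts.
# The "vocal" and "guitar" are stand-in sine waves, not actual Beatles stems.
import numpy as np

rate = 8000
t = np.arange(rate) / rate
vocal = np.sin(2 * np.pi * 440 * t)   # mixed to the center
guitar = np.sin(2 * np.pi * 330 * t)  # panned hard left

left = vocal + guitar
right = vocal  # no guitar in the right channel

side = left - right  # the difference ("phase-invert one channel and sum")
print(np.allclose(side, guitar))  # prints True: the guitar part is isolated
```

On a real mix the cancellation is never this clean (reverb and bleed differ between channels), but it is enough to make one instrument stand out clearly against the others.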
To cut a very long story short (some mathematicians can get quite verbose ;)), this is the solution (which relies on a good dose of Occam’s razor/the Law of Parsimony as well):
Not only does this not require attributing instrumental acrobatics to the Beatles that are out of character for them, but actually playing those notes on the respective instruments does produce a sound quite like the record. (Listen at 7:17 in the video below.)
Kevin and his collaborators could not readily find an electric 12-string, so they simulated that by layering two six-string electric chords: once fretted 1-0-3-2-1-3, the second time 13-12-15-14-1-2 with an extra hand. (“Fake Nashville Tuning”, if you like.)
If this isn’t the solution, it sounds much closer than anything else I’ve heard. Enjoy the above video!
From a 1909 speech “Le libre examen en matière scientifique” (Free inquiry in matters of science) by the mathematician, physicist, and philosopher of science Henri Poincaré:
Thought must never submit, neither to a dogma, nor to a party, nor to a passion, nor to an interest, nor to a preconceived idea, nor to anything whatsoever but the facts themselves—since for thought, surrendering means ceasing to exist.
[La pensée ne doit jamais se soumettre, ni à un dogme, ni à un parti, ni à une passion, ni à un intérêt, ni à une idée préconçue, ni à quoi que ce soit, si ce n’est aux faits eux-mêmes, parce que pour elle, se soumettre, ce serait cesser d’être.]
Storytelling is woven into human DNA. Even the discovery of DNA’s shape is enrobed in a thrilling tale of deceit and betrayal – with a sexist twist, of course. We tell our stories every single day. Some of us are very clearly aware of the delineations between fact and fantasy, and make our living spinning narratives others enjoy reading for the fun of it. Other people lose the boundaries between fiction and their own desires, and that’s where it starts to get, for lack of a better word, problematic.
I would argue that in order to exist in this world full of contradictions, some people must create an insulating narrative to keep them from confronting the harsh realities that surround them. Without that precious blanket (and you may also envision a thumb firmly inserted for sucking on) they might have to face truths they…
The new Google slogan has been unveiled today (hat tip: Marina F.):
For those who have been living under a rock: Google fired an employee for having the temerity to write a memo [draft archived here][full text here via Mark Perry at AEI] questioning the “diversity” (what I call “fauxversity”) and “affirmative action” (i.e., reverse discrimination) policies of the company. Said employee had earlier filed a labor grievance and is taking legal action. Now quite interestingly, here is an article in which four actual experts discuss the science underlying the memo, and basically find it unexceptional even though they do not all agree with the author on its implications. One of them, an evolutionary psychology professor at U. of New Mexico, has the money quote:
Here, I just want to take a step back from the memo controversy, to highlight a paradox at the heart of the ‘equality and diversity’ dogma that dominates American corporate life. The memo didn’t address this paradox directly, but I think it’s implicit in the author’s critique of Google’s diversity programs. This dogma relies on two core assumptions:
- The human sexes and races have exactly the same minds, with precisely identical distributions of traits, aptitudes, interests, and motivations; therefore, any inequalities of outcome in hiring and promotion must be due to systemic sexism and racism;
- The human sexes and races have such radically different minds, backgrounds, perspectives, and insights, that companies must increase their demographic diversity in order to be competitive; any lack of demographic diversity must be due to short-sighted management that favors groupthink.

The obvious problem is that these two core assumptions are diametrically opposed.

Let me explain. If different groups have minds that are precisely equivalent in every respect, then those minds are functionally interchangeable, and diversity would be irrelevant to corporate competitiveness. For example, take sex differences. The usual rationale for gender diversity in corporate teams is that a balanced, 50/50 sex ratio will keep a team from being dominated by either masculine or feminine styles of thinking, feeling, and communicating. Each sex will counter-balance the other’s quirks. (That makes sense to me, by the way, and is one reason why evolutionary psychologists often value gender diversity in research teams.) But if there are no sex differences in these psychological quirks, counter-balancing would be irrelevant. A 100% female team would function exactly the same as a 50/50 team, which would function the same as a 100% male team. If men are no different from women, then the sex ratio in a team doesn’t matter at any rational business level, and there is no reason to promote gender diversity as a competitive advantage.

Likewise, if the races are no different from each other, then the racial mix of a company can’t rationally matter to the company’s bottom line. The only reasons to value diversity would be at the levels of legal compliance with government regulations, public relations virtue-signalling, and deontological morality – not practical effectiveness. Legal, PR, and moral reasons can be good reasons for companies to do things. But corporate diversity was never justified to shareholders as a way to avoid lawsuits, PR blowback, or moral shame; it was justified as a competitive business necessity.

So, if the sexes and races don’t differ at all, and if psychological interchangeability is true, then there’s no practical business case for diversity.

On the other hand, if demographic diversity gives a company any competitive advantages, it must be because there are important sex differences and race differences in how human minds work and interact. For example, psychological variety must promote better decision-making within teams, projects, and divisions. Yet if minds differ across sexes and races enough to justify diversity as an instrumental business goal, then they must differ enough in some specific skills, interests, and motivations that hiring and promotion will sometimes produce unequal outcomes in some company roles. In other words, if demographic diversity yields any competitive advantages due to psychological differences between groups, then demographic equality of outcome cannot be achieved in all jobs and all levels within a company. At least, not without discriminatory practices such as affirmative action or demographic quotas.

So, psychological interchangeability makes diversity meaningless. But psychological differences make equal outcomes impossible. Equality or diversity. You can’t have both.

Weirdly, the same people who advocate for equality of outcome in every aspect of corporate life, also tend to advocate for diversity in every aspect of corporate life. They don’t even see the fundamentally irreconcilable assumptions behind this ‘equality and diversity’ dogma.
[“Jeb Kinnison” draws my attention to another article.] I just saw in an essay by Christina Hoff Sommers [see also video] on the AEI website that the National Science Foundation [!], as recently as 2007, sent around a questionnaire asking researchers to identify any research equipment in their lab building that was not accessible to women. In 2007. Seriously, I don’t know whether whoever came up with this “go find the crocodile milk” policy was gunning for a Nobel prize in Derpitude
or trying to create sinecures for otherwise unemployable paper-pushers, or trying to divert bureaucratic energy into a Mobius loop that would minimize interference with serious decisions.
But on a more serious note: even before I saw the “paradox” remarks, I could not help being reminded of this passage in George Orwell’s “Nineteen Eighty-Four”. The protagonist, Winston Smith, retorts to his mentor turned inquisitor:
‘But the whole universe is outside us. Look at the stars! Some of them are a million light-years away. They are out of our reach for ever.’
‘What are the stars?’ said O’Brien indifferently. ‘They are bits of fire a few kilometres away. We could reach them if we wanted to. Or we could blot them out. The earth is the centre of the universe. The sun and the stars go round it.’
Winston made another convulsive movement. This time he did not say anything. O’Brien continued as though answering a spoken objection:
‘For certain purposes, of course, that is not true. When we navigate the ocean, or when we predict an eclipse, we often find it convenient to assume that the earth goes round the sun and that the stars are millions upon millions of kilometres away. But what of it? Do you suppose it is beyond us to produce a dual system of astronomy? The stars can be near or distant, according as we need them. Do you suppose our mathematicians are unequal to that? Have you forgotten doublethink?’
Precisely: doublethink. Thus it is possible, for example, that certain biological differences between men and women, or between ethnic groups, can be at the same time out of bounds for polite discussion, yet entirely taken for granted in a medical setting. I remember when Jackie Mason in the early 1990s joked about wanting an [Ashkenazi] Jewish affirmative action quota for runners and basketball players: nowadays, that joke would probably get him fired at Google, while a sports doctor treating top athletes would just chuckle.
The root of evil here is twofold:
(1) the concept that even correct factual information might be harmful as it might encourage heresy [hmm, where have we heard that one before?];
(2) considering people as interchangeable members of collectives, rather than individuals. If one considers the abilities of a specific individual, then for the case at hand it does not matter whether the average aptitudes for X differ significantly between groups A and B, or not. (There is, in any case, much greater variability between individuals within a group than between groups.)
I would add:
(2b) overconfidence in numerical benchmarks by people without a real grasp of what they mean.
Outside the strict PC/AA context, it is the fallacy in (2b) which gives rise to such pathologies as politicians pushing for ever-higher HS graduation or college enrollment rates — because they only see “the percentage has gone up from X to Y” without seeing the underlying reality. They are much like the economic planners in the (thank G-d!) former USSR, who accepted inflated production statistics of foodstuffs and consumer goods at face value, while all those not privileged enough to shop inside the Nomenklatura bubble knew well enough that they were a sham. Likewise, those of us educated in a bygone era realize that the “much greater” HS and college graduation rates of today were achieved by the educational equivalent of puppy milling:
But simplistic numerical benchmarks are beloved of bureaucrats everywhere, as they are excellent excuses for bureaucratic meddling. As Instapundit is fond of remarking: the trouble with true gender- and ethnicity-blind fairness — and with true diversity, which must include the diversity of opinion — is that “there isn’t enough opportunity for graft in it”.
PS: apropos the calling the original author of the essay names that essentially place him outside civil society, a must-read editorial in the Boston Globe by historian Niall Ferguson. His wife, Ayaan Hirsi Ali, knows a thing or two about what real hardcore misogyny looks like, and how useless the Western liberal left is facing it. Moneygraf of the op-ed:
Mark my words, while I can still publish them with impunity: The real tyrants, when they come, will be for diversity (except of opinion) and against hate speech (except their own).
PPS: the Beautiful but Evil Space Mistress weighs in on the controversy and applies some verbal ju-jitsu.
P^3S: heh (via an Instapundit comment thread):
P^4S: Welcome Instapundit readers!
P^5S: Megan McArdle weighs in (via Instapundit) and reminisces about her own early years in tech.
Thinking back to those women I knew in IT, I can’t imagine any of them would have spent a weekend building a [then bleeding-edge tech, Ed.] fiber-channel network in her basement.
I’m not saying such women don’t exist; I know they do. I’m just saying that if they exist in equal numbers to the men, it’s odd that I met so very many men like that, and not even one woman like that, in a job where all the women around me were obviously pretty comfortable with computers. We can’t blame it on residual sexism that prevented women from ever getting into the field; the number of women working with computers has actually gone down over time. And I find it hard to blame it on current sexism. No one told that guy to go home and build a fiber-channel network in his basement; no one told me I couldn’t. It’s just that I would never in a million years have chosen to waste a weekend that way.
The higher you get up the ladder, the more important those preferences become. Anyone of reasonable intelligence can be coached to sit at a help desk and talk users through basic problems. Most smart people can be taught to build a basic workstation and hook it up to a server. But the more complicated the problems get, the more knowledge and skill they require, and the people who acquire that sort of expertise are the ones who are most passionately interested in those sorts of problems. A company like Google, which turns down many more applicants than it hires, is going to select heavily for that sort of passion. If more men have it than women, the workforce will be mostly men.
She explains how she then moved into a field — policy journalism — that is also heavily male, but that she found she could get as passionate about as her former colleagues about [then] bleeding-edge technology. Passionate enough, in fact, that she did it for free for five years (under the blog name “Jane Galt”) until she was hired by a major national magazine on the strength of her portfolio. Passion combined with talent can move mountains—or, if you pardon the metaphor, shatter glass ceilings.
P^6S: in the libertarian magazine Reason, David Harsanyi: By firing the Google memo author, the company confirms his thesis and “The vast majority of the histrionic reactions on social media and elsewhere have misrepresented not only what the memo says but also its purpose.” In the same magazine, Nick Gillespie adds that “The Google memo exposes a libertarian blindspot when it comes to power: it is not just the state that wields power and squelches good-faith debate”.
P^7S: now this is Muggeridge’s Law in action. (Hat tip: Marina F.) I was certain this was satire when I first saw it…