The classic Beatles song, “A Hard Day’s Night”, opens with a complex ringing chord that has had songbooks (and musicians) arguing among themselves for decades. Complicating the answer is that even Paul McCartney can’t exactly remember what was done.
Full disclosure: I relate to the Beatles much the way I relate to Mozart: I recognize their musical genius, but much of their most popular music does not ‘move’ me either intellectually or emotionally. But I love a good musical puzzle as much as anyone.
In principle, given modern computer technology, the problem of transcribing a piece of music should be simple: digitize the audio, carry out a Fourier analysis, and convert the resulting frequencies to note names. Right?
Well… Feed in unaccompanied flute and this will work fine. (As anybody who’s owned an analog synth knows, a triangle wave is a pretty decent starting point for a flute sound — and while a triangle does have some harmonics, the fundamental is very strong and there are only odd harmonics, so you can tell the fundamental apart from the rest pretty easily in the Fourier spectrum.) Feed in a Hammond organ with just a single drawbar open: ditto. Feed in a more complex sound but with restricted harmony (e.g., a violin playing only single notes), no problem. Feed in a complex chord played by multiple instruments on top of each other, and things get hairier. Have some of the multiple instruments not be quite in tune, or let some be in equal temperament and others in just intonation, and things get even worse.
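For that easy case, here is a minimal toy sketch (my own illustration, not code from any actual transcription package): synthesize one second of a triangle-like tone, Fourier-transform it, and map the strongest peak to a note name.

```python
import numpy as np

SR = 8000                      # sample rate (Hz); 1 s of audio gives 1 Hz FFT bins
t = np.arange(SR) / SR
f0 = 440.0                     # play an A4

# Triangle-like tone: odd harmonics only, amplitudes falling off as 1/n^2
sig = sum(np.sin(2 * np.pi * n * f0 * t) / n**2 for n in (1, 3, 5, 7))

spectrum = np.abs(np.fft.rfft(sig))
peak_hz = np.argmax(spectrum) * SR / len(sig)   # strongest bin, converted to Hz

# Map the peak to the nearest equal-tempered note name (A4 = 440 Hz)
NAMES = ['C', 'C#', 'D', 'D#', 'E', 'F', 'F#', 'G', 'G#', 'A', 'A#', 'B']
midi = int(round(69 + 12 * np.log2(peak_hz / 440.0)))
note = NAMES[midi % 12] + str(midi // 12 - 1)
print(note)   # the fundamental dominates, so the peak lands on A4
```

With a single monophonic voice this works; the trouble described below starts when many fundamentals and overtone series pile into the same spectrum.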
An applied mathematician at Dalhousie University did a Fourier analysis on the opening chord some time ago and turned that into a paper. Does this sound like an academic with too much time on his hands, “partially supported by a grant from the Natural Sciences and Engineering Research Council of Canada,” no less? Well, to me it sounds like a good “torture test” for the robustness of a musical transcription code. And when it comes to science popularization, this definitely hits the spot with the musically minded: only yesterday I saw another popular article about the now decade-old analysis being linked on Instapundit.
Just retaining all frequencies with relative amplitudes above 0.02 still gave him 48 frequencies, from which he squeezed a solution that looks good in theory but just doesn’t sound “quite right”.
A musical transcription site run by somebody with the delightful pseudonym “Waynus of Uranus” points out a fly in the ointment that people who grew up with digital recording wouldn’t even have thought of. Back in the day, loud bass tones meant pushing against the limitations of vinyl singles and lo-fi audio equipment alike, so the deep end of the bass (about 80 Hz and lower) was routinely rolled off with an equalizer or a highpass filter during mixing or mastering. What this means, for example: if Paul were to strike an open D string on his bass guitar (or an A string at the fifth fret) his fundamental would be below the filter cutoff, and the Fourier spectrum would instead have the second harmonic much stronger — leading to claims like “Paul played a D3 and a soft D2 at the same time”. I know bass players like Geddy Lee of Rush or Steve Harris of Iron Maiden play lots of double-stops, but this really is a progressive rock or metal thing to do, not a pop thing.
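The arithmetic behind that claim is easy to illustrate (an idealized sketch with a hard cutoff; real mastering filters roll off gradually):

```python
# A bass D2 under a hard 80 Hz highpass (idealized; real filters are gradual)
f_D2 = 440.0 * 2 ** (-31 / 12)          # D2 in equal temperament, 31 semitones below A4
harmonics = [n * f_D2 for n in range(1, 5)]
cutoff = 80.0
surviving = [f for f in harmonics if f >= cutoff]

print(round(harmonics[0], 1))   # 73.4  -> the fundamental, below the cutoff, filtered out
print(round(surviving[0], 1))   # 146.8 -> the 2nd harmonic (D3) now looks like the lowest note
```

Hence a Fourier analysis of the record "sees" a D3 where Paul's fingers played a D2.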
Applied mathematician Kevin Houston takes it from there and digs further in a very geekish way. While the original record was mono, it turns out there is a stereo mix made for the movie—and in the early days of stereo, it was not unusual for recording engineers to just put some instruments all left and others all right, with the vocal in the center. (This is, pretty much, how I used to jam along with Deep Purple records: Jon Lord’s organ and Ritchie Blackmore’s guitar were usually at opposite ends of the stereo image, so you could single out their parts by listening to one stereo channel at a time.)
In the stereo mix of AHDN, Paul (bass) and George (12-string guitar) are off to one side, and John (acoustic guitar) off to the other, together with producer George Martin on piano. Better still: after subtracting the left channel from the right (i.e., “phase-inverting”), it becomes clear that the acoustic is playing an Fadd9 chord. (That means: an F major chord with an added ninth, a.k.a. a “Steely Dan chord“. It differs from a major ninth chord F9 in that the seventh is omitted.)
To cut a very long story short (some mathematicians can get quite verbose ;)), this is the solution (which relies on a good dose of Occam’s razor/the Law of Parsimony as well):
Paul just plays a low D2, but because of EQing off the deep end, the D3 overtone/second harmonic comes through louder than the fundamental, hence the acoustic illusion that the bass note played is D3
John plays F2 A2 F3 A3 C4 G4 (in standard tuning, frets 1-0-3-2-1-3)
George plays the same chord, but on a 12-string in standard tuning—where the bottom four “courses” have the second string one octave higher. Hence aside from the slight tuning discrepancy with John, he adds F4 A4 as new pitches
Finally, George Martin on the piano, with the sustain pedal down, plays D2 G2 D3 G3 C4, which one could call a Gsus4/D chord. Sympathetic resonance from the undamped piano strings adds the wash of low-level extra pitches that befuddles the Fourier analysis.
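Tallying the four parts above (a sketch of mine, in 12-tone equal temperament with A4 = 440 Hz, which is only an approximation to what the actual instruments did), the combined pitch collection works out like this:

```python
def freq(note):
    """12-TET frequency of a note name like 'D2' or 'F#3', with A4 = 440 Hz."""
    semis = {'C': 0, 'C#': 1, 'D': 2, 'D#': 3, 'E': 4, 'F': 5,
             'F#': 6, 'G': 7, 'G#': 8, 'A': 9, 'A#': 10, 'B': 11}
    midi = 12 * (int(note[-1]) + 1) + semis[note[:-1]]
    return 440.0 * 2 ** ((midi - 69) / 12)

# Union of the proposed parts: bass, two guitars, and piano
chord = sorted({'D2', 'F2', 'G2', 'A2', 'D3', 'F3', 'G3', 'A3',
                'C4', 'F4', 'G4', 'A4'}, key=freq)
for n in chord:
    print(f"{n}: {freq(n):7.2f} Hz")
```

A dozen distinct pitches before sympathetic resonance even enters the picture: no wonder a raw Fourier spectrum of the chord is such a thicket.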
Not only does this not require attributing instrumental acrobatics to the Beatles that are out of character for them, but actually playing those notes on the respective instruments does produce a sound quite like the record. (Listen at 7:17 in the video below.)
Kevin and his collaborators could not readily find an electric 12-string, so they simulated that by layering two six-string electric chords: once fretted 1-0-3-2-1-3, the second time 13-12-15-14-1-2 with an extra hand. “Fake Nashville Tuning”, if you like.
If this isn’t the solution, it sounds much closer than anything else I’ve heard. Enjoy the video!
Thought must never submit, neither to a dogma, nor to a party, nor to a passion, nor to an interest, nor to a preconceived idea, nor to anything whatsoever but the facts themselves—since for thought, surrendering means ceasing to exist.
[La pensée ne doit jamais se soumettre, ni à un dogme, ni à un parti, ni à une passion, ni à un intérêt, ni à une idée préconçue, ni à quoi que ce soit, si ce n’est aux faits eux-mêmes, parce que pour elle, se soumettre, ce serait cesser d’être.]
Storytelling is woven into human DNA. Even the discovery of DNA’s shape is enrobed in a thrilling tale of deceit and betrayal – with a sexist twist, of course. We tell our stories every single day. Some of us are very clearly aware of the delineations between fact and fantasy, and make our living spinning narratives others enjoy reading for the fun of it. Other people lose the boundaries between fiction and their own desires, and that’s where it starts to get, for lack of a better word, problematic.
I would argue that in order to exist in this world full of contradictions, some people must create an insulating narrative to keep them from confronting the harsh realities that surround them. Without that precious blanket (and you may also envision a thumb firmly inserted for sucking on) they might have to face truths they…
For those who have been living under a rock: Google fired an employee for having the temerity to write a memo [draft archived here][full text here via Mark Perry at AEI] questioning the “diversity” (what I call “fauxversity”) and “affirmative action” (i.e., reverse discrimination) policies of the company. Said employee had earlier filed a labor grievance and is taking legal action. Now quite interestingly, here is an article in which four actual experts discuss the science underlying the memo, and basically find it unexceptional even though they do not all agree with the author on its implications. One of them, an evolutionary psychology professor at U. of New Mexico, has the money quote:
Here, I just want to take a step back from the memo controversy, to highlight a paradox at the heart of the ‘equality and diversity’ dogma that dominates American corporate life. The memo didn’t address this paradox directly, but I think it’s implicit in the author’s critique of Google’s diversity programs. This dogma relies on two core assumptions:
The human sexes and races have exactly the same minds, with precisely identical distributions of traits, aptitudes, interests, and motivations; therefore, any inequalities of outcome in hiring and promotion must be due to systemic sexism and racism;
The human sexes and races have such radically different minds, backgrounds, perspectives, and insights, that companies must increase their demographic diversity in order to be competitive; any lack of demographic diversity must be due to short-sighted management that favors groupthink.
The obvious problem is that these two core assumptions are diametrically opposed.
Let me explain. If different groups have minds that are precisely equivalent in every respect, then those minds are functionally interchangeable, and diversity would be irrelevant to corporate competitiveness. For example, take sex differences. The usual rationale for gender diversity in corporate teams is that a balanced, 50/50 sex ratio will keep a team from being dominated by either masculine or feminine styles of thinking, feeling, and communicating. Each sex will counter-balance the other’s quirks. (That makes sense to me, by the way, and is one reason why evolutionary psychologists often value gender diversity in research teams.) But if there are no sex differences in these psychological quirks, counter-balancing would be irrelevant. A 100% female team would function exactly the same as a 50/50 team, which would function the same as a 100% male team. If men are no different from women, then the sex ratio in a team doesn’t matter at any rational business level, and there is no reason to promote gender diversity as a competitive advantage.
Likewise, if the races are no different from each other, then the racial mix of a company can’t rationally matter to the company’s bottom line. The only reasons to value diversity would be at the levels of legal compliance with government regulations, public relations virtue-signalling, and deontological morality – not practical effectiveness. Legal, PR, and moral reasons can be good reasons for companies to do things. But corporate diversity was never justified to shareholders as a way to avoid lawsuits, PR blowback, or moral shame; it was justified as a competitive business necessity.
So, if the sexes and races don’t differ at all, and if psychological interchangeability is true, then there’s no practical business case for diversity.
On the other hand, if demographic diversity gives a company any competitive advantages, it must be because there are important sex differences and race differences in how human minds work and interact. For example, psychological variety must promote better decision-making within teams, projects, and divisions. Yet if minds differ across sexes and races enough to justify diversity as an instrumental business goal, then they must differ enough in some specific skills, interests, and motivations that hiring and promotion will sometimes produce unequal outcomes in some company roles. In other words, if demographic diversity yields any competitive advantages due to psychological differences between groups, then demographic equality of outcome cannot be achieved in all jobs and all levels within a company. At least, not without discriminatory practices such as affirmative action or demographic quotas.
So, psychological interchangeability makes diversity meaningless. But psychological differences make equal outcomes impossible. Equality or diversity. You can’t have both.
Weirdly, the same people who advocate for equality of outcome in every aspect of corporate life, also tend to advocate for diversity in every aspect of corporate life. They don’t even see the fundamentally irreconcilable assumptions behind this ‘equality and diversity’ dogma.
[“Jeb Kinnison” draws my attention to another article.] I just saw in an essay by Christina Hoff Sommers [see also video] on the AEI website that the National Science Foundation [!], as recently as 2007, sent around a questionnaire asking researchers to identify any research equipment in their lab building that was not accessible to women. In 2007. Seriously, I don’t know whether whoever came up with this “go find the crocodile milk” policy was gunning for a Nobel prize in Derpitude, or trying to create sinecures for otherwise unemployable paper-pushers, or trying to divert bureaucratic energy into a Möbius loop that would minimize interference with serious decisions.
But on a more serious note: even before I saw the “paradox” remarks, I could not help being reminded of this passage in George Orwell’s “Nineteen Eighty-Four”. The protagonist, Winston Smith, retorts to his mentor turned inquisitor:
‘But the whole universe is outside us. Look at the stars! Some of them are a million light-years away. They are out of our reach for ever.’
‘What are the stars?’ said O’Brien indifferently. ‘They are bits of fire a few kilometres away. We could reach them if we wanted to. Or we could blot them out. The earth is the centre of the universe. The sun and the stars go round it.’
Winston made another convulsive movement. This time he did not say anything. O’Brien continued as though answering a spoken objection:
‘For certain purposes, of course, that is not true. When we navigate the ocean, or when we predict an eclipse, we often find it convenient to assume that the earth goes round the sun and that the stars are millions upon millions of kilometres away. But what of it? Do you suppose it is beyond us to produce a dual system of astronomy? The stars can be near or distant, according as we need them. Do you suppose our mathematicians are unequal to that? Have you forgotten doublethink?’
Precisely: doublethink. Thus it is possible, for example, that certain biological differences between men and women, or between ethnic groups, can be at the same time out of bounds for polite discussion, yet entirely taken for granted in a medical setting. I remember when Jackie Mason in the early 1990s joked about wanting an [Ashkenazi] Jewish affirmative action quota for runners and basketball players: nowadays, that joke would probably get him fired at Google, while a sports doctor treating top athletes would just chuckle.
The root of evil here is twofold:
(1) the concept that even correct factual information might be harmful as it might encourage heresy [hmm, where have we heard that one before?];
(2) considering people as interchangeable members of collectives, rather than individuals. If one considers the abilities of a specific individual, then for the case at hand it does not matter whether the average aptitudes for X differ significantly between groups A and B, or not. (There is, in any case, much greater variability between individuals within a group than between groups.)
I would add:
(2b) overconfidence in numerical benchmarks by people without a real grasp of what they mean.
Outside the strict PC/AA context, it is the fallacy in (2b) which gives rise to such pathologies as politicians pushing for ever-higher HS graduation or college enrollment rates — because they only see “the percentage has gone up from X to Y” without seeing the underlying reality. They are much like the economic planners in the (thank G-d!) former USSR, who accepted inflated production statistics of foodstuffs and consumer goods at face value, while all those not privileged enough to shop inside the Nomenklatura bubble knew well enough that they were a sham. Likewise, those of us educated in a bygone era realize that the “much greater” HS and college graduation rates of today were achieved by the educational equivalent of puppy milling:
the HS curriculum has for most pupils been watered down to meaninglessness;
supposedly “native-born and educated” college students often are deficient in basic arithmetic and reading comprehension;
a general education at the level we used to get at an Atheneum or Gymnasium [i.e., academic-track high schools in Europe] nowadays requires either a college degree or an expensive private prep school.
But simplistic numerical benchmarks are beloved of bureaucrats everywhere, as they are excellent excuses for bureaucratic meddling. As Instapundit is fond of remarking: the trouble with true gender- and ethnicity-blind fairness — and with true diversity, which must include the diversity of opinion — is that “there isn’t enough opportunity for graft in it”.
PS: apropos of calling the original author of the essay names that essentially place him outside civil society, a must-read editorial in the Boston Globe by historian Niall Ferguson. His wife, Ayaan Hirsi Ali, knows a thing or two about what real hardcore misogyny looks like, and how useless the Western liberal left is in facing it. Moneygraf of the op-ed:
Mark my words, while I can still publish them with impunity: The real tyrants, when they come, will be for diversity (except of opinion) and against hate speech (except their own).
Thinking back to those women I knew in IT, I can’t imagine any of them would have spent a weekend building a [then bleeding-edge tech, Ed.] fiber-channel network in her basement.
I’m not saying such women don’t exist; I know they do. I’m just saying that if they exist in equal numbers to the men, it’s odd that I met so very many men like that, and not even one woman like that, in a job where all the women around me were obviously pretty comfortable with computers. We can’t blame it on residual sexism that prevented women from ever getting into the field; the number of women working with computers has actually gone down over time. And I find it hard to blame it on current sexism. No one told that guy to go home and build a fiber-channel network in his basement; no one told me I couldn’t. It’s just that I would never in a million years have chosen to waste a weekend that way.
The higher you get up the ladder, the more important those preferences become. Anyone of reasonable intelligence can be coached to sit at a help desk and talk users through basic problems. Most smart people can be taught to build a basic workstation and hook it up to a server. But the more complicated the problems get, the more knowledge and skill they require, and the people who acquire that sort of expertise are the ones who are most passionately interested in those sorts of problems. A company like Google, which turns down many more applicants than it hires, is going to select heavily for that sort of passion. If more men have it than women, the workforce will be mostly men.
She explains how she then moved into a field — policy journalism — that is also heavily male, but that she found she could get as passionate about as her former colleagues were about [then] bleeding-edge technology. Passionate enough, in fact, that she did it for free for five years (under the blog name “Jane Galt”) until she was hired by a major national magazine on the strength of her portfolio. Passion combined with talent can move mountains—or, if you pardon the metaphor, shatter glass ceilings.
Yesterday I stumbled onto another of these “A=432 Hz” advocacy pages: it got me thinking that “how did we get to A=440 Hz?” would be a good subject for a post. So here goes. TL;DR summary: there is neither conspiracy nor deep ‘harmony with the cosmos’: the standard came about for purely pragmatic reasons. Let me explain.
In antiquity and the Middle Ages, there were no absolute pitch standards. Sure, theoretical math about the construction of scales goes back all the way to the School of Pythagoras, but that concerns itself with relative pitch (intervals), not with an absolute reference pitch. Whichever fixed-pitch instrument was part of the ensemble would have dictated the reference pitch for the others — and since this was long before the era of mass manufacturing, those were all one-of-a-kind instruments.
The German composer and theoretician Michael Praetorius (1571-1621) did mention that in his day, there was “chamber tuning” (Kammerton) and there was “choir tuning” (Chorton, which followed the church organ), and that those were a whole tone apart. So historical organs would give us a clue as to historical pitch, right?
Well… it was indeed so that the pipes for the lowest note of an organ “principal” stop were by convention made eight foot long. (Hence the practice of labeling organ stops, or later drawbars on a Hammond organ, by “foot”: 16’ will sound one octave below, 4’ one octave above, 5 1/3’ a fifth above,… the notes played on the keyboard.) So that would seem to impose standardization at least for church music, right?
Not quite. Whose foot are we talking about? Each principality in those days had its own set of customary units. We do know that German Baroque organs that have been preserved are almost invariably sharp of modern concert pitch, typically by about a semitone, but sometimes by as much as a whole tone. An even-tempered semitone, 2^(1/12)≈1.05946, up from A=440 Hz would be about 466 Hz; two semitones, about A=494 Hz. A whole tone down from A=466 Hz would imply a chamber pitch somewhere around A=415 Hz, as favored today by many ‘historically informed performance’ ensembles — but this would not have been universal, and actual concert pitch may have been higher.
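Those semitone figures are quick to check:

```python
A440 = 440.0
semitone = 2 ** (1 / 12)             # one equal-tempered semitone, ≈ 1.05946

print(round(A440 * semitone))        # 466 -> one semitone sharp of A=440
print(round(A440 * semitone ** 2))   # 494 -> two semitones sharp
print(round(A440 / semitone, 1))     # 415.3 -> one semitone flat: today's "Baroque pitch"
```

Note that a whole tone down from 466 Hz and a semitone down from 440 Hz land on the same 415.3 Hz, which is part of that number's practical appeal.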
The first tuning fork wasn’t invented until 1711, by an English court musician (trumpeter) named John Shore. Tuning forks (or “pitchforks”, as Shore punningly called them) are small and portable, drift very little with temperature and over time, and yield a nearly pure sinusoidal sound (i.e. devoid of overtones).
One of Shore’s London customers was the great expat German composer Handel. Handel’s tuning fork has actually been preserved, and sounds at A=422.5 Hz. A number of other historical tuning forks have been preserved as well, e.g., those used by fortepiano (and later piano) manufacturers for initial setup and tuning. The record shows that pitch kept drifting up and up, as orchestras kept pursuing an ever brighter sound. (This is not mere psycho-suggestion, particularly for the string section: tuning string instruments higher means increasing the tension on the strings, leading to more overtones in the sound.) Two other developments took place in parallel: the opera genre became a mainstay of classical music throughout Europe, and as long-distance travel became more practical and affordable thanks to the Industrial Revolution, star opera singers would travel widely.
What this also meant, however, is that an opera diva could be traveling to a new city, and suddenly would be unable to hit the highest notes as the orchestra was tuning higher. The resulting protests led to a pushback against “pitch inflation”, and hence to efforts to arrive at a standard.
[[[sidebar: scientific tuning, a.k.a. Sauveur pitch, philosophical tuning, Verdi tuning.
The French courtier and physicist Joseph Sauveur, who first coined the term “acoustics” for that subfield of physics, in 1713 proposed an absolute pitch standard based on the frequencies of all C’s being powers of two: middle C=256 Hz, C’=512 Hz, and so forth. In Pythagorean tuning, that implies A=256 x (27/16) = 432 Hz. [In 5-limit just intonation, that would be A=256 x (5/3) = 426.666… Hz; in 12-tone equal temperament, A=256 x 2^(9/12) = 430.54 Hz.] This was considerably sharp of French Baroque practice and was hence not adopted by performers. A century and a half later, the composer Giuseppe Verdi tried to revive this proposal, at a time when orchestras routinely tuned way sharp of it. In recent years, various mystical and numerological ideas have been attached to Sauveur pitch, which has led to some (usually nonclassical) musicians adopting it.]]]
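The three values of A quoted in the sidebar follow directly from C=256 Hz:

```python
C4 = 256.0   # Sauveur's "philosophical" middle C: a power of two

a_pythagorean = C4 * 27 / 16          # major sixth up, Pythagorean tuning
a_just        = C4 * 5 / 3            # major sixth up, 5-limit just intonation
a_equal       = C4 * 2 ** (9 / 12)    # nine equal-tempered semitones up

print(a_pythagorean)                  # 432.0
print(round(a_just, 2))               # 426.67
print(round(a_equal, 2))              # 430.54
```

So the "mystical" 432 Hz is simply what a power-of-two C implies under one particular historical tuning system, and not even the one modern instruments use.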
The trailblazer for standardization of measurement units in Europe was, of course, France, with its 1799 adoption of the metric system, which eventually became the standard for nearly the entire industrialized world as well as of the worldwide scientific community. (In 1875, seventeen countries would sign a metric system convention, which led to the creation of the International Bureau of Weights and Measures outside Paris.) In the same vein, the French government issued a ministerial decree in 1859 that mandated a “diapason normal” (standard tuning fork) throughout France at A=435 Hz: this compromise value had been recommended by an ad hoc commission advised by the likes of the composers Halévy, Meyerbeer, Auber, Ambroise Thomas and Rossini. A number of continental European countries adopted the French standard.
In Britain, on the other hand, attempts had been made to standardize on an A=452 Hz concert pitch (almost a quarter-tone sharp of modern concert pitch). Protests by singers led to the adoption of a modified French standard: concert hall organs were tuned to A=435 at about 15 degrees centigrade. Assuming the pitch of the air column in the organ pipes rises by about 0.1% per degree Fahrenheit (actually closer to 0.067%), it was argued that pitch would drift up to about 439 Hz in a heated concert hall, and hence in 1896, A=439 Hz was adopted by the Royal Philharmonic as the “new philharmonic pitch”. The older standard was then referred to as “old pitch” or “high pitch”.
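One can check what hall temperature that argument implies (using the 0.1%-per-degree figure from the argument itself, not the more accurate value):

```python
import math

f_cold = 435.0        # A at 15 degrees C (59 degrees F)
drift_per_F = 0.001   # the 0.1%-per-degree-Fahrenheit figure from the argument

# How many degrees Fahrenheit of warming take A=435 up to A=439?
degrees = math.log(439.0 / 435.0) / math.log(1 + drift_per_F)
print(round(degrees, 1))                        # about 9 degrees F, i.e. a hall near 68 F (20 C)
print(round(f_cold * (1 + drift_per_F) ** 9))   # 439
```

A heated Victorian concert hall at about 20 degrees centigrade is a plausible enough assumption, so the 435-to-439 reasoning hangs together on its own terms.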
Recording and broadcast technology gave a new impetus to international standardization. On June 11, 1925, the US recording industry adopted A=440 as a standard, and eventually, this revived “Stuttgart value” was agreed upon at the 1939 London meeting of the International Standards Association (the predecessor of today’s ISO). What tipped the scales for 440 Hz rather than 439 Hz was again a practical argument: the BBC’s engineers could generate a stable, invariant A=440 Hz from a 1 MHz quartz crystal oscillator through a combination of frequency division and multiplication circuits (divide by 1000, then by 25, then multiply by 11). This was an impractical approach for 439, which is a prime number. Eventually, A=440±0.5 Hz would be enshrined as ISO standard 16.
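The divider chain is easy to verify (the primality check is just my illustration of why 439 Hz was awkward):

```python
# The BBC's chain: 1 MHz crystal, divide by 1000, divide by 25, multiply by 11
f = 1_000_000 / 1000 / 25 * 11
print(f)               # 440.0

def is_prime(n):
    """Trial division; fine for small n."""
    return n > 1 and all(n % d for d in range(2, int(n ** 0.5) + 1))

# 439 has no small factors at all, so no comparable divide-and-multiply
# chain from a convenient crystal frequency lands on it
print(is_prime(439))   # True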
Many symphony orchestras actually tune slightly higher, A=442 or 443 Hz: a list of reference pitches for orchestras worldwide can be found here (in German). The Berlin Philharmonic, in the halcyon days of Karajan, actually tuned to A=445, reverting to the more common 443 Hz under later conductors.
In the “historically informed performance” community, A=415 Hz is commonly used for Baroque music. Why that specific number? Again, a practical compromise: more or less in the ballpark of what was (German) Baroque practice, and exactly an even-tempered semitone down from A=440 Hz. This means that modern fixed-pitch instruments can still perform in an ensemble with period instruments: all that is required is transposing the former’s part down by a half-step.
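The transposition trick amounts to shifting every note name down one semitone; a throwaway sketch:

```python
SEMIS = ['C', 'C#', 'D', 'D#', 'E', 'F', 'F#', 'G', 'G#', 'A', 'A#', 'B']

def down_semitone(note):
    """Transpose a note name like 'A4' down one equal-tempered semitone."""
    i = SEMIS.index(note[:-1])
    octave = int(note[-1])
    if i == 0:                      # C wraps around to B of the octave below
        return 'B' + str(octave - 1)
    return SEMIS[i - 1] + str(octave)

# A modern (A=440) instrument reading its part a half-step down
# sounds at A=415 "Baroque pitch" alongside the period instruments
print(down_semitone('A4'))   # G#4
print(down_semitone('C4'))   # B3
```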
Academics, writers, musicians, video producers, publishers, and other creative professionals have all heard of the “fair use doctrine”, that under certain circumstances allows us to quote copyrighted text, images, or sounds. But what is it, in plain English?
First of all, a disclaimer: I am not a legal professional — I have merely acquired a working knowledge of the concept through my day job — and nothing I write here should be taken as legal advice.
Second, while the concept existed in common law for a long time before that, “fair use” was officially enshrined as statutory law in 1976 as 17 U.S.C. § 107:
Notwithstanding the provisions of sections 17 U.S.C.§ 106 and 17 U.S.C.§ 106A, the fair use of a copyrighted work, including such use by reproduction in copies or phonorecords or by any other means specified by that section, for purposes such as criticism, comment, news reporting, teaching (including multiple copies for classroom use), scholarship, or research, is not an infringement of copyright. In determining whether the use made of a work in any particular case is a fair use the factors to be considered shall include:
the purpose and character of the use, including whether such use is of a commercial nature or is for nonprofit educational purposes;
the nature of the copyrighted work;
the amount and substantiality of the portion used in relation to the copyrighted work as a whole; and
the effect of the use upon the potential market for or value of the copyrighted work.
The fact that a work is unpublished shall not itself bar a finding of fair use if such finding is made upon consideration of all the above factors.
[Emphasis mine in the above.]
This four-part test was actually first formulated by US Supreme Court Justice Joseph Story (1779-1845; “what’s in a name?”) in the ruling on Folsom v. Marsh, a case Story decided while riding circuit in Massachusetts: it concerned somebody who had published his own two-volume condensation of a 12-volume biography of George Washington. The court decided that yes, such derivative works were within the rights of the original copyright owners (the plaintiff), and that Marsh, the defendant, had violated their copyright.
In plain English: Justice Story argued that by publishing an “all the good bits” abridgment, Marsh had greatly reduced the sales potential of the long original.
Let me illustrate the four-part test with a few concrete examples:
(1) Quoting another scholarly author’s argument in a scholarly work, with proper source attribution, and clearly marking it as a quotation rather than one’s own words: fair use, and common established practice.
typically, the works being quoted are noncommercial to begin with
typically, the quote is a very small percentage of the full paper or book
the “market value” of a scholarly paper is measured in citations, and your action will generally only increase those
(2) Quoting a phrase, quip, aphorism,… of another author in one’s novel, clearly marking it as a quote? Accepted practice. Typically, the quote is 0.00x % of the whole book, and this is actually a way of bringing tribute to the writer being name-checked.
(3) Quoting a few lines from a poem or song lyric in your fiction book? Aha, now this is another matter — because even a few lines constitute a nontrivial percentage of the original work, so this would fail the amount and substantiality test.
My editor taught me a serviceable workaround: paraphrasing the lyrics in my own words. It is the words that are subject to copyright, not the ideas conveyed in them. [Cf. the idea-expression divide in intellectual property law.]
And if the poem in question is in the public domain (e.g., Shakespeare, Tennyson,… or generally anything published before 1923) then of course no issue arises.
(3b) Quoting a picture or graph from one work in a scholarly work of yours? Well… as “masgramondou” put it, “one picture is worth a thousand words — assume the same holds for copyright purposes”. The way I think of it: the image is a complete unit unto itself, and it would be more akin to quoting an entire chapter or section of a written work (or an entire verse or chorus of a song lyric).
And so, where for a textual quotation source referencing would be adequate, one would have to apply for copyright permission to the original copyright holder. In practice this is less of an imposition than it seems: most scholarly publishers have a (semi)automated mechanism in place where one can apply online with full personal details, details of the work being quoted, details of the work it’s being quoted in (review articles are actually the most common scenario), and commercial or noncommercial character of the derived work. (The last time I needed such permission for a paper in my day job, it cost me $0 and all of two minutes.) The licensed picture is then almost invariably accompanied by a statement along the lines of “From H. Slowcoach and L. Tortuga, Journal of Chelonian Reproduction 12, 345 (1967). Copyright American Association for Herpetology. Reprinted with permission.”
(3c) Recycling somebody else’s artwork in your own commercial fiction book? Unless it’s in the public domain or you licensed it from the copyright holder, you are setting yourself up for litigation, or at the very least “cease and desist” letters.
Stock photos are another matter. They are “works made for hire” from a copyright point of view: once you have bought them, they are yours to use. Their use on book covers can entail other issues — such as when two authors end up using the same stock photo — that one may wish to avoid, but those are in a different realm than copyright infringement.
Coming back to scholarly nonfiction for a moment: The scientific world in recent years has seen the emergence of licensable image libraries (e.g., Springer Images). Particularly in the life sciences, where diagrams and elaborate artistic renderings are more common than straight plots or data visualization, such image libraries have their place, and can save money compared to hiring a skilled visual artist with the appropriate background.
(4) What about music?
• In a music-centric novel, describing musical compositions in great detail — short of actually including transcribed scores — is apparently fine.
• Using the audio of a well-known popular song for an audiobook or a book trailer in practice means licensing. It can get tiresome enough that people might instead hire a musician to compose something “in the style of [insert popular song]” and use that instead.
• Most classical compositions are in the public domain, but specific audio recordings (e.g., for use in an audiobook or book trailer) need to be licensed. As part of the “open culture” movement, there are artists who make their own recordings of classical pieces available under Creative Commons licenses: these may be a good alternative. Otherwise, you know what? Go to your local conservatory and offer to pay somebody to record the track for you.
(5) Reproductions of visual works of art
• What if I, say, wanted to use a digital image of a Renoir painting as a book cover? (Assume it’s a “literary fiction” book, since that’s what cover designers tell me such use would signal.) The copyright here applies to the photograph, strange as this might seem. Museums that allow downloading of digital images of their collection typically stipulate that such images are “for personal use only”. In some cases, if photography in the museum is permitted, one can legally visit the museum in person, take a picture (usually without a flash) and use that.
Speaking of book covers: who “owns” the copyright to a book cover? Covers are generally produced as “works for hire” by a cover designer, and whoever commissioned the work and paid for it owns the rights (typically the publishing house, or the author if it is an indie publication). Recycling such a cover as artwork for somebody else’s commercial publication project, without licensing or permission, constitutes copyright infringement.
Very recently and importantly: Concerning the special case of “thumbnails” showing up in searches or use in product links, the Ninth Circuit Court of Appeals has ruled in Perfect 10 v. Amazon that these are a highly “transformative” use and that they are to be considered fair use. The ruling gave much weight to “the public interest” [in search engines etc.]. It also held that hyperlinking to such images does not constitute “secondary copyright violation”.
(6) What about parody?
Parody (if clearly recognizable as such) is an affirmative defense: a landmark court case on the matter is Campbell v. Acuff-Rose Music, a.k.a. the “Pretty Woman” case. It involved the rappers 2 Live Crew, fronted by Luther Campbell (stage name “Luke Skywalker”), who had recorded their own “version” of Roy Orbison’s classic song: they kept only the iconic bass riff (which I presume they programmed into a Roland TB-303) and chanted (I would not dignify their performance with the term ‘singing’) their own lyrics over it — lyrics which focused on such features of the woman as her derrière, hair in certain places, promiscuity… you get the drift. They had in fact approached the copyright owners (Acuff-Rose) about licensing the song for a parody, been told to take a hike, then recorded their own version anyway. The court found in their favor, ruling that parody was a “transformative” use [in the legal sense of the word] rather than a merely “derivative” one.
So for example, if I were to release an album and issue an ad with a picture of my own album cover, plus one of “St. Anger” by Metallica as “this is not what you will get”, I would be over the line — a picture is a complete unit, and promotional material is clearly not scholarship or commentary. If instead I drew a parody cover of a fictional album “St. Anal” by “Banalica”, I would probably be safe — but even then I might get legal advice first to be on the safe side.
To sum up:
• There is a simple four-factor test for “fair use”.
• In general, scholarly use is treated much more permissively than commercial use.
• Anything quoted should be a trivially small percentage of the whole work, and in particular should not be a self-contained unit of it.
• The use should not detract from the commercial revenue potential of the original.
• There are commonly accepted usages; there are abuses that are manifestly illegal; and there is a gray zone in between, where one might wish to get legal counsel, or at the very least err on the safe side.
Media companies tend to be very aggressive (often to the point of seeming absurdity) in asserting their rights, even against noncommercial use: a recent development here has been Lenz v. Universal Music (a.k.a. the “dancing baby” case). The plaintiff, Stephanie Lenz, had posted a YouTube video (less than 30 seconds long) of her baby dancing to the Prince tune “Let’s Go Crazy”. Universal Music sent a takedown notice under the DMCA; in response, Ms. Lenz sued Universal, and the case eventually reached the Ninth Circuit Court of Appeals, which held that
[copyright holders have a] “duty to consider — in good faith and prior to sending a takedown notification—whether allegedly infringing material constitutes fair use”.
This almost creates the legal situation that exists in Israel — where “shimush hogen” (fair use) is legally a right rather than an affirmative defense, and one can actually sue a company for not permitting fair use. However, to be clear: this does not mean that use which is obviously not fair in the legal sense of the word has now magically become so.
By way of dessert, here is a musical example of a “transformative work” I rather like. The original was a Rob Dougan track called “Clubbed to Death”, used in The Matrix soundtrack. Ironically, it itself opens with a sample from an orchestral performance of Elgar’s Enigma Variations. I remember thinking “boring, but could be a good track to jam over” when I first heard it — then a guitarist named Tom Shapira recorded this amazing improvisation over it. Enjoy!
Judith Curry, the Georgia Tech climatology professor vilified by her peers for trying to have a meaningful dialogue with CAGW skeptics, is taking early retirement from academia to focus on a startup company dealing with long-term climate forecasting. http://www.cfanclimate.net/
“The reward system that is in place for university faculty members is becoming increasingly counterproductive to actually educating students to be able to think and cope in the real world, and in expanding the frontiers of knowledge in a meaningful way[…]”
It is always sad to see the departure of any academic who is truly committed to the spirit of free inquiry. Here’s wishing her the very best in her new venture; I hope to hear more from her!