Federico Fellini is unquestionably one of the greatest of all filmmakers. He is on everyone’s list. He won five Oscars and was nominated 17 times. Heck, he was nominated twice before he even made his first film (as screenwriter for two Rossellini films). He made two of the movies on my own 10-Best list.

His 1954 film, La Strada, changed my idea of what movies could be. When I was growing up, the movies I saw, mostly on TV, were filled with car chases and gunfights. Movies were an entertainment. But, in my college film series, I saw La Strada and realized, for the first time, that films could be about real things, and that they could leave me weeping. The final scene with the brutish Zampano (Anthony Quinn) on the beach, wailing for what he knew he had lost, left me drained.

And La Strada isn’t even one of the two Fellini films on my 10-Best. 

Few film directors have had words added to the dictionary defining their style, but we all know what “Fellini-esque” means: an almost surreal grotesquerie tied to a very personal sense of human psychology. There are other great filmmakers, but there is no “Scorsese-esque,” no “Renoir-esque,” although those directors, too, have personal styles. Only two directors have joined the dictionary, with “Fellini-esque” and “Bergmanesque.” The two couldn’t be more different, but their styles are each identifiable, even when another filmmaker borrows them.

So, Fellini’s is a distinct and individual voice. Yet, the arc of his career is also distinct, and not toward ever greater or more profound films. It is a career with an upward start, a middle as high as it gets, and then a slackening decline. What is interesting is that the cause of both the rise and the fall is the same: Fellini being Fellini.

Federico Domenico Marcello Fellini was born in 1920, two years before Mussolini’s rise to power, in the Adriatic city of Rimini. His father was a salesman who hoped his son would become a lawyer. And although Fellini enrolled in law school, his biographers say that “there is no record of his ever having attended a class.” Instead, he dropped out to become a cartoonist and caricaturist, and he wrote for several satirical magazines.

He expanded to writing gags for radio shows and managed to avoid the Italian draft during the early years of World War II. He also met his wife and muse, radio actor Giulietta Masina (they remained married from 1943 to his death in 1993). 

His work in radio brought him to the attention of Neo-Realist film pioneer Roberto Rossellini, who hired him to work on the script of Rome, Open City (1945) and later, Paisan (1946). Both efforts won him Academy Award nominations. 

In 1950, he got to direct his first film, Variety Lights, followed in 1952 by The White Sheik, two low-budget comedies of middling success and reputation. But then, in 1953, he got to make the first genuine Fellini movie, I Vitelloni (“The Young Bulls” or, idiomatically, “The Layabouts”), an autobiographical Neo-Realist film about his own teen years in Rimini, which won him a fourth Oscar nomination for screenplay. (The film wasn’t released in America until after the success of La Strada and Nights of Cabiria — both Oscar winners for Best Foreign Language Film — hence the later nomination.)

These three early masterpieces — I Vitelloni, La Strada and Nights of Cabiria — all have their roots in Italian Neo-Realism, although with Fellini’s particular stamp of both humor and grotesquerie. There is no confusing them with films by Rossellini, De Sica or Visconti. While each of these first great films concerns itself with social conditions, poverty and postwar problems, they are really more interested in individuals. Fellini was never overtly political.

Fellini had, by 1957, been nominated for six Academy Awards and won two. But none of that could have foretold what came next. Arguably his greatest film, La Dolce Vita, was also his greatest box office success.

The great 1960 Italian classic of the Roman “sweet life” in the postwar years shows us nine days and eight nights in the life of tabloid celebrity journalist Marcello Rubini (Marcello Mastroianni) as he negotiates personal relationships, professional crises and spiritual doldrums.

“Rarely, if ever, has a picture reflected decadence, immorality and sophistication with such depth,” Box Office magazine said when the film was released. Rather than a plot, the film is a collection of episodes as our hero recognizes the emptiness of his life, decides to do something about it, and ultimately, cannot. The final scene, with Mastroianni on the beach shrugging at the girl across the way as a sign of giving up, is one of the most heartbreaking moments ever shot on film.

Fellini structured the film in a series of climactic nights each followed by a dissolving dawn. In each of the nighttime episodes, Marcello faces one of his demons — although he doesn’t recognize them as such. Each night rises to a crux, a point that might waken Marcello to the aimlessness of his life, and at each sunrise, there comes not a culmination, but a dissipation of the situation — all its air is let out.

La Dolce Vita occupies a pivotal point in the career of Fellini, between the early Neo-Realist films, such as I Vitelloni and La Strada, and his later, sometimes visionary films. In La Dolce Vita, there is a balance between the sense of external reality — Italy’s boom economy in the decade after World War II, and its forgotten underclass — and the purely subjective sense of individual psychological crisis. 

In his next film, the crisis becomes personal: Otto e Mezzo, or 8½, is about a filmmaker who can’t figure out what to do next. It begins with one of Fellini’s most visionary scenes: The filmmaker (again played by Mastroianni) is stuck in his car and imagines being trapped, then floating away above it, held only by a kite-string attached to his ankle. As an opening scene, it would be hard to match, let alone beat. Through the rest of the film, he attempts to avoid his responsibilities: to his producer, to his wife, to his mistress, to his crew, to his financial backers, to his fans. He imagines committing suicide, and in the end, in one of the most enigmatic and memorable scenes ever, joins a dance to the circus music of Nino Rota. As a concluding scene, it would be hard to match, let alone beat.

What does that scene mean? We all have our own solutions. I tend to see it as the same message that Krishna gave to Arjuna in the Bhagavad-Gita, that the end or the meaning isn’t the point. The doing is. Joining in life is the point of life. Or as writer Joseph Campbell once phrased it, “the joyful participation in the sorrows of the world.” 

Whatever you decide about the ending, it is clear that these two films, together, are among the highest points of film art: at once clever, funny, moving, heartbreaking, hugely cinematic and visual, and ultimately wise.

There are grotesque scenes in La Dolce Vita and Otto e Mezzo, but they are just part of the mix. In some of his later films, such as Roma or Fellini Satyricon, the grotesque predominates. But at the midpoint of his career, in his two best films, he balances the real and the freakish like a saint balancing heaven and hell.

Then, Fellini discovered Carl Jung, read the psychiatrist’s autobiography, Memories, Dreams, Reflections, began visiting a psychoanalyst, experimented with LSD, and became fascinated with dreams, archetypes, spirits and the unconscious. He famously defined a movie as “a dream we dreamt with our eyes open.” Jung is a dangerous thing in the hands of an artist with no governor on his engine.

Fellini made Juliet of the Spirits in 1965, about a repressed housewife (Masina) entering a world of debauchery, visions, memories and mysticism to find herself. It was Fellini’s first full-length color film, and it uses what one critic called “caricatural types and dream situations to represent a psychic landscape.”

As critic Roger Ebert wrote, “The movie is generally considered to mark the beginning of Fellini’s decline.” 

And three of Fellini’s next four major films are given over to grotesquerie, hallucination and oneiric excess: Fellini Satyricon (1969), Fellini’s Roma (1972), and Fellini’s Casanova (1976). The fact that the director’s name is attached to all three titles should tell you something. There is nothing historical or documentary about them: They are exudations of the filmmaker’s fevered brain.

Satyricon is the best of the three films, and actually captures rather accurately the spirit of Petronius’ First Century tale of Nero’s Rome. Although Fellini invented most of the episodes, they match the tone of the picaresque original pretty well.

Satyricon, Roma and Casanova all prominently feature parades of caricatural grotesques, people buried under exaggerated makeup and hairdos, rather like some of the more peculiar drawings of Leonardo da Vinci.

Even if they don’t succeed as whole works of art, each is stuffed like a cannolo with brilliant imagery and unforgettable moments. It is as if he were more concerned with the moment-by-moment than with story coherence — the way a dream moves. “Don’t tell me what I’m doing,” he said. “I don’t want to know.”

If La Dolce Vita and 8½ were satires on modern mores, the later films pass beyond satire into a rather personal misanthropy dredged up from his unconscious.

There was one very bright and beautiful exception, though, a final grace note to his career — the 1973 film Amarcord, a comic, forgiving and joyful reminiscence of Fellini’s childhood in Rimini. In 1953, his I Vitelloni explained why the young Fellini desperately wanted to escape his provincial hometown; twenty years later, he felt the need to show what he had lost by leaving. Everything he was bitterly satirical about in his earlier films becomes the very human quality of his dramatis personae in Amarcord. It is a gentle, affectionate, humane account of human folly, and the easiest of all of Fellini’s films to love.

He made a handful of films after that, but none caught fire. There was Ginger and Fred (1986) and, more dubiously, City of Women (1980), in which Fellini, in the person of his frequent alter ego Marcello Mastroianni, attempts to deal with his fear of, and lack of understanding of, women.

Fellini made his last film, The Voice of the Moon, in 1990, and died of a heart attack in 1993, a day after his 50th wedding anniversary, and just a few months after receiving his Oscar for lifetime achievement. 

As is so often the case, Fellini’s best and worst were manifestations of the same thing — his ability and his need to put himself into his films. As he once said, “Even if I set out to make a film about a fillet of sole, it would be about me.” It gave him the secret of breaking out of the Neo-Realist mold and finding his own way, but it also let him wander off into a sometimes almost solipsistic dream world of images and obsessions. When focused, as in La Dolce Vita and 8½, he was one of the three or four greatest filmmakers of all, and even when he was noodling in fevered Fellini-Land, he still provided indelible visions and emotions. There was no one like Federico Fellini.

_____________________________

We are not in control of our memories. 

One doesn’t own one’s memories. 

One is owned by them.

—Federico Fellini

______________________________

I used to have long discussions with friend and colleague Sal Caputo, who was pop music critic for the newspaper I worked for. Sal — or Salvatore — was as Italian in ancestry as I was Norwegian. And it played out in our conversations. Sal was always intense and expressive, and sometimes prone to anger and moods. He took things personally when I didn’t — I always remembered the line from Renoir’s film, Rules of the Game: “The terrible thing about life is that everyone has their reasons.” I.e., it isn’t personal. 

Searching the Internet for Sal, I could find only a couple of mug shots.

Anyway, we once had a talk about movies and our varying takes on Federico Fellini and Ingmar Bergman. It wasn’t about which was the better filmmaker, but about how we internalized the films. We both appreciated both directors. But there was a difference.

The difference was almost comic. Consider the ways each director portrayed clowns. In 8½, they play Nino Rota’s music and point the way to salvation for our lost Marcello; in Bergman’s Sawdust and Tinsel, well, you get the picture.

I loved Fellini’s films and could appreciate both the filmmaking craft that went into them and the humanistic concerns of the director. Fellini rates very high on my list of movie directors. Top three or four. But somehow, I always feel as if I’m watching him from the outside. In contrast, when I see a Bergman film, it is in the blood — I know this world from the bones out. It is a world I didn’t just see, but lived.

And for Sal, it was quite the opposite: Fellini felt to him like home, like everything he knew and felt in the fibers of his nervous system. 

In Bergman, all the action is internal; his characters are all suffering midnights of the soul. Their torture is self-imposed.

Fellini’s people, meanwhile, have trouble with other people, with society, with Catholicism, with Fascism, with their wives.

Bergman’s people sit silently, brooding. There is hardly a photo of Fellini himself without his hands waving in the air, expostulating.

This sense of recognition in the films, different for Sal and for me, has always made me wonder if there is something genetic about national difference. All the Squareheads I know feel Bergmanesque and all the Italians seem to feel Fellini-esque. This may just as easily have grown out of cultural familiarity as from DNA, and I’m not sure its origin makes a difference. 

I’m cheating a little with these images — not all German painting is so dour, nor all Italian painting so extravagant — but only to make a point. Still, there are national and regional styles, psychologies, approaches and techniques that show up across the arts. German painting is instantly told apart from Italian painting; French music from Viennese. Russians have their novels; Italians their opera; Iberians their fado and zarzuelas.

You can hear six bars of Elgar, Holst or Vaughan Williams and know they are English.

This has been recognized for centuries. In Baroque music, national styles were standard descriptions, as in Bach’s French Suites, his Italian Concerto, or his Overture in the French Style. And you could never confuse Telemann’s music with Vivaldi’s, or either with Couperin’s.

(In deliberately oversimplified terms, German music emphasizes harmony and counterpoint; Italian music emphasizes melody and singing; French music emphasizes timbre and ornament.) 

And there does seem to be a generalized North-South axis. Compare the Gothic cathedrals of northern France with those of Italy or Spain: in the north, a spare, austere style, even with all the statuary; in the south, a kind of Plateresque extravagance.

In the European south, expression seems to be extroverted and unrepressed; in the north, introverted and brooded over. It would be wrong to say that Italians are more emotional than Scandinavians. But in the north, the emotions are directed inward, whereas in the south, they are almost theatrical.

So far, I’m using European examples, but this national or folk identity is global. Chinese art is instantly identifiable. And except for those examples of conscious imitation, Japanese art is very different. Hindu sculpture on the Indian subcontinent is easily told from Buddhist sculpture in Southeast Asia. 

Nor, in Africa, could you confuse a Benin bronze with a Fang mask or a Kota reliquary figure. 

These differences are not merely stylistic, but grow from very different world views and historical experience. There is a world of difference between the Tlingit of the rainy Northwest Coast of North America and the Navajo of the desert Southwest. 

Many years ago, I was first made aware of this kind of difference when I moved from New Jersey to North Carolina and discovered a culture radically alien to the one I was brought up in. It was agrarian rather than suburban; it held a tremendous grudge from the previous century that had not made a twinkle of a dent in my Northern psyche. It had a sense of history tied to the land, whereas I grew up in a world of second- and third-generation immigrants. These kinds of cultural differences make their way into the art, whether it is the difference between Hemingway and Faulkner, or between Fellini and Bergman.

It may be only a metaphorical expression, but it is profound: It is in the blood. 

I got a call from Stuart last night. We don’t see each other in person as much as we used to, partly because of the virus, but mostly because we are old and long drives or flights are really hard on the knees. 

“I read your piece on threes,” he said (link here), “and I had a realization. In the past, you’ve written a lot about how we are all really two people — the public person, who is just one of seven billion others and of no real significance in the big scheme of things; and the interior person, who is the hero of our own story, and therefore central in existence.”

“Yes,” I said.

“And so, that is one more binary system, like hot and cold, or tall and short, or inside and outside. Our brains seem to like to divide things into pairs of opposites. Even though, as you say, hot and cold or tall and short are really just the same thing, relative to each other.”

“Yes, that old question: who is the shortest giant, or the tallest dwarf? A sunspot is a cold spot on the sun, but it is still thousands of degrees Fahrenheit.”

“I think what you said was there’s the burning end of a cigar, and the cold end, but there’s really only one cigar.”

“Yes,” I said, acknowledging that I once said something of the sort — my gloss on the Tao.

“But, I realized the experience of being alive can equally be seen as made up of three parts,” Stuart said. “The experience, I say — the way we experience our lives.

“In the old days,” he continued, “we would call those three things ‘Man,’ ‘Nature,’ and ‘Soul,’ but those terms are freighted with religion and gender bias. I don’t like them. So, instead, I call them ‘humankind,’ ‘the universe,’ and ‘the psyche.’ These three elements encompass our experience.”

“And ‘Nature’ conjures up too much flowers and trees and birds and bees,” I said. “It’s a term too cuddly for what you mean here, right?”

“Absolutely. I mean something closer to what Werner Herzog says about nature — the indifferent violence and coldness of the cosmos. 

“Let me take them one at a time,” Stuart said. “What I’m labeling as ‘Humankind’ is the societal and political mix, the way we fit into the ordering of the welter of human population. It includes such things as relationships — father and son, husband and wife, pastor and congregation, lord and serf, American and foreigner, really all of them you can name. It is what is between people. 

“This is essentially the same as your ‘just one of seven billion’ and is the public part of our existence. We are taxpayers, we are Catholics (or atheists), we are Tarheels or Mainers, we are children or senior citizens, Tory or Labour — “

“You’re getting kind of binary on me,” I said. 

“Sorry, I didn’t mean to. But these are all those interpersonal roles we have to play out daily and throughout our lives. 

“And the universe doesn’t care a fig about any of this. The universe is vast and operates on its own schedule and according to its own rules, none of which consider our human needs or desires. It is the universe that throws the dice as to whether we are born male or female; it is the universe that makes us work in the day and sleep at night; the universe that ends our life when it will. We think we have so much control, but in reality, we are able to nudge the universe only infinitesimally this way or that. In the last degree, the universe will do whatever it does. We don’t count.”

“It is the universe that took my Carole away from me five years ago. I had no choice.”

“Exactly,” Stuart said. “It’s something we just have to live with. The universe is an essential part of our experience of being alive and we accommodate to it. It makes no accommodation to us. 

“Then, there is the inner life we lead, as important as the other two, perhaps more important. It is not only the sense of ourselves as ourselves, but also, all the unconscious trash that we have to deal with that’s buried in the braincase, like superstition, hard-wired evolutionary neuronal structuring, the forgotten traumas of childhood that govern our choices in ways we’ll never know about, even that drive to see the world in patterns, patterns which may or may not actually be there.”

“Like constellations in the night sky,” I said. 

“Our brains force us to find patterns; it’s all part of the psyche.”

“But isn’t there some overlap in your system?” I asked. “You say, for instance, that family relationships are part of the ‘humankind’ portion of experience, but isn’t family also an archetype, a part of the built-in wiring of the psyche?” 

“In the terms I’m speaking of, I’m considering these as two separate things: the public understanding of family as a civic unit on one side; and the archetype of family as a mythic unit on the other. They may share a name, but they are very different things. There are quite a few examples of ideas that are seen differently through one of these three different lenses. There are even those who believe ‘family’ is a universal truth, although we know historically, families are constituted differently at different times in different cultures. 

“I expect you could look at most things through each of these lenses and find quite different results. Even the ‘individual’ has a political significance that is different from its psychic significance. To say nothing of its insignificance in the wider universe.”

“But there is a significance to the universal individual,” I said. “It is the ancient problem of the one and the many. The universe may be infinite, but it is still made up of individual parts, be they people, planets, muons or quarks. Each may be observed separated from the matrix.” 

“I’m seeing it all through the psychic lens,” Stuart said. “And not through the objective lens of science. I’m talking about our experience of being alive. And looking up at the starry night sky can be understood through each of these lenses. As a societal matter, you are an astronomer in your social role, or you are a dreamer wasting time. Through the universal lens, you are an utterly insignificant speck of organic dust …”

“Or, you are the universe looking at itself.”

“Perhaps. But through the psychic lens, you are the center of the universe, and it all revolves around you, certainly out of your reach, but the psychic center of the universe is yourself — each of us his or her own center.”

“And you are knocked out of your ego-centered reverie, when you get a jury summons, throwing you back into the social web,” I said. “Or getting a traffic ticket, or punching in at work.”

“And getting knocked from that reverie when the universe sends you the message that arthritis is chaining up your knuckles, or that your once-new car is rusting out in its undercarriage.

“None of these three lenses is sufficient on its own. In our full selfhood they blend together, but each is there nonetheless, and can be teased out and thought about separately. I think a healthy personality keeps them all in balance. A juggling act.”

We talked about many other things, as we usually do. It went on for about an hour. But this was the gist of the phone call, what I thought might be interesting to share. I miss Stuart and Genevieve in person. We had such good times. Isolation is not good. A curse on the universe for making us get old and for giving us viruses.

In 2003, idiosyncratic filmmaker Guy Maddin released his most popular film (these things are relative), starring Isabella Rossellini and titled The Saddest Music in the World, about a contest held in Winnipeg, Canada, to find the most depressing music in the world. Each country sent its representative to win the $25,000 prize put up by beer baroness Helen Port-Huntley (played by Rossellini with artificial glass legs filled with beer). The middle portion of the film features performances by many of the contestants. But the film misses the serious winner in its hijinks of weirdness.

Because the truly saddest music in the world is, hands down, Edward Elgar’s Cello Concerto, music that can only rip your heart out and leave you prostrate with Weltschmerz. For it is not simple personal grief that Elgar wrote into the music, but the sense of the core sadness of life, and the failure of the world he knew to survive the First World War.

Elgar was born in 1857, a decade before the deaths of Rossini and Berlioz, and became widely regarded as the first great English composer since Henry Purcell (1659-1695). He lived to see the rise of Adolf Hitler in Germany — a long life full of incident and occasion. 

But until the age of about 40, he was primarily a choral composer and a maker of musical trifles — salon music such as the ever-popular Salut d’Amour that is still an occasional encore piece. Still, that music gave him a reputation as one of the most important composers in England, second only to Sir Arthur Sullivan. He became an itinerant musician and sometime teacher, and, in 1889, married his student, Caroline Alice Roberts, to whom he remained married for 31 years. Because Elgar was Roman Catholic and from a working-class background, Alice’s upper-class family disapproved and disinherited her. But their marriage was successful and productive. After her death in 1920, Elgar wrote no more music of significance. He died in 1934. (Despite his Catholicism, he told his doctor at the end that he had no belief in an afterlife: “I believe there is nothing but complete oblivion.”)

Elgar always wanted to write more significant music, and then, in 1899, at the age of 42, he made his bid for musical importance with a set of variations for orchestra. The “Enigma” Variations remain his most popular and most performed composition, a brilliant set of 14 variations, each meant as a portrait of one of his friends.

The following year, he premiered his oratorio masterpiece, The Dream of Gerontius, setting the poetry of John Henry Cardinal Newman in a rich, late-Romantic orchestration rivaling the style of Richard Strauss. Its Catholic doctrine (a soul’s journey to Purgatory) may have prevented it from becoming as popular as it might have been, but it is lush and gorgeous.

In the years up to the First World War, Elgar wrote the bulk of what he is now famous for: his Introduction and Allegro for String Quartet and Orchestra (1905); First Symphony (1908); Violin Concerto (1910); Second Symphony (1911); Falstaff (1913). He was knighted in 1904, and a shower of awards and honors fell upon him after that.

He is usually thought of as an Edwardian composer, identified with those years of English jingoism and colonialism. And it is hard to shake that notion when you hear, once again, his Pomp and Circumstance marches. But he really was no Colonel Blimp. And the Great War defeated him, knocked him down and left him deflated. The world order he grew up in was ripped apart and left in tatters. As British Foreign Secretary Sir Edward Grey remarked in 1914, “The lamps are going out all over Europe; we shall not see them lit again in our lifetime.”

All the optimism and faith in progress that marked late Victorianism and the early years of the century were gone. It can be hard for us to imagine now, after the even worse cataclysms of WWII and gulags and the Cultural Revolution, just how monumental and devastating the Great War was. To those who lived through it, it was as if the world were ending. Twenty million deaths, the overturning of governments, the shift of world power from the Old World to the New.

All this, Elgar felt. At the end of the war, he summed up his despair in the Cello Concerto in E minor, his last great orchestral work, composed during the summer of 1919 at his cottage in Sussex, where in previous years he had heard the artillery at night rumbling across the Channel from France. After it, he could never work up the enthusiasm to write more.

The work is intensely beautiful, but also profoundly sad. All the sense of loss is bound up in it. And while Elgar’s music had regularly been applauded wildly at its premieres, sometimes encored twice, the Cello Concerto was a failure when it was first played at Queen’s Hall, London, in 1919, partly because it was under-rehearsed and badly played, but mostly because of its somber tone.

In fact, it languished, rarely programmed, until a 1965 recording by 20-year-old Jacqueline du Pré with John Barbirolli and the London Symphony — a recording that has never been out of print since its first release. Du Pré’s performance was so emotionally present and direct, so attuned to the music, that it has been the benchmark ever since. And it reawakened interest in the concerto, so that now almost any cellist worth his or her salt has it in the repertoire.

Around the same time as the concerto, Elgar wrote three great pieces of chamber music: his Violin Sonata, a string quartet and a piano quintet, all completed by 1920. After that, only trifles — reworkings of old music and orchestrations of the music of other composers. He found an interest in the new technology of recording and made a groundbreaking series of discs of his own music, recordings still available, now on CD.

But the war and Alice’s death seem to have taken the drive out of him. Elgar died Feb. 23, 1934 at the age of 76 and was buried next to his wife at St Wulstan’s Roman Catholic Church in Little Malvern, in the English Midlands.

Elgar was without doubt a great composer, but, as critic David Hurwitz has said, “a great composer but not a necessary one.” Elgar himself knew he was a bit of an anachronism, a late-Romantic composer in a century headed toward Modernism. By the time of his Cello Concerto, Stravinsky had already written his Sacre du Printemps and Schoenberg his Pierrot Lunaire. Some of the loss and longing of Elgar’s concerto is surely his sense of being adrift in the Luft von anderem Planeten — the “air from another planet” that Schoenberg announced.

The history of music would not have been any different had Elgar never composed — something that cannot be said of Stravinsky or Schoenberg. Yet, you cannot listen to Elgar’s best music — the two concertos, Gerontius or the “Enigma” Variations — and not sense in him a power and emotional sweep that lifts him to the first rank. In that sense, his music is indeed necessary.

__________________________________

“There is music in the air, music all around us, the world is full of it and you simply take as much as you require.”

—Edward Elgar, 1896

__________________________________

There are certain pieces of music that everyone knows, whether they know it or not. They are simply in the air. They are heard not just in concert halls, but in film, TV commercials, pop songs — and at every high school graduation ceremony in the English-speaking world. Edward Elgar’s Pomp and Circumstance March No. 1 is the graduation music — or at least the “trio” portion of it. 

It is the portion of the march that was rewritten, by Elgar himself, as the popular British patriotic song Land of Hope and Glory, now sung by home crowds at soccer victories and by the standing audience at the final Proms concert of the season in London, and played as the new high school graduates line up to receive their diplomas in school auditoriums everywhere.

The practice began at Yale University in 1905 when Elgar was awarded an honorary degree for which the composer traveled to the U.S. It was decided to surprise him by playing his own music for the event, and they played Pomp and Circumstance. The practice caught on elsewhere, and is now ubiquitous as “The Graduation March.” 

And so, generations of Americans know the music well, without knowing, necessarily, what the music is they are hearing. It’s just “the Graduation March” of unknown provenance, as if it had just always existed. 

There are other such tunes, well known but genericized. There is Wagner’s Bridal Chorus from Lohengrin — “Here comes the bride, all dressed in white…” — probably not recognized as coming from an opera. Or Mendelssohn’s Wedding March from his incidental music to A Midsummer Night’s Dream. Mendelssohn has ceased to own it: It is truly public domain.

For many Americans, our first exposure to classical music came from Warner Brothers cartoons, and Bugs Bunny conducting Liszt’s Hungarian Rhapsody No. 2 in a parody of Leopold Stokowski. Or Elmer Fudd singing “Kill the Wabbit. Kill the Wabbit” to the tune of Wagner’s Ride of the Valkyries.

That Hungarian Rhapsody got heavy use in animated films. Not only Bugs conducting, but Bugs playing the piano, and others, from Tom and Jerry to Woody Woodpecker, also took their turns with Liszt.

Or maybe watching the Lone Ranger on TV and hearing the William Tell Overture. Or television reruns of the old Buster Crabbe serials of Flash Gordon, with Liszt’s Les Preludes as its persistent soundtrack. 

Some scores have been used scores of times in movies. Samuel Barber’s heartbreaking Adagio for Strings has shown up in at least 32 films and TV shows, including The Elephant Man (1980); El Norte (1983); Platoon (1986); Lorenzo’s Oil (1992); The Scarlet Letter (1995); Amelie (2001); S1mOne (2002); and three episodes of The Simpsons.

Popular songs have stolen classical tunes. The Minuet in G from Bach’s “Anna Magdalena Bach Notebook” became A Lover’s Concerto, recorded in 1965 by the girl group The Toys. The Big Tune from Rachmaninoff’s Second Piano Concerto became Full Moon and Empty Arms, sung by Frank Sinatra in 1945. Freddy Martin turned Tchaikovsky’s First Piano Concerto into Tonight We Love. I’m Always Chasing Rainbows was originally Chopin. The “Negro spiritual” Goin’ Home was actually taken from Antonin Dvorak’s “New World” Symphony. Stranger in Paradise is one of Borodin’s Polovtsian Dances, and Baubles, Bangles and Beads comes from his Second String Quartet.

The list goes on: Tin Pan Alley was full of burglars. Hot Diggity, Dog Ziggity was based on Chabrier’s España; Catch a Falling Star (and Put It in Your Pocket) was based on the Academic Festival Overture by Brahms; Love of My Life, by Dave Matthews and Carlos Santana, is from Brahms’ Third Symphony. You can find hundreds of these “steals” on Wikipedia.

I asked my brother, Craig, for any examples he might think of, and he sent back this barrage:

“So, what’s Classical? It’s probably a close cousin to porn — I know it when I hear it.

“There are Classical music quotes all over TV and movies, and my Classical education was jump-started by Bugs Bunny and his friends, and a little later by Fantasia and Silly Symphonies. A really surprising amount of music got introduced to me in cartoons.

“But that is a different thing from pieces of music being a part of our lives, like ‘Here comes the bride,’ and ‘Pomp and circumcision’ at every graduation ever. WW2’s theme song was Beethoven’s 5th. The military loves them some Sousa marches. Everybody knows the more popular Ave Maria. Everyone knows some snippet of Figaro. In fact there are a ton of little pieces of operas that we are all a little familiar with, even if we can’t name the opera or composer. If we played “Name that Carmen” most people would say, oh, yeah, I know that, but couldn’t name the source. There are a whole poop load of snippets we’ve all heard without knowing where they came from. (Thanks, Bugs.)

“Toccata and Fugue, a passel of Puccini, Bolero (Thanks, Bo Derek), The Blue Danube, the Saber Dance (Thanks Bugs?), the Ritual Fire Dance (Thanks Ed Sullivan and the plate spinners), Für Elise, Chopsticks (Thanks, Tom Hanks), the Ode to Joy, the thoroughly quotable Swan Lake, Ride of the Valkyries (Thanks Robert Duvall), Voices of Spring, O Fortuna from Carmina Burana (Thanks Madison Avenue), Funeral March of a Marionette (Thanks, Hitch), Adagio for Strings, The Skater’s Waltz, Minuet in G, the Olympics theme, “Moonlight” Sonata, anything from the Nutcracker, The Sorcerer’s Apprentice (Thanks, Uncle Walt).

“This is a pretty pointless list, because it is kind of endless. And I might well be making a baseless assumption about Americans’ familiarity with these things, just like I am thinking that Americans’ familiarity with these pieces of music is inversely proportional to the number of guns they own.

“Pachelbel’s Canon in D (which is much more memorable than his Canons A through C), Flight of the Bumblebee (the theme music for The Green Hornet, which is I’m sure too obscure a reference to be very useful), the 1812 Overture (Thanks, Boston Pops and Quaker Puffed Rice). A Clockwork Orange made the point pretty directly about Classical music being embedded in the culture, with street thug Alex loving Beethoven deeply. Strauss’ Zarathustra in 2001 (Thanks, Stanley). Flash Gordon used Liszt’s Les Preludes (which I have had playing nonstop in my head since I thought of it). Lugosi’s Dracula used Swan Lake (which always strikes me as a real cheapskate move).

“The Lone Ranger music isn’t even a decent trivia question anymore because everyone knows the musical source.

“So I’m just throwing up blunderbuss answers to ‘what’s embedded in our culture.’ So I’m gonna stop here. I hope there’s something you can use.”

And I’m sure, you, dear reader, can think of many more.

Everyone has at least one minority taste — a love of some obscure discipline that the vast majority of the public find uninteresting or unimportant. It could be stamp collecting or motocross racing. The majority watch popular shows on TV, listen to Top-40 music and read best-sellers. But pick any individual from such an audience, and you’ll find at least one out-of-the-way obsession. Surfing, perhaps, or Civil War re-enacting. 

For those lucky or persistent enough, this may turn out to be a vocation: Universities are full of those who have turned their love of Medieval linguistics or non-Newtonian physics into a meal ticket. In fact, this is where we expect to find these eccentrics. It is their niche. 

But there are a few of us, a benighted few, whose lives are made up entirely out of the odder corners of life, who have almost no popular tastes and have not turned our weird fascinations into a job. We are the outcasts who love all those things that normal people find irrelevant, and we bury ourselves in the obscure, arcane, esoteric, hermetic or recondite. 

I cannot speak for others of our brotherhood (and sisterhood), but I’m afraid I was born that way. It was not a reaction to anything — no childhood traumas drove me away from things popular; no deprivations led me to seek fulfillment in those oddments of culture I find so absorbing. 

From as far back as I can remember, my interests were not those of my peers. I heard classmates complain about school, having to learn things they didn’t feel they would ever need to know in life. And I admit, it is very seldom I have ever needed to calculate the area of a circle. But I loved school from first grade on.

In the early grades, I adored diagramming sentences. I spent free moments between classes in the school library. I never found sports persuasive. I was in dire peril of losing myself in something as abstruse as lepidoptery or the history of bottle making. In third grade, I could tell you anything you wished to know about the Mesozoic Era — rather more than you would wish to know, really.

I grew up just outside New York City, and spent many fine hours at the American Museum of Natural History, in its darker recesses, and at the Hayden Planetarium. 

As a teenager, when everyone else was listening to Paul Anka or Chubby Checker, I was listening to Leonard Bernstein. My Four Seasons was Antonio Vivaldi, not Frankie Valli. My make-out music was Stravinsky. Honestly, I’m not making that up.

I am not claiming special merit for my tastes. There is great value in the best pop music, and some of our classic authors were best-sellers in their own time. So I’m not making a case for being high-brow, but rather confessing my own weirdness, my own unfitness for human society. 

Not all my minority tastes are so high-falutin’ as Orlando di Lassus. I have in my bones more specialized knowledge of 1930s B Westerns than should block up any segment of a person’s long-term memory bank. Do you know the difference between Ken Maynard and his brother, Kermit? Can you name even one of the cast line-up of the ever-changing Three Mesquiteers? I can. The same for science-fiction movies from the 1950s. They are all there, clogging my brain-case. 

As I take inventory of what is boxed up in my brain-attic, I find any number of things most people don’t care about. In fact, what most people don’t care about pretty well defines who I am. 

When visiting France, I never went to the Eiffel Tower, but did drive through all of the north, visiting Gothic cathedrals. I’ve been to Chartres three times, and in Paris, Notre Dame was practically a second home. I cannot remember how many visits to it. So, yes, my tastes are not the normal tastes. 

On weekends, I watch C-Span’s “Book TV.” I search YouTube for college lectures. I have a huge collection of Great Courses DVDs. 

When it comes to movies, I love them slow and arty, preferably with subtitles. I have all of Tarkovsky on DVD, all of Almodovar, and all that are available of Robert Bresson, Eric Rohmer and Jacques Rivette. And tons of Bergman and Herzog and Renoir. I would have a bunch of Marcel Pagnol, but there isn’t a bunch. Nor is there much of Guy Maddin available, but if you ever needed bona fides as a weirdo, a confessed love of Maddin’s films is proof. 

Then, there’s classical music. If I had to lose a sense, I would ask for sight to go before hearing. I need music. Nothing else so precisely both describes and evokes the most profound human emotions. My insides swell up when I listen to the greatest music. Pop music does an excellent job of pumping up energy and cheerleading for the happiest emotions. But classical music is needed to speak for grief, transcendence, fear, anxiety, love, power — and even more, the interplay between all these feelings. The virtue of popular music is its simplicity and directness; that of classical music is its complexity and depth.

But even amidst the classical repertoire, I find myself drawn to the outskirts. Yes, I love my Beethoven and Brahms, but I also love my Schoenberg, my Morton Subotnick, my Colin McPhee. And even when dealing with Beethoven, I’m more likely to pull up the Grosse Fuge than the Appassionata. 

Then, there’s my reading. The authors I most often re-read are Homer and Ovid. I collect Loeb Library editions. I have seven translations of the Iliad on my shelves behind my writing desk. Five Odysseys. 

And not just Greek or Roman lit. I can’t count the number of times I’ve been over Beowulf. There’s the poetry of Rumi and Basho. I’ve read two different translations of the Indian Mahabharata. I am currently reading two very different translations of Gilgamesh: one a line-by-line literal translation of the extant fragments, the other a retelling that weaves the varied surviving bits of the epic into a single version. Comparing the two gives me a better handle on Mesopotamian thought and literature. They join the two earlier translations I had already read.

I have often wondered why I am so out of step with my fellow beings. Any one of them might well enjoy any one of the things I’ve mentioned, but the concatenation of them defines me. You can see the wide range of things I write about in this blog. 

My late wife used to say I’m “the man who can’t have fun,” and laugh at me because I cannot bear musical theater, don’t dance, don’t listen to pop music, don’t read popular novels, and lord save me from theme parks. I shudder. But I respond that I have lots of fun with my oddments. I get tremendous pleasure from string quartets or visiting art galleries or reading multiple translations of German poetry. 

If we are what we eat, we are also what we read, see and listen to. It all goes into us and feeds us, body and soul, and fashions who we have become. For better or worse.

Cartoonist Reg Manning said you could take in all of Arizona in three great bites. Manning was the Pulitzer Prize-winning political cartoonist for The Arizona Republic from 1948 to 1971. And he wrote a book, What is Arizona Really Like? (1968), a cartoon guide to the state for newcomers. (He also wrote a cartoon guide to the prickly vegetation of the state, What Kinda Cactus Izzat? Both are required reading for any Arizonan.)

Anyway, his introduction to the state bites off three large chunks: the Colorado Plateau in the north; the mountainous middle; and the desert south. It is a convenient way to swallow up the whole, and works quite well. 

I spent a third of my life in Arizona and I traveled through almost every inch of the state, either on my own or for my newspaper, and I never found a better explanation of the state (and state of mind) than Manning’s book. Admittedly, the book can be a bit corny, but its basis is sound. 

But my life has been spent in other parts of the country as well, and I came to realize that Manning’s tripartite scheme could work quite as well for almost any state. 

I now live in North Carolina, which is traditionally split in three, with the Atlantic Coastal Plain in the east; the Piedmont in the middle; and the mountains in the west. The divisions are quite distinct geologically: The escarpment of the Blue Ridge juts up from the Piedmont, and the coastal plain begins at the Fall Line — a series of dams, rapids and waterfalls that long ago provided the power for industry.

But that is hardly the only example. I grew up in New Jersey, which could easily be split into the crowded suburban north, where I grew up; the almost hillbilly south, which is actually below the Mason-Dixon Line, and where the radio is full of Country-Western stations (and who can forget the “Pine Barrens” episode of The Sopranos?); and finally, the Jersey Shore, a whole distinct universe of its own. 

And when I lived in Seattle, Washington state clearly divided into the empty, dry east and the wet, populated coast, separated by the Cascade Mountains. Oregon was the same. It divided up politically the same way: a redneck east, a progressive west, and a mountain barrier between.

So, I began looking at other states I knew fairly well. South Carolina and Georgia follow North Carolina with its mountains (“upcountry” in South Carolina), its Piedmont and its coast. Even Alabama does, although its coastal plain borders the Gulf of Mexico. 

Florida has its east coast; its west coast; and its panhandle — all quite distinct in culture. Michigan has its urban east, its rural west and then, hardly part of the state, the UP — Upper Peninsula. There’s lakefront Ohio, riverfront Ohio and farmland Ohio. 

Maine has a southern coast that is prosperous and filled with tourists; a northern coast (“Down East”), which is sparsely populated and mostly poor; and a great interior, which is all lakes, forests and potatoes. 

Massachusetts has its pastoral western portion, with its hills and mountains; its urban east, centered on Boston; and then there’s Cape Cod, a whole different universe. 

Heck, even Delaware, as tiny as it is, has its cities in the north, its farms on the Delmarva Peninsula and its vacationland ocean shores. 

Go smaller still. Draw a line down the center of Rhode Island and everything to the west of the line might as well be Connecticut. For the rest, Providence eats up the northern part, and south of that, Rhode Island consists of islands in Narragansett Bay.

Colorado has its Rocky Mountains and its eastern farmlands, separated by the sprawling Denver metropolitan area. 

But I don’t want to go through every state. I leave that to you. Indiana has its rural south, its urban midlands with Indianapolis, and that funky post-industrial portion that is just outside Chicago. Oy. 

Yet, as I looked at that first state, defined by Manning’s cartoons, I realized that each third of Arizona could be subdivided into its own thirds. This was getting to be madness. 

The Colorado Plateau is one-third Indian reservation, both Navajo and Hopi; one-third marking the southern edge of northern Arizona in what might be called the I-40 corridor of cities and towns from Holbrook through Flagstaff and on through Williams to Kingman; and a final third that encompasses the Grand Canyon and the remote Arizona Strip. 

The mountainous middle third of the state includes the Mogollon Rim and its mountain retreats, such as Payson; another third that is the Verde Valley; and a final third, the Fort Apache and San Carlos Indian reservations.

Finally, in the south and west, there is the urban spread from just north of Phoenix south through Tucson, nowadays continuing almost to Nogales and Mexico; there are the Chihuahuan Desert portions in the southeast, from Douglas through the Willcox Playa; and in the southwest, the almost empty desert including the Tohono O’odham Indian Reservation, the Barry Goldwater bombing range, and up to the bedraggled haven of trailer parks that is Quartzsite. And no, I’m not forgetting Yuma.

It’s a sort of “rule of thirds” applied to geography. It seems almost any bit of land can be sliced in three. Phoenix itself, as a metro area, has the East Valley, including Mesa, Scottsdale and Tempe; central Phoenix (which itself is divided into north Phoenix, the central Phoenix downtown, and the largely Hispanic south Phoenix), and the West Valley, which nowadays perhaps goes all the way up through Sun City to Surprise. 

Here in North Carolina, the Coastal Plain runs through the loblolly pines and farmland of eastern North Carolina; into the swampy lowlands of marshy lakes and tidal rivers; and on to the Outer Banks and the ocean. Another tripartite division. The Piedmont has its Research Triangle; its Triad of Greensboro, High Point and Winston-Salem (extending out to Statesville and Hickory); and the Charlotte metro area. And the mountains include first the northern parts of the Blue Ridge, around Boone and Linville Falls; second, the Asheville area, which is a blue city in a red state; and finally the southern mountains around the Great Smokies. Thirds, thirds, thirds.

Even Asheville itself comes in three varieties: East Asheville (where I live); downtown (where the drum circle is); and West Asheville (where the hippies live). West Asheville is actually south of downtown. Why? I dunno.

You can go too far with this and I’m sure that I have. After all, it’s really rather meaningless and just a game. But you can divide California into thirds in three completely different ways. 

First, and too easily, there is Southern California; central California; and northern California. Each has its culture and its political leanings. But you can also look at it as desert California, including Death Valley and Los Angeles; mountain California, with the Sierra Nevada running like a spine down the long banana-shaped state; and populated California north of LA and in the Central Valley. 

Finally, you can split California into rural, farming California, which feeds the nation from the Central Valley; wilderness California, with its deserts and mountains; and entertainment California, from Hollywood and LA up through San Francisco and Skywalker Ranch (and all the wine), which keeps America preoccupied with vino et circenses.

The U.S. as a whole is often looked at as the East, the Midwest and the West. The East then subdivides into the Northeast, the Middle Atlantic States and the South; the Midwest has its Rust Belt, its Corn Belt and its Wheat Belt. The West has its Rocky Mountain States, its Pacific Coast states and, well, Texas.

And, I suppose, if you look at the world in toto, you have the West, including Europe, North America and Australia; you have Asia, or the East, which includes China, Asian Russia, and most of the Muslim nations; and the Third World, which comprises most of the rest. You can quibble over whether Japan is Asian or a First World nation; and India seems caught between the two, with growing prosperity and growing poverty at the same time.

These distinctions are coarse and could well be better defined and refined. And I mean nothing profound — or even very meaningful — with this little set of observations. It is an exercise in a habit of thinking. If anything, I just mean it as a counterbalance to the binary cultural prejudice of splitting everything up into pairs. There is a countervailing cultural pattern that prefers threes to twos. I wrote about this previously in a different context (link here). 

We think in patterns and well-worn templates. But the world doesn’t often present itself in patterns. The world lacks boundary lines, and the universe is a great smear of infinite variety. The mental template allows us to organize what is not, in reality, organized. The most pervasive template is the binary one, but we are entering an increasingly non-binary culture. Of course, thirds is only one alternative pattern. Perhaps it is best to ignore patterns and look fresh at the evidence.

The patterns are roadmaps for thought, and we can too easily take the easy route and fit the evidence to the pattern rather than the reverse. 

The Seventeenth Century produced in Europe giants of science and philosophy and gave birth to Western Modernism. Their names are a pantheon of luminaries: Francis Bacon; Galileo Galilei; Thomas Hobbes; René Descartes; Blaise Pascal; Isaac Newton; Johannes Kepler; Gottfried Wilhelm Leibniz; Baruch Spinoza; John Locke — names that mark the foundations of the culture we now live in.

But during their lifetimes, their pioneering work remained the province of a rare sliver of humankind, those few of similar intellectual gift who could understand and appreciate their thought. The mass of the European population remained illiterate, and subject to centuries-old traditions and institutions of monarchy and religion. It wasn’t until the next century that the dam broke and the results of rationalism and empiricism made a wide splash in society, in a movement that congratulated itself as The Enlightenment.

And in the center of it all, in France, was Denis Diderot, one of the so-called “philosophes,” a group of writers and thinkers advocating secular thinking, free speech, the rights of humans, the progress of science and technology, and the general betterment of the human condition. 

Among the philosophes were Voltaire, Montesquieu, Abbé de Mably, Jean-Jacques Rousseau, Claude Adrien Helvétius, Jean d’Alembert, the Marquis de Condorcet, Henri de Saint-Simon, and the Comte de Buffon. They wrote about science, government, morals, the rights of women, evolution, and above all, freedom of speech and freedom from dogma. They advocated the expansion of knowledge and inquiry. And they didn’t write merely for other intellectuals, but for a wider, middle-class literate readership. It was a blizzard of books, pamphlets and magazines.

Denis Diderot was born in 1713 in the Champagne region of France, the son of a knife-maker who specialized in surgical equipment. His father expected him to follow in the family business, but Diderot first considered joining the clergy, then studied law, and by the early 1740s had dropped out to become a professional writer, a métier that paid little and brought him into conflict with the royal censors with notorious frequency.

He translated several works, including a medical dictionary, and in 1746, he published his Pensées Philosophiques (“Philosophical Thoughts”), which attempted to reconcile thought and feeling, along with some ideas about religion and much criticism of Christianity. 

He wrote novels, too, including, in 1748, the scandalous Les Bijoux Indiscrets (“The Indiscreet Jewels,” where “jewels” is a euphemism for vaginas), in which the sex parts of various adulterous women confess their indiscretions to a sultan whose magic ring can make them talk.

His most famous and lasting novel is Jacques le Fataliste et son Maître (“Jacques the Fatalist and his Master”), from 1796, a picaresque comedy in which the servant Jacques relieves the tedium of a voyage by telling his boss about various amorous adventures. 

But Diderot is remembered primarily for his work on the Encyclopédie, which he edited along with Jean le Rond d’Alembert, and for which he wrote some 7,000 entries. It was published serially and periodically revised from 1751 to 1772, and mostly published outside of France and imported back in — censorship was strict, and many books were printed in the Netherlands or Switzerland to avoid French government oversight.

In fact, Diderot spent some months in prison at Vincennes in 1749, officially for his freethinking Lettre sur les aveugles, just as the Encyclopédie project was getting underway.

There had been earlier attempts at encyclopedias, including Ephraim Chambers’ Cyclopædia, or an Universal Dictionary of Arts and Sciences, published in London in 1728, and John Harris’ 1704 Lexicon Technicum: Or, A Universal English Dictionary of Arts and Sciences: Explaining Not Only the Terms of Art, But the Arts Themselves. The 18th century wallowed in long book titles.

Among the projects of this age with an appetite for inclusiveness was Samuel Johnson’s Dictionary of the English Language of 1755. 

And the idea of binding all of human knowledge up in a single work goes all the way back to the Naturalis Historia of Pliny the Elder in the First Century CE.

But none of these were as compendious in intent as the French Encyclopédie, which initially ran to 28 volumes and included 71,818 articles and 3,129 illustrations. It comprised some 20 million words over 18,000 pages of text. It was a huge best-seller, earning a profit of 2 million livres for its investors.

In his introduction, Diderot wrote of the giant work, “The goal of an Encyclopédie is to assemble all the knowledge scattered on the surface of the earth, to demonstrate the general system to the people with whom we live, & to transmit it to the people who will come after us, so that the work of centuries past is not useless to the centuries which follow, that our descendants, by becoming more learned, may become more virtuous & happier, & that we do not die without having merited being part of the human race.”

In the article defining “encyclopedia,” Diderot wrote that his aim was “to change the way people think.” 

Their goal was no mean or paltry one, but to encompass everything known to humankind: so that, according to Diderot himself, if humankind descended once again into a Dark Age and just one copy of his Encyclopédie survived, civilization could be reconstructed from reading its pages.

A good deal of its content concerned technical issues, such as shoe-making or glass blowing. But other articles addressed political and religious ideas. These are what got the Encyclopédie contributors into legal trouble. The Catholic church and the monarchy were not happy about the generally deist and republican leanings of its authors. 

And there were a lot of authors. Most of the leading philosophes wrote one or another of the entries. Louis de Jaucourt wrote some 17,000 of them — about a quarter of the total. Each of the contributors wrote about his specialties. D’Alembert, who was a mathematician, wrote most of the math entries. Louis-Jean-Marie Daubenton took on natural history. Jean-Jacques Rousseau wrote about music and political theory. Voltaire on history, literature and philosophy.

All under the editorship of Diderot and d’Alembert, and, after 1759, by Diderot alone. 

Diderot divided all of human knowledge into three parts: memory, reason, and imagination. In his Preliminary Discourse to the Encyclopedia of Diderot, d’Alembert explained these as “memory, which corresponds with History; reflection or reason, which is the basis of Philosophy; and imagination, or imitation of Nature, which produces Fine Arts. From these divisions spring smaller subdivisions such as physics, poetry, music and many others.”

In fact, d’Alembert asserts, all of human knowledge is really just one big thing: a unified “tree of knowledge,” which, if we could grasp it, would explain everything with a single simple principle, an idea that rather prefigures the unified field theory of modern physics.

It would be hard to overemphasize the influence of the Encyclopédie in the 18th century and in the political changes in France up to and through the Revolution. The Encyclopédie disparaged superstition, of which its authors counted religion an example, and it saw the purpose of government as the welfare of its people, with the authority of government derived from the will of its citizens. The king existed, they said, for the benefit of the people, and not the people for the benefit of the monarchy.

It’s no wonder, then, that the church and the aristocracy tried to suppress parts of the Encyclopédie, and that many of its authors spent time in prison. 

Its successor, the Encyclopædia Britannica, wrote of Diderot’s labors, “No encyclopedia perhaps has been of such political importance, or has occupied so conspicuous a place in the civil and literary history of its century.”

Beyond the Encyclopédie, Diderot continued as a freelance writer, as an art and theater critic, a playwright, novelist, political tract writer and freethinker. 

But despite his fame and productivity, Diderot never made much money from his work, and when the Russian empress — and groupie to the philosophes — Catherine the Great heard of his poverty, she offered to buy his extensive library, paying him an enormous sum for the books plus a salary for his employment as librarian of his own collection.

In 1773, Diderot traveled to St. Petersburg to meet Catherine. Over the next five months, they talked almost daily, as Diderot wrote, “almost man-to-man,” rather than monarch to subject. 

Catherine paid for his trip in addition to his annuity, and in 1784, when Diderot was in declining health, she arranged for him to move into a luxurious suite in the rue de Richelieu, one of the most fashionable streets in Paris. He died there just weeks later, at the age of 70.

Despite her admiration for Diderot and his revolutionary ideas, Catherine ignored all of them in her own autocratic rule of Russia. But Diderot and his Encyclopédie pointed the way to the Declaration of the Rights of Man and of the Citizen, the triumph of democracies, and even the American Declaration of Independence and Constitution. 

According to philosopher Auguste Comte, Diderot was the foremost intellectual in an exciting age, and according to Goethe, “Diderot is Diderot, a unique individual; whoever carps at him and his affairs is a philistine.” 

————————————————————

“On doit exiger de moi que je cherche la vérité, mais non que je la trouve.”

“It can be demanded of me that I seek the truth, but not that I find it.”

—Denis Diderot, Pensées Philosophiques (1746)

 ————————————————————

When I was a young man, more than a half century ago, I had a simple ambition: to know everything. I suppose I was thinking mainly of facts; I had no inkling of anything that couldn’t be named and catalogued. I wanted to read everything, name every bird, wildflower and tree, and understand every philosopher. I read all the poetry I could find, listened to all the symphonies and quartets and attempted to ingest all of astronomy and physics and history. Yes, I was an idiot.

As early as the second grade, I believed that when I got to college, then I would finally have access to everything. And so, when I went to Guilford College in North Carolina, I couldn’t wait and in my first semester, I signed up for 24 credit hours of courses. I had to get permission from the dean for the extra hours above the normal 18 that for most students was a full course load. I grabbed Ancient Greek language, astronomy, Shakespeare, the history of India, esthetics, music theory — and over four years, everything I could think of. 

To my surprise and disappointment, not all of it was as edifying as I had hoped and not all the professors as brilliant as I had imagined. Still, it was a lot better than high school. 

What I sought was knowledge that was encyclopedic, encompassing all there was to know. Yes, I know now that all this was silly. I was young, naive and idealistic. Always a poor combination. 

The match that ignited this quest was probably the first actual encyclopedia I had. When I was in grade school, our next door neighbor, who worked for Doubleday publishing in New York, gave us boxes of books, mostly old, and that included a full set of Compton’s Pictured Encyclopedia — probably the set our neighbor had when he was a boy. It dated from the 1930s and had great imagination-burning articles on such things as “The Great War,” dirigibles, and steam locomotives. The endpapers of each volume included illustrations of such things as elevated roads, autogyros, and speedboats. 

It didn’t matter that much in the set was out of date. It was a multi-volume key to unlock a whole world.

Later, our parents bought a more up-to-date Funk and Wagnalls encyclopedia, purchasing a single volume each week through a promotional deal at the A&P supermarket. It was a much cheaper production, on cheaper paper, with blank endpapers, but at least it included the Second World War. 

All through my childhood and adolescence, I would grab a volume and randomly read entries. I would pore over its pages, reading it all for fun. When I had to write a term paper in high school, I did my research in our Funk and Wagnalls. 

I can’t say I read every article in the whole encyclopedia, but I may have come close. 

And as I grew, my ambition grew: I wanted, more than anything, to own the two great compendia of all human knowledge: The Oxford English Dictionary and the Encyclopedia Britannica. Both were well out of my price range, but I lusted, the way most boys my age lusted after Raquel Welch. 

Years later, after college, the OED was published in a two-volume compact form, with microscopic print and a magnifying glass to read it, and I managed to get a copy through signing up for a book-of-the-month club. I still have it, and I still browse through it to find random words, their histories and the curious way language changes over the years. 

The Britannica took longer. The Encyclopædia Britannica, or, A Dictionary of Arts and Sciences, compiled upon a New Plan, was first published in Edinburgh in 1768, as an answer to Diderot’s Encyclopédie. At first, it was bound in three equally sized volumes covering: Aa–Bzo; Caaba–Lythrum; and Macao–Zyglophyllum. There have been 15 editions in all, each continually updated, making the Britannica a constantly evolving entity. It was for a time owned by Sears, Roebuck and eventually migrated to the University of Chicago. Currently it is privately owned and available only digitally; the last printed edition appeared in 2010.

It wasn’t until I was in my 30s, working as a teacher in Virginia, that I found an old used set of Britannicas at a giant book sale held annually in the city’s convention center. It was an 11th Edition version — still regarded as the most desirable edition. I felt like Kasper Gutman finally getting his hands on the Maltese Falcon. But unlike Gutman, when I unwrapped my prize, it was the real thing.

It sat, in pride of place, on my bookshelves, more as trophy than anything else. And when we moved to Arizona, I had to give it up in the great divestment of worldly goods necessary to truck our lives across a continent. I hated to give it up, but had to admit, I wasn’t using it as much as I had expected. I had an entire library of other books that I could consult. 

Then, in Arizona, I came across a more recent edition of the Britannica for sale at Bookman’s, a supermarket-size used book store in Mesa, Ariz. It was the version divided into a “macropedia” and “micropedia.” I bought it to replace the earlier version I had once coveted. 

I have never warmed to this version of the encyclopedia — a smaller set with simpler, introductory articles about a wider range of subjects, and a longer set with in-depth scholarly articles about a narrower range of more commonly referenced subjects. It felt dumbed down — and worse, confusing, because you could never quite tell whether to consult the micro- or the macropedia first.

But at least, I still owned a Britannica, and felt that somehow, I possessed, if not the actual knowledge of the universe, at least access to it.

The end of Britannica was also the end of my obsession with it. With the advent of Wikipedia, I no longer needed to shuffle through the pages of multiple volumes, sort through indexes, or cross-reference material. In researching a story for my job as art critic with the newspaper, I could just go online and get the birth date of Picasso or the list of art at the Armory Show of 1913. Wikipedia was easier to use, and for my purposes, just as accurate as my beloved Britannica.

I cannot now imagine being a writer without Wikipedia. If I need a date or must check a spelling, the answer is instantly available.

And just as I spent time as an adolescent swimming through my Compton’s or Funk and Wagnalls, reading random articles for the fun of it, I now spend some portion of my time sitting in front of my computer screen hitting the “random article” button on Wikipedia to read about things I wouldn’t have known to be interested in. Lake Baikal? Yes. Phospholipidosis? It is a “lysosomal storage disorder characterized by the excess accumulation of phospholipids in tissues.” De Monarchia? A book by Dante Alighieri from 1312 about the relationship of church and state, banned by the Roman Catholic Church. I know of some politicians who might profit by reading Dante. 

It’s fun picking up random bits of information like this. But it also demonstrates why my interest in owning all the world’s knowledge in book form has evaporated. 

First, the cosmos is infinite and packing 20 volumes of an encyclopedia with information about it is really like taking a teacup to the ocean. Second, knowledge keeps changing and growing. What we thought we knew a hundred years ago has been replaced by more complete data and theory — and so knowledge is not so much a teacup as a sieve. 

Then there is the even knottier problem: knowledge isn’t even the most important part of understanding. Facts are good, and I wouldn’t want to be without them, but infinitely more essential is the interrelationship among them; the complexity of the human mind as it interacts with what it knows, or thinks it knows; the moiling stew that is the mix of thought and emotion; the indistinct borders of learning and genetic inheritance; the atavistic tribalism that seems to overcome any logic; the persistence of superstition, magic and religion in how we understand our Umwelt; and ultimately, the limitations of human understanding — how much more is there that we not only don’t know, but cannot know, any more than a goldfish can understand nuclear fission.

The reality of our existence is both infinite and unstable. Trapping it in print is an impossibility. It swirls and gusts, churns and explodes. Any grasping is grasping handfuls of air. We do our best, for the nonce, and must be satisfied with what we can discern in the welter. 

I think of Samuel Johnson’s heartbreaking preface to his 1755 Dictionary, which every thoughtful person should read and lock to mind. “To have attempted much is always laudable, even when the enterprise is above the strength that undertakes it: To rest below his own aim is incident to every one whose fancy is active, and whose views are comprehensive; nor is any man satisfied with himself because he has done much, but because he can conceive little. … When I had thus enquired into the original of words, I resolved to show likewise my attention to things; to pierce deep into every science, to enquire the nature of every substance of which I inserted the name, to limit every idea by a definition strictly logical, and exhibit every production of art or nature in an accurate description, that my book might be in place of all other dictionaries whether appellative or technical. But these were the dreams of a poet doomed at last to wake a lexicographer. … I saw that one enquiry only gave occasion to another, that book referred to book, that to search was not always to find, and to find was not always to be informed; and that thus to pursue perfection, was, like the first inhabitants of Arcadia, to chase the sun, which, when they had reached the hill where he seemed to rest, was still beheld at the same distance from them.”

Amen.

I have no belief in ghosts, spirits or ouija boards and I don’t believe that the past hangs on to the present to make itself palpable. But I have several times experienced a kind of spooky resonance when visiting certain famous battlefields. 

The thought re-emerged recently while watching a French TV detective show that was set in Normandy, and seeing the panoramas of the D-Day landing beaches. I visited those beaches a few years ago and had an overwhelming rush of intense sadness. It was inevitable to imagine the thousands of soldiers rushing up the sands into hellish gunfire, to imagine a thousand ships in the now calm waters I saw on a sunny day, to feel the presence in the concrete bunkers of the German soldiers fated to die there manning their guns. 

The effect is entirely psychological, of course. If some child with no historical knowledge of the events that took place there were to walk the wide beach, he would no doubt think only of the waves and water and, perhaps, the sand castles to be formed from the sand. There is no eerie presence hanging in the salt air. The planet does not record, or for that matter, much note, the miseries humans inflict on each other, and have done for millennia. 

But for those who have a historical sense, the misery reasserts itself. Imagination brings to mind the whole of human agony. 

Perhaps I should not say that the earth does not remember. It can, in certain ways. Visiting the woods of Verdun, I saw the uneven forest floor, where the shell craters have been only partially filled in. During the battle, the trees were flattened by artillery, leaving a moonscape littered with corpses. The trees have grown back, but the craters are still discernible in the wavy forest floor.

This sense came to me first many years ago, visiting the Antietam battlefield in Maryland. There is a spot there now called Bloody Lane. Before Sept. 17, 1862, the short dirt lane was called the Sunken Road, and it was a shortcut between two farm roads near Sharpsburg, Md. All around were cornfields rolling up and down on the hilly Appalachian landscape.

The narrow dirt road, depressed into the ground like a cattle chute, now seems more like a mass grave than a road. And it was just that in 1862, when during the battle of Antietam Creek, Confederate soldiers mowed down the advancing Federals and were in turn mowed down. The slaughter was unimaginable.

You can see it in the photographs made a few days after the battle. The soldiers, mostly Southerners, fill the sunken road like executed Polish Jews. It was so bad, as one Union private said, “You could walk from one end of Bloody Lane to the other on dead soldiers and your feet would never touch the ground.”

Even today, with the way covered with crushed blue stone, the dirt underneath seems maroon. Perhaps it is the iron in the ground that makes it so; perhaps it is the blood, still there after 160 years.

Antietam was the worst single day of the Civil War. Nearly 23,000 men were killed or wounded. They were piled like meat on the ground and left for days before enough graves could be dug for them. There were flies, there was a stench. The whole thing was a fiasco, for both sides, really.

But all these years later, as you stand in Bloody Lane, the grassy margins of the road inclining up around you and the way lined with the criss-cross of split-rail fencing, it is painful to stand in the declivity, looking up at the mound in front of you, covered in cornstalks on a mid-July day. You can see that when the Yankees came over the rise, they were already close enough to touch. There was no neutralizing distance for your rifle fire to travel, no bang-bang-you’re-dead, no time, no room for playing soldier. Your enemy was in your face and you had to tear through that face with lead; the blood splattered was both Federal and Confederate, in one red pond among the furrows. In four hours, on a 200-yard stretch of Bloody Lane, 5,000 men were blown apart.

It is difficult to stand in Bloody Lane and not feel that all the soldiers are still there, perhaps not as ghosts, but as a presence under your boot-sole, there, soaked into the dirt.

It is almost, as some cultures believe, as if everything that happens in a place is always happening in that place. The battle was not something that occurred before my great-grandfather was born, but a palpable electricity in the air. You cannot stand there in Bloody Lane and not be moved by that presence.

A similar wave of dismay overcame me at several Civil War sites: Shiloh; Vicksburg; Fredericksburg; Cold Harbor; Petersburg; Appomattox. Always the images rise in the imagination. Something epochal and terrible happened here. 

At the Little Bighorn Battlefield in Montana, there are gravestones on the slope below the so-called “Last Stand,” but you also look down into the valley where the thousands of Sioux and Cheyenne were camped.

I’ve visited Sand Creek and Washita. And Wounded Knee. That was the most disturbing. You travel through the Pine Ridge Reservation and the landscape is hauntingly beautiful; then you pull into the massacre site and you see the hill where the four Hotchkiss guns had a clear shot down into the small ravine where the victims huddled. The sense of death and chaos is gripping. The famous image of the frozen, contorted body of Big Foot glowers in the imagination. It feels like it is happening in a past that is still present.

This sense of horror and disgust wells up because of the human talent for empathy. Yes, I know full well that there are no specters of the victims waiting there for me, but my immediate sense of brotherhood with them resurrects them in my psyche. I am human, so I know that those dead were just like me. I can imagine myself bowel-loosening scared, seeing my comrades on either side blown to pieces, while an enemy I’ve never met, and might have been friends with, races toward me, bayonet stretched in front of him, eyes wide with the same fear.

History is an act of the imagination. The most recent may be memory, but for me to know what my father went through in France and Czechoslovakia in World War II requires my identification with him, my psyche to recognize the bonds I share with him — and with all of humanity. 

So, when visitors are shaken by visits to Auschwitz or stand on the plains of Kursk, or the shores of Gallipoli, they well may sense that history as more present than past. I have had that experience. The ghosts are in me.

Caryl Chessman was what they used to call “a nasty piece of work.” Born in 1921, by the time he was 20 he had spent time in three reform schools and three prisons, including San Quentin, Chino and Folsom. He stole his first car at 16, and by the time he was caught, for the umpteenth time, in 1948, he had robbed liquor stores, burgled homes, stolen cars and, finally, led police on a bullet-riddled, 10-mile, high-speed car chase through Los Angeles. When he was stopped, he ran from the car into the neighborhood, chased on foot by the cops until he was tripped up and handcuffed.

Or, as Chessman himself put it, “I am not generally regarded as a pleasant or socially minded fellow.”

He would have been just another petty criminal with a lifetime of serial prison terms but for a series of sensational crimes, which ended in that car chase and capture. 

In late 1947 and early 1948, there was a series of robberies in Los Angeles in which a man in a gray Ford coupe with a red police light would flag down drivers, point a gun in their faces and rob them of what cash they had. He became known in the papers as the “Red Light Bandit.” A special police task force was created to track him down.

On Jan. 19, the Red Light Bandit stopped a car with a couple in it, robbed the man and then dragged the woman out of the car and forced her to perform fellatio on him. Three days later, he stopped a man and the man’s 17-year-old neighbor, robbed him, and took the girl into his car, drove several miles, stopped and attempted to rape her. Failing in that, he attempted anal rape, and failing in that, he forced her to perform oral sex on him. He then drove her back to her neighborhood and left her on the side of the road.

The next day, police spotted the Ford coupe, implicated in the crimes, and gave chase, gunfire and all. And when they caught Chessman (and his accomplice, who was later convicted of several of the robberies he had pulled off with Chessman), the young rape victim identified Chessman as her assailant. Later, the first victim also identified him. Evidence was found in the coupe, including a red ribbon the young girl had lost in the attack. 

A three-day trial ended with Chessman convicted of 17 counts of robbery, rape, attempted rape and kidnapping. Chessman had chosen to act as his own attorney. The judge tried his best to dissuade him, but Chessman persisted, and defended himself incompetently. 

The problem that brought Chessman to national and international attention was that he was convicted, in part, under a 1933 California law, called a “baby Lindbergh law,” which offered the death penalty in cases where kidnapping was accompanied by another crime and led to “bodily harm.” Chessman’s crime ticked all the boxes. No one had been killed, though, and for many, including California’s governor Pat Brown, the death penalty in such cases was too extreme. Brown was a vocal opponent of the death penalty. 

Chessman, protesting his innocence and claiming to know who the “real” red light bandit was, filed appeal after appeal, getting nowhere. Various judges and judicial panels put stays on the execution order, and Chessman came to the edge of death eight times, each time getting a last-minute stay of execution. This went on for nearly 12 years, becoming an international cause célèbre and a rallying point for the anti-death-penalty movement.

“A cat, I am told, has nine lives,” wrote Chessman. “If that is true, I know how a cat feels.”

He wrote four books while on death row, three memoirs and a novel. As he put it in one of them, The Face of Justice, “I won the dubious distinction of having existed longer under death sentence than any other condemned man in the nation’s then 179-year history. Day after day, I would go on breaking my own record.”

Chessman’s case was covered widely in the news, especially in the last year. He was featured on the covers of Time magazine and Germany’s Der Spiegel. Rallies were held worldwide. Folk singers wrote ballads. 

I was in seventh grade at the time, and remember quite well how much Chessman was in the news. It was the first time I was ever fully thoughtful about the death penalty. 

In the years Chessman waited for his date with the gas chamber, the baby Lindbergh law was repealed, although not retroactively. The anti-death penalty movement gained traction, and Gov. Brown found himself unable to commute Chessman’s sentence because California law required that the state supreme court concur in the commutation, which it refused to do.

And so, on May 2, 1960, Chessman was led to the gas chamber at San Quentin Prison. 

______________________________

“I guess I’ll just have to practice holding my breath.”

— Caryl Chessman

______________________________

It is called “capital punishment,” because “capital” (“of the head,” derived via the Latin capitalis from caput, “head”) refers to execution by beheading, one of the older, and historically quite popular, forms of criminal execution. 

The gas chamber was originally considered an improvement on earlier methods, and thought less cruel than hanging, electrocution, beheading or the firing squad. Just how “painless” death by cyanide gas may be is up for discussion. At best, it is not a pretty sight.

The death of Caryl Chessman in San Quentin’s gas chamber on May 2, 1960, was described by several reporters. He was brought to the large metal capsule holding two chairs, side by side. He was placed in one, and straps were attached to his arms, legs and torso. A doctor attached a long stethoscope tube to the condemned man’s chest.

Chessman; San Quentin gas chamber; wax exhibit of Chessman at Mme. Tussaud’s

One reporter wrote, “The doctor of the prison walks up and utters the victim’s full name — you know, like “Richard Allen McVictim” — the full legal name you only hear when you are in trouble.”

“After they strap the soon-deceased into the metal chair, one of the guards, usually at 10:02 a.m., is wont to tell the victim something like, ‘Take a deep breath as soon as you smell the gas — it will make it easier for you.’” (“How the fuck would you know?” is what Barbara Graham is reputed to have replied when she was executed in 1955.)

Continuing from another reporter, “The execution squad left the chamber and quickly closed and sealed the big airtight door. At 10:03 a.m., Warden Dickson nodded to Max Brice, the state executioner, a tall man in a dark business suit, who stood next to him. Brice moved a lever and a dozen egg-shaped Dupont cyanide pellets in a cheesecloth bag were lowered into a vat of sulphuric acid under the death chair. Almost instantly, deadly invisible fumes began to rise in the chamber. Chessman took a deep breath and held it, warding off unconsciousness for as long as possible. But the fumes must have reached him very quickly because witnesses saw his nose twitch, then he expelled the breath he was holding and breathed in. He looked over at Eleanor Black once again and smiled a sad, half-smile just before his head fell forward. Seconds later, foamy saliva began to drool from his open mouth.”

Further on, “In the chamber, Caryl Chessman’s body began to react to the death that was seizing it. He vomited up part of his breakfast; his bladder and bowels emptied inside his clothes. Then his heart stopped beating. At 10:12 the physician listening through the stethoscope advised Warden Dickson that Chessman was dead. Dickson turned to one of the execution squad officers. ‘Start the blowers.’ The officer threw a switch and a fan high above the chamber began to suck out the fumes and the stench.”

Public hanging; Ruth Snyder in the electric chair, 1929; Weegee photo of gas chamber execution

Finally, from another account, “Reporters one has interviewed who have witnessed executions say that there are screams, coughing, hacking, wild facial grimaces and drool. The murdered human loses control over his system, drooling…  The body slumps. After 8 to 10 minutes, the heart stops. The gas is sucked out of the chamber, the puke and defecation is hosed from the metal, the body is hauled away.”

Death by cyanide gas was first introduced in 1924 with the execution of Tong gang murderer Gee Jon in Carson City, Nev. The state supreme court ruled that gas was not “cruel and unusual punishment,” but should be considered as “inflicting the death penalty in the most humane manner known to modern science.”

The desire to kill offenders humanely drove justice systems and penologists from the late 1700s onward. The most famous of “humane” methods was proposed by Joseph-Ignace Guillotin to the French National Assembly in 1789 in the form of decapitation “by means of a simple mechanism.” Although that mechanism had been around in various forms for centuries, it took on Guillotin’s name: a tall wooden frame holding a heavy angled blade that would drop from a height and sever head from body more cleanly and with less fuss than the older, more traditional sword or ax decapitations, which were sometimes rather gruesome and could take several whacks to get it right. 

Last public execution by guillotine in France, 1939

The guillotine — aka the “National Razor” — was last used in France for a public execution in 1939, last used at all in 1977, and outlawed in 1981. In its 20th-century use, the process could be quite efficient, as seen in a moment of Gillo Pontecorvo’s 1966 hyper-realistic film, The Battle of Algiers. A prisoner is marched to an inner courtyard of the prison, where he is strapped to a vertical board; the board pivots down under the blade, which falls and severs the head — a process that took all of two or three seconds from start to finish. No ritual, no ceremony. If you are thinking of scenes of Louis XVI or Marie Antoinette on the scaffold making noble speeches, forget it. Pivot, slice, bounce. Quick as that. The headless body is then rolled over onto a gurney and, with the head thrown in, wheeled away.

Each new version of “humane” execution has been followed by another attempt as the previous became recognized as barbaric. And so, in the U.S., the gas chamber and electrocution, followed by lethal injection and most recently, by nitrogen asphyxiation. 

The barbarity of earlier methods is appalling to modern sensibilities. Before the Age of Enlightenment and before jails and prisons became widespread in the 19th century, the primary punishments were public humiliation (such as the stocks), fines, torture, and death. These were pretty much the gamut. Death was meted out for even petty crimes.

In Medieval England, you could be executed for: theft; cattle rustling; blasphemy; sodomy; incest; adultery; fraud; insult to the king; failure to pay taxes; extortion; kidnapping; being Roman Catholic (at one point); being Protestant (at another point) — and the list goes on. It is estimated that some 72,000 people were executed under the reign of Henry VIII alone. 

As late as 1820, in the British era of the so-called “Bloody Code,” you could be executed for shoplifting, insurance fraud, or cutting down a cherry tree in an orchard.

And we haven’t even mentioned the sin of being the wrong ethnicity. Millions upon millions have been murdered under official sanction for that, not only in Nazi Germany, but in the Ottoman Empire, in China, or Burma, or Rwanda. Genocide is a new word, but for an ancient practice. 

And historically, the methods could be grisly. Breaking on the wheel, for one, where you would be tied to a wheel, perhaps hoisted up in the air, to die over a period of days from hunger and thirst, or from the broken bones and ruptured organs caused by the process.

There was the gibbet, where the condemned was hung in a cage in public, again to die of hunger and thirst over days exposed to the elements.

One of the oldest methods was stoning, popular in the Old Testament and in modern fundamentalist Islam. It is a punishment still in use in some Islamic countries. 

As is beheading, the official execution method of Saudi Arabia. 

Other methods used in the past include: burning at the stake; boiling in oil or water; crushing by an elephant; being fed to lions or other animals; trampling by horses; being buried alive; crucifixion; disembowelment; dismemberment (as in being drawn and quartered); drowning; being pushed off a cliff; being flayed alive; garrotting; being walled up (immurement); being crushed under weights; impalement; having molten metal poured down the gullet; being tied in a sack with wild animals and thrown into a river; poisoning; methodically slicing off body pieces until death; suffocation in ash.

And there’s always tying the condemned to the mouth of a cannon and firing it, blowing the victim to pieces. To prevent the spilling of blood, ancient Mongolians would execute prisoners by breaking their backs. Then, there’s the Brazen Bull, a hollow bronze effigy of a bull wherein the condemned was encased and cooked as a fire burned underneath. The Soviet method was simply to shoot the prisoner in the back of the head, often without warning. And the Viking “Blood Eagle,” in which the condemned was laid face down and his ribs cut from his backbone on both sides, spread open, and the lungs pulled out and laid out as “wings.”

The methods seemed to elicit the most sadistic tendencies of the human race. Death by torture was common. 

Western cultures have slowly weaned themselves off capital punishment over the past two centuries, albeit in fits and starts. It is uncommon in the developed world, but still practiced in much of the rest of the world. The U.S. remains largely squirmy at the idea, but, state by state, has either outlawed the death penalty or reveled in it (I’m looking at you, Texas).

The arguments for and against continue to be made. Is it retribution; is it meant to discourage crime; or is it a hygienic process to eliminate unwanted criminal elements from society? With so many convictions being overturned by newer evidence, especially DNA evidence, can it still be justified? 

As Moses Maimonides said in the 12th century, “It is better and more satisfactory to acquit a thousand guilty persons than to put a single innocent one to death.”

San Quentin lethal injection room 

Currently, there are just under 2,500 inmates in the U.S. awaiting execution. The average time between sentencing and execution is now 20 years, far exceeding Chessman’s 11 years and 10 months. Forty-two percent are White; 42 percent are Black; 14 percent are Hispanic. Just under 98 percent are male, and more than two-thirds lack even a high-school education. Since 1972, 1.6 percent have been formally exonerated and released.

There were a lot of pleasures in working for a newspaper before the imposition of austerity that followed corporate buy-outs. The earlier parts of my career in the Features Department of The Arizona Republic in Phoenix, Ariz., came with great joys.

Before being eaten up by Gannett, The Republic was almost a kind of loony bin of great eccentrics, not all of whom were constitutionally suited to journalism. In those days, it was fun to come to work. When Gannett took over, it imposed greater professionalism on the staff, but the paper lost a good deal of personality. Those who went through those years with me will know who I’m talking about, even without my naming names. But there was a TV writer who tried to build himself a “private sanctum” in the open office space, made out of a wall of bricks of old VHS review tapes. There was a society columnist who refused to double-check the spelling of names in his copy. A movie critic who could write a sentence as long as a city bus without ever using an actual verb. She was also famous for not wearing underwear.

I could go on. There was the travel writer who once wrote that in Mexico City there had been a politician “assassinated next to the statue commemorating the event.” And a naive advice columnist whose world-view could make a Hallmark card seem cynical. The book editor seemed to hate the world. The history columnist was famous for tall-tales. 

And let’s not forget the copy editor who robbed a bank and tried to escape on a bicycle. 

There were quite a few solid, hardworking reporters. Not everyone was quite so out-there. But let’s just say that there was a tolerance for idiosyncrasy, without which I would never have been hired. 

The newspaper had a private park, called the “Ranch,” where employees could go for picnics and Fourth-of-July fireworks. The managing editor was best known for stopping by your desk on your birthday to offer greetings.  

What can I say? Just a few months before I was hired, the publisher of the paper resigned in disgrace when it was revealed that his fabulous military career as a Korean War pilot (he was often photographed in uniform with his medals) was, in fact, fabulous: a fable he made up.

And so, this was an environment in which I could thrive. And for 25 years, I did, even through corporate de-flavorization and a raft of changing publishers, executive editors, editors-in-chief and various industry hot-shots brought in to spiffy up the joint. I was providentially lucky in always having an excellent editor immediately in charge of me, who nurtured me and helped my copy whenever it needed it. 

(It has been my experience that in almost any institution, the higher in management you climb, the less in touch you are with the actual process of your business. The mid-level people keep things functioning, while upper management keeps coming up with “great ideas” that only bollix things up. Very like the difference between sergeants and colonels.)

The staff I first worked with, with all their wonderful weirdnesses, slowly left the business, replaced with better-trained, but less colorful staffers, still interesting, still unusual by civilian standards, but not certifiable. The paper became better and more professional. And then, it became corporate. When The Republic, and the afternoon Phoenix Gazette, were family-owned by the Pulliams, we heard often of our “responsibility to our readers.” When Gannett bought the paper out, we heard instead of our “responsibility to our shareholders.” Everything changed. 

And this was before the internet killed newspapers everywhere. Now things are much worse. When I first worked for The Republic, there was a staff of more than 500. Now, 10 years after my retirement and decimated by corporate restructuring and vain attempts to figure out digital journalism, the staff is under 150. I retired just in time. 

Looking back, though, I realize that every job I’ve ever had has had its share of oddballs. 

The first job I had, in my senior year at college, was on the groundskeeping team at school. It was full of eccentrics, mostly Quakers fulfilling their alternative service as conscientious objectors during the Vietnam war. One day, Bruce Piephoff and I were trimming the hedges at the front gate and he lit up a joint and offered me one. Traffic streamed in front of us, but he didn’t seem to mind. A few years later, Piephoff robbed a restaurant, grabbing everything he could from the till and then walking up the street throwing the cash at anyone he passed. He seems to have done well since then, now a singer and recording artist. 

Later, I worked at a camera store. My manager was Bill Stanley, who looked rather like Groucho in his You Bet Your Life days. Stanley chewed on a cigar all day, turning it into a spatulate goo. He had an improvisatory relation with the English language. When an obnoxious customer began spouting stupid opinions, Stanley yelled at him, “You talk like a man with a paper asshole.” When someone asked about the big boss, Stanley told her, “He came through here like a breeze out of bats.” Every day there were new words in new orders. 

When I worked at the Black weekly newspaper, the editor was a drunk named Mike Feeney, who had once worked at the New York Times. I would see him daily, sitting at his desk surrounded by a dozen half-finished paper cups of coffee, some growing mold, filling out the Times crossword puzzle, in ink! He would finish it before ever getting to the “down” clues. He gave me my first lessons as a reporter. “What reporting is,” he said, “is that you call up the widow and you say, ‘My condolences, I’m sorry that your husband has died, but why did you shoot him?’”

The zoo in Seattle was also full of crazies. There was Bike Lady, Wolf Man, Gorilla Lady. And the kindly old relief keeper, Bill Cowell. One day, the place was full of kids running around screaming, spilling soda pop and popcorn, and Bill leaned over to me, “Don’tcha just wanna run them over?” 

And I finally got to be a teacher, in the art department of a two-year college. The art staff was especially close, and we had dinner together about once a week. There were some great parties. A Thanksgiving with a contest to make sculpture out of food. The winner was an outhouse made from cornbread, with a graham cracker door and half a hard-boiled egg as a privy seat. I made a roast chicken in the form of Jackie Gleason, with a pear attached as his head. Another time, the drawing teacher, Steve Wolf, helped us put on a shadow-puppet show. He had us falling on the floor with the most obscene performance, which he called “The Ballerina and the Dog.”

And so, I suppose I have always worked with a class of people outside the normal order. So, when I was hired by the Features editor at The Republic and he was wearing Japanese sandals, it hardly registered with me. Mike McKay gave me my first real job in newspapers. 

But, oh, how I loved my years there. Newspapers everywhere were profit-rich and the paper was willing to send reporters all over to cover stories. I benefited by getting to travel across the country, and even the world.

I was primarily an art critic — and ran immediately afoul of the local cowboy artist fans when I reviewed the annual Cowboy Artists of America exhibition and sale at the Phoenix Art Museum. It was one of the major events on the social calendar, when all the Texas oil millionaires would descend on Phoenix to buy up pictures of cowboys and Indians. 

The event was an institution in the city, but I wasn’t having any of it. I wrote a fairly unfriendly review of the art and got instant pushback. I wrote, among other things, “It’s time, Phoenix, to hang up your cap pistols. It’s time to grow up and leave behind these adolescent fantasies.” And, “their work is just, well, maybe a few steps above black velvet Elvis paintings.” I was hanged in effigy by Western Horseman magazine. It was great fun. 

But my portfolio expanded, and by the end of my sojourn in the desert, I was also dance critic, classical music critic and architecture critic — one of the last things I did was complete a 40,000-word history of Phoenix architecture. I also became back-up critic for theater and film. And I wrote hundreds of travel stories.

The paper sent me to Boston, New York, Chicago, Miami, San Francisco, Reno, and almost once a year, to Los Angeles. I covered major art exhibits of Van Gogh, Cézanne, Audubon and Jackson Pollock, among others.

Because Frank Lloyd Wright had a Scottsdale connection, I wrote about him often and got to travel to and write about many of his most famous buildings, including Taliesin in Wisconsin and Fallingwater in Pennsylvania.

Pacific Coast Highway

But the best were the travel stories, as when they let me take 10 days to drive up the Pacific Coast Highway from Tijuana to Vancouver, or another time when I also drove from Mexico to Canada, but along the Hundredth Meridian in the center of the continent — and then down the Mississippi River from Lake Itasca in Minnesota to the Gulf of Mexico. Over several different trips, I cobbled together a series of stories about the Appalachian Mountains from Alabama to the Gaspé Peninsula. 

Mississippi River near Cairo, Ill. 

I had assignments that let me cover all the national parks in Utah, and several excursions to every corner of Arizona. In 1988, I went to South Africa for the paper. 

Indian Ocean, Durban, South Africa

Of course, when Gannett took over, the travel miles shrunk to near zero. They didn’t want to pay for anything they didn’t absolutely have to. 

I left in 2012. The handwriting was on the wall. Thoughtful pieces about art and culture were no longer wanted. We were asked to provide “listicles,” such as “Top 5 things to do this weekend.” After I left, I heard from former colleagues how the photography staff was let go, the copy editors were fired — how can you run a newspaper with no copy editors? They are the heart of a newspaper. They saved my butt I don’t know how many times. But no, they are all gone.

It was a sweet spot I was lucky to have landed on, to be able to observe the old “Front Page” days in their waning glory, and leave when everything was drowning in corporatism. I have often said that if Gannett thought they could make more money running parking garages, they would turn The Republic building into one. 

When I left, a group of colleagues bought and gave me a blog site. I’ve been writing on it ever since — now just under 700 entries — and it proves what I have always said, writers never really retire, they just stop getting paid for it.

In the 1920s, the world of animated cartoons was the Wild West. The same group of animators all worked for each other, began their own studios, worked for other studios, went bankrupt and built new studios. There were Bray Productions, Van Beuren Studios, Terrytoons, Barré Studios. All populated by a circulating cast of the pioneers of movie cartoons: Paul Terry, Walter Lantz, John Bray, Amedee Van Beuren, Pat Sullivan, Ub Iwerks, Vernon Stallings, Earl Hurd, Grim Natwick. And, of course, Max and Dave Fleischer, and Walt Disney.

Among them, they created a whole series of popular cartoon characters that populated the silent era of animation: Bobby Bumps, Colonel Heeza Liar, Felix the Cat, Oswald Rabbit, Ko-Ko the Clown, Milton the Mouse, Krazy Kat, Farmer Al Falfa, Mutt and Jeff, and an early, non-cat-and-mouse version of Tom and Jerry. 

There were popular series, such as the Aesop’s Fable animal cartoons and Disney’s early Alice cartoons. 

I watched tons of these old, silent cartoons as a kid when they were licensed to TV syndication. In those early years, television was desperate for content and these cartoon mice and cats filled up the after-school hours. They were part of the cultural landscape for early Boomers; they are now largely lost to prehistory and archivists. Some may be found, usually in fuzzy, bad prints, on YouTube. 

By the end of the 1920s, two studios stood on top of the pile. On the West Coast, via Missouri, there was Disney; in New York, there were the Fleischer brothers. 

The two studios were poles apart. Disney was bland and inoffensive; the Fleischers were urban, surreal and ballsy. Disney came up with Mickey Mouse, who, while visually identifiable, is about the most innocuous character in the history of animation. What is Mickey’s character? Well? He has none. 

But in New York, the Fleischers created the “Out of the Inkwell” series, combining live action and animation. In most of these short cartoons, the dominant brother, Max, would open a bottle of ink and out would pop his characters, or he would draw them and they would come to life. His chief character was Ko-Ko the Clown (later, just Koko). 

Max Fleischer was an energetic inventor and came up with many of the techniques since common in animation; he held some 200 patents. Fleischer was an artist and could draw fluently. Disney, by contrast, was an indifferent draftsman, but, unlike the hapless Fleischers, he was a world-class businessman who went on to found an empire of cartoons, films, TV shows and amusement parks. Disney made his cartoons for a researched audience; the Fleischers made their animations for themselves. By the end of World War II, they could no longer compete.

But what a run they had, beginning with Koko, followed by Bimbo the dog and then, paydirt — Betty Boop was born. She arrived in this world at the same time as talkies. It is impossible to imagine Betty without her voice. 

Disney often claimed to have made the first animated cartoon with sound, in the 1928 short, Steamboat Willie, with Mickey Mouse. But by then, the Fleischers had been running cartoons with sound for four years; they pioneered the “bouncing ball” sing-along cartoons. For that matter, Steamboat Willie is hardly a “talkie” — there is no dialog in it, only sound effects. And compared with Disney’s later slickness, it is a surprisingly crude cartoon. The Fleischers were miles ahead of Disney technically.

Betty first appeared in the 1930 cartoon Dizzy Dishes. Fleischer asked one of his animators, Grim Natwick, to come up with a girlfriend for their established character, Bimbo, and Natwick whipped up a sexy poodle. She has only a bit part in Dizzy Dishes, singing onstage, while the hero, Bimbo, sees her and falls in love. Betty has long doggy ears and a dark, wet doggy nose. 

Later, Betty evolved into a Jazz-Age flapper with human ears and nose, and a sexy bareback dress, short enough to show off her garter. 

Betty got her own series of cartoons, and from 1930 to 1939, she starred in 90 releases. Her career spanned two eras in early film history — the pre-Code days until 1934, and then the clamp-down by the Catholic Legion of Decency and the Hollywood Hays Code, which put quite a crimp in Betty’s style. 

After her first 30 films, the Code kicked in. In her final pre-Code short, Betty Boop’s Trial (June 1934), Betty turns her back on the camera and flips up her tiny skirt to show off her panties and bottom.

It’s hard now to recognize just how shocking and adult the Betty Boop cartoons could be. The Fleischer cartoons were made for grown-up audiences, not just kids. There was sex and violence, gritty urban scenes, and even a recognition of the Great Depression. She was, after all, a Jazz-Age independent woman.

And despite some unfortunate blackface scenes, Fleischer films were surprisingly integrated. African-American musicians Louis Armstrong and Cab Calloway star in several Boop-toons. I don’t want to make too much of this; there are still some horrid African-cannibal stereotypes and some blackface in several of the cartoons. But the Fleischers seem to have been fairly progressive for their time. Some of their cartoons were refused showing in the South.

And Betty was a Jazz Age Modern Woman — at least before 1934. Racy and risqué, she dances a topless hula in Betty Boop’s Bamboo Isle (1932); prances before the fires of hell in a see-through nightie in Red Hot Mamma (1934); has her dress lifted up to show her underwear in Chess-Nuts (1932).

Betty’s virtue is frequently besieged and there is little subtlety over what her signature phrase means in 1932’s Boop-Oop-A-Doop. Betty is a circus performer and the giant beast of a ringmaster lusts after her. In a creepy Harvey Weinstein move, the ringmaster paws all over Betty and implies that if she doesn’t do what he wants, she will lose her job. Betty sings “Don’t take my boop-oop-a-doop away.” 

By the end, she is saved by Koko the clown and her virginity is safe. “No! He couldn’t take my boop-oop-a-doop away!” 

In the 1933 Betty Boop cartoon Popeye the Sailor, she appears only briefly, doing the topless hootchie-coochie dance she did in Bamboo Isle, joined onstage by Popeye, also wearing a grass skirt. The short marked Popeye’s first appearance in animation.

As Betty became tamed and dowdy after 1934, her popularity waned just as Popeye’s grew, eventually overtaking her as the Fleischers’ prime property.

By the end, in 1939, Betty had turned into a swing-music figure. Her exaggerated head-to-body ratio had subsided, her skirts had lengthened and her neckline had risen. Then she disappeared.

The Fleischers continued making Popeye cartoons, then lost their company to Paramount Pictures, which fired the brothers. Their last project with Paramount was the stylish, Art Deco-inspired Superman cartoons.

Betty had a resurgence, not as a cartoon star, but as a pop-culture star and even a feminist icon, in the 1980s, with a merchandising boom. She then appealed to a generation that may not even have known she had been an animated cartoon star. 

__________________________

“Don’t take my boop-oop-a-doop away!”

— “Boop-Oop-A-Doop,” 1932

__________________________

When Betty Boop was introduced in 1930, Fleischer animator Grim Natwick based her look and sound on popular singer Helen Kane, who had a baby-voice and scatted as she sang. In 1932, Kane sued the Fleischers over their use of her image. It came to trial in 1934. 

“Your honor,” said Kane’s lawyer to Justice Edward J. McGoldrick, “we contend this character [Betty Boop] has Miss Kane’s personality, her mannerisms, her plumpness, her curls, her eyes, and that she sings the songs Miss Kane made famous.” A style Kane called the “baby vamp.”

Kane (1904-1966) was born in New York and by age 15 was performing professionally. She achieved popularity in the 1920s on stage in vaudeville and on Broadway. She recorded 22 songs and made seven films in Hollywood. Her trademark was a squeaky baby voice and scat singing. But her style of singing was going out of fashion by the early 1930s, although she continued to find work onstage. When Betty Boop came out, she sued the Fleischers for $250,000 (equivalent to about $5 million today). The trial lasted two weeks and filled the newspapers with juicy stories. 

Betty Boop was described in the court case as “combin[ing] in appearance the childish with the sophisticated — a large round baby face with big eyes and a nose like a button, framed in a somewhat careful coiffure, with a very small body of which perhaps the leading characteristic is the most self-confident little bust imaginable.”

Kane’s lawyers made a tactical mistake by basing their claim on the fact that she used scat syllables in her singing, including the famous “Boop-oop-a-doop.” Kane claimed to have invented the practice of singing nonsense words to music, a claim too easily disproven. 

The Fleischers’ lawyer demonstrated that Kane had seen a juvenile Black performer, Baby Esther Jones, who shared a manager with Kane, and that Baby Esther (also sometimes called L’il Esther) used the same scat singing and baby voice before her. 

Kane caricature and Sweet Betty

Trial transcripts can be quite a kick in the pants to judicial dignity. Asked what Baby Esther did on stage, her manager said, “She sang the chorus and during her choruses, we had four bars omitted, which we called the break, of so-called ‘hot licks.’” Q: “During those breaks or ‘hot licks,’ what did Baby Esther do?” A: “A funny expression in her face and a meaningless sound.” Q: “Will you tell us what those sounds were?” A: “At various times they differed, they sound like ‘boop-boop-a-doop,’ and sometimes…” Kane’s lawyer then objected, so the Fleischers’ lawyer rephrased:

“Give us as nearly as you can how they sounded?” A: “I could do it better if I had rhythm with it.” Q: “Give us the sounds.” A: “Boo-did-do-doo.” Q: “Were there other sounds besides the one that you have just mentioned?” A: “Yes, quite a few.” Q: “Will you give us as many as you can remember?” A: “Whad-da-da-da.” Q: “Others.” A: “There are quite a few — ‘Lo-di-de-do.’” Q: “Any others that you recall?” A: “Sounds like a time she would make a sound like sort of a moaning sound, finished off with ‘de-do.’” 

According to one newspaper account, “That’s when the court stenographer threw up the sponge and admitted he couldn’t spell such things.” 

At one point, film of Betty Boop and film of Helen Kane were shown, without sound, to compare their styles. 

Another newspaper account reported: “Except for the occasional throat-clearings of a roomful of attorneys, it was strictly a silent performance, the court having ruled against any audible ‘booping.’

“Miss Kane’s attorneys strove vainly to have the sound tracks included, saying they wished to show how Betty Boop has ‘simulated our voice and our style of singing,’ but Justice McGoldrick ruled that any ‘booping’ would be incompetent, immaterial and irrelevant.”

Kane with three women who voiced Betty Boop in the cartoons

The fact that Betty Boop was clearly based on Kane (later readily admitted by Boop designer Grim Natwick) hardly mattered. The judge ruled that the sound of a voice cannot be copyrighted and that nonsense-syllable singing was common well beyond Kane’s use of it, and he found for the Fleischers. 

Three of the women who voiced Betty Boop in the Fleischer cartoons had earlier been contestants in a Helen Kane imitation contest. All of them had performed, outside the Boop-toons, as Kane knock-offs. 

The knowledge that Betty Boop imitated the White Kane, who imitated the Black Baby Esther, has recently raised the specter of “cultural appropriation” — a concept that has become something of an ethical fad. I know of few things sillier or more pointless. 

After all, Baby Esther was known as a “Little Florence Mills,” imitating that singer. The actual Mills took over on Broadway from Gertrude Saunders in the 1921 hit Shuffle Along. Each performer borrowed from the one before. 

In the Fleischer trial, the famous pianist and composer Clarence Williams testified that he’d been using the scat technique since 1915. Very few things have a virgin birth; most things are developments of other things. We could claim that scat began with Stephen Foster’s “Camptown Races” and its “doo-dah, doo-dah,” or trace it back to the 16th century and Josquin des Prez’s frottola El Grillo, in which the singer imitates the sound of a cricket. 

As the Fleischer trial judge ruled, “the vocables ‘boop-boop-a-doop’ and similar sounds had been used by other performers prior to the plaintiff.” 

The whole issue of cultural appropriation is bothersome, to say the least. I am not talking here about cases such as when an Anglo artist sells his work as Indian art, pretending to be Native American. That isn’t cultural appropriation; it is simply fraud. But when an artist finds something from another culture that piques his interest and creativity, well, that’s just normal. Everybody is always borrowing from everybody else. It is how culture moves forward. 

Is spaghetti cultural appropriation because the tomato came from indigenous cultures in the New World? 

Mexican, Japanese and Filipino spaghetti

How about when spaghetti makes the journey back to the Americas and becomes Mexican chipotle spaghetti? Or travels further, to Japan with added daikon, or to the Philippines, where they add hot dogs (pace Sheldon Cooper)? 

In the Columbian Exchange, the New World gave the Old not only tomatoes, but corn (maíz), cacao, vanilla, potatoes and tobacco. 

The Old gave back to the New grapes, onions, apples, wheat, to say nothing of swine, cattle, horses and honey bees. 

So, today, there is nothing more typically Navajo than satellite dishes and pickup trucks. 

It’s all a great mix, and to forbid such churn is to stall human progress. Culture is never static but always on the move. “Traditional” is always a museum-piece. 

I’m not making a case here for blackface minstrelsy — such things are rightfully seen as appalling. But should that mean that Vanilla Ice should not rap? Or that Jessye Norman shouldn’t sing opera? That we should all be stuck in our particular silos and never learn from others? 

That George Harrison should never have learned the sitar? That Sergio Leone should have left the plot of A Fistful of Dollars to Akira Kurosawa? 

That Cubism should be trashed because Picasso became fascinated by African masks? 

This isn’t to say there aren’t egregious examples, but they mostly concern stereotyping — which is to say, ignorance, a failure to see what is actually going on in the other culture. Finding something good and useful in another culture and adapting it is rather different, even if you take the original completely out of context. 

Cross-fertilization is not only one of the pleasures of culture, but one of its essentials. Culture is a group enterprise, not an individual one, and it lives through free exchange. 

The current blather about cultural appropriation reminds me, more than anything else, of the Victorian fear of the body and sex, like calling legs “limbs.” It is bluestocking puritanism, and to hell with it. 
