Cartoonist Reg Manning said you could take in all of Arizona in three great bites. Manning was the Pulitzer Prize-winning political cartoonist for The Arizona Republic from 1948 to 1971. And he wrote a book, What is Arizona Really Like? (1968), a cartoon guide to the state for newcomers (he also wrote a cartoon guide to the state's prickly vegetation, What Kinda Cactus Izzat?). Both are required reading for any Arizonan. 

Anyway, his introduction to the state bites off three large chunks: the Colorado Plateau in the north; the mountainous middle; and the desert south. It is a convenient way to swallow up the whole, and works quite well. 

I spent a third of my life in Arizona and I traveled through almost every inch of the state, either on my own or for my newspaper, and I never found a better explanation of the state (and state of mind) than Manning’s book. Admittedly, the book can be a bit corny, but its basis is sound. 

But my life has been spent in other parts of the country as well, and I came to realize that Manning’s tripartite scheme could work quite as well for almost any state. 

I now live in North Carolina, which is traditionally split in three, with the Atlantic Coastal Plain in the east; the Piedmont in the middle; and the mountains in the west. The divisions are quite distinct geologically: The escarpment of the Blue Ridge juts up from the Piedmont, and the coastal plain begins at the Fall Line — a series of dams, rapids and waterfalls that long ago provided the power for industry. 

But that is hardly the only example. I grew up in New Jersey, which could easily be split into the crowded suburban north, where I grew up; the almost hillbilly south, which is actually below the Mason-Dixon Line, and where the radio is full of Country-Western stations (and who can forget the “Pine Barrens” episode of The Sopranos?); and finally, the Jersey Shore, a whole distinct universe of its own. 

And when I lived in Seattle, Washington state was clearly divided into the empty, dry east; the wet, populated coast; and the Cascade Mountains dividing them. Oregon was the same. It divided up politically the same way: a redneck east, a progressive west, and a mountain barrier between. 

So, I began looking at other states I knew fairly well. South Carolina and Georgia follow North Carolina with its mountains (“upcountry” in South Carolina), its Piedmont and its coast. Even Alabama does, although its coastal plain borders the Gulf of Mexico. 

Florida has its east coast; its west coast; and its panhandle — all quite distinct in culture. Michigan has its urban east, its rural west and then, hardly part of the state, the UP — Upper Peninsula. There’s lakefront Ohio, riverfront Ohio and farmland Ohio. 

Maine has a southern coast that is prosperous and filled with tourists; a northern coast (“Down East”), which is sparsely populated and mostly poor; and a great interior, which is all lakes, forests and potatoes. 

Massachusetts has its pastoral western portion, with its hills and mountains; its urban east, centered on Boston; and then there’s Cape Cod, a whole different universe. 

Heck, even Delaware, as tiny as it is, has its cities in the north, its farms on the Delmarva Peninsula and its vacationland ocean shores. 

Go smaller still. Draw a line down the center of Rhode Island and everything to the west of the line might as well be Connecticut. For the rest, Providence eats up the northern part and south of that, Rhode Island consists of islands in Narragansett Bay. 

Colorado has its Rocky Mountains and its eastern farmlands, separated by the sprawling Denver metropolitan area. 

But I don’t want to go through every state. I leave that to you. Indiana has its rural south, its urban midlands with Indianapolis, and that funky post-industrial portion that is just outside Chicago. Oy. 

Yet, as I looked at that first state, defined by Manning’s cartoons, I realized that each third of Arizona could be subdivided into its own thirds. This was getting to be madness. 

The Colorado Plateau is one-third Indian reservation, both Navajo and Hopi; one-third marking the southern edge of northern Arizona in what might be called the I-40 corridor of cities and towns from Holbrook through Flagstaff and on through Williams to Kingman; and a final third that encompasses the Grand Canyon and the remote Arizona Strip. 

The mountainous middle third of the state includes the Mogollon Rim and its mountain retreats, such as Payson; another third that is the Verde Valley; and, as a finishing touch, the Fort Apache and San Carlos Indian reservations.

Finally, in the south and west, there is the urban spread from just north of Phoenix and continuing south through Tucson and that nowadays continues almost to Nogales and Mexico; there is the Chihuahuan Desert portion in the southeast, from Douglas through the Willcox Playa; and in the southwest, the almost empty desert including the Tohono O’odham Indian Reservation, the Barry Goldwater bombing range, and up to the bedraggled haven of trailer parks that is Quartzsite. And no, I’m not forgetting Yuma. 

It’s a sort of “rule of thirds” applied to geography. It seems almost any bit of land can be sliced in three. Phoenix itself, as a metro area, has the East Valley, including Mesa, Scottsdale and Tempe; central Phoenix (which itself is divided into north Phoenix, the central Phoenix downtown, and the largely Hispanic south Phoenix), and the West Valley, which nowadays perhaps goes all the way up through Sun City to Surprise. 

Here in North Carolina, the Coastal Plain runs through the loblolly pines and farmland of eastern North Carolina; into the swampy lowlands of marshy lakes and tidal rivers; and on to the Outer Banks and the ocean. Another tripartite division. The Piedmont has its 1) Research Triangle; 2) its Tri-city area of Greensboro, High Point and Winston-Salem (extending out to Statesville and Hickory); and 3) the Charlotte metro area; and the mountains include first the northern parts of the Blue Ridge, around Boone and Linville Falls; second, the Asheville area, which is a blue city in a red state; and finally the southern mountains around the Great Smokies. Thirds, thirds, thirds. 

Even Asheville, itself, comes in three varieties: East Asheville (where I live); downtown (where the drum circle is); and West Asheville (where the hippies live). West Asheville is actually south of downtown. Why? I dunno. 

You can go too far with this and I’m sure that I have. After all, it’s really rather meaningless and just a game. But you can divide California into thirds in three completely different ways. 

First, and too easily, there is Southern California; central California; and northern California. Each has its culture and its political leanings. But you can also look at it as desert California, including Death Valley and Los Angeles; mountain California, with the Sierra Nevada running like a spine down the long banana-shaped state; and populated California north of LA and in the Central Valley. 

Finally, you can split California into rural, farming California, which feeds the nation from the Central Valley; wilderness California, with deserts and mountains; and entertainment California, from Hollywood and LA up through San Francisco and Skywalker Ranch (and all the wine) that keeps America preoccupied with vino et circenses.

The U.S. as a whole is often looked at as the East, the Midwest and the West. The East then subdivides as the Northeast, the Middle Atlantic states and the South; the Midwest has its Rust Belt, its Corn Belt, and its Wheat Belt. The West has its Rocky Mountain states, its Pacific Coast states and, well, Texas. 

And, I suppose if you look at the world in toto, you have the West, including Europe, North America and Australia; you have Asia, or the East, which includes China, Asian Russia, and most of the Muslim nations; and the Third World, which comprises most of the rest. You can quibble over Japan as Asian or a First World Nation; and India seems caught between, with growing prosperity and growing poverty at the same time. 

These distinctions are coarse and could well be better defined and refined. And I mean nothing profound — or even very meaningful — with this little set of observations. It is an exercise in a habit of thinking. If anything, I just mean it as a counterbalance to the binary cultural prejudice of splitting everything up into pairs. There is a countervailing cultural pattern that prefers threes to twos. I wrote about this previously in a different context (link here). 

We think in patterns, and well-worn templates. But the world doesn’t often present itself in patterns. The world lacks boundary lines and the universe is a great smear of infinite variety. The mental template allows us to organize what is not, in reality, organized. The most pervasive template is the binary one, but we are entering an increasingly non-binary culture. Of course, thirds is only one alternative pattern. Perhaps best is to ignore patterns and look fresh at evidence. 

The patterns are roadmaps for thought, and we can too easily take the easy route and fit the evidence to the pattern rather than the reverse. 

The Seventeenth Century produced in Europe giants of science and philosophy and brought to birth the beginnings of Western Modernism. Their names are a pantheon of luminaries: Francis Bacon; Galileo Galilei; Thomas Hobbes; René Descartes; Blaise Pascal; Isaac Newton; Johannes Kepler; Gottfried Wilhelm Leibniz; Baruch Spinoza; John Locke — names that mark the foundations of the culture we now live in. 

But during their lifetimes, their pioneering work remained the province of a rare sliver of humankind, those others of their intellectual gift who could understand and appreciate their thought. The mass of the European population remained illiterate, and subject to centuries-old traditions and institutions of monarchy and religion. It wasn’t until the next century that the dam broke and the results of rationalism and empiricism made a wide splash in society, in a movement that congratulated itself as The Enlightenment.

And in the center of it all, in France, was Denis Diderot, one of the so-called “philosophes,” a group of writers and thinkers advocating secular thinking, free speech, the rights of humans, the progress of science and technology, and the general betterment of the human condition. 

Among the philosophes were Voltaire, Montesquieu, Abbé de Mably, Jean-Jacques Rousseau, Claude Adrien Helvétius, Jean d’Alembert, the Marquis de Condorcet, Henri de Saint-Simon, and the Comte de Buffon. They wrote about science, government, morals, the rights of women, evolution, and above all, freedom of speech and freedom from dogma. They advocated the expansion of knowledge and inquiry. And they didn’t write merely for other intellectuals, but for a wider, middle-class literate readership. It was a blizzard of books, pamphlets and magazines.

Denis Diderot was born in 1713 in the Champagne region of France, the son of a knife-maker who specialized in surgical equipment. His father expected him to follow in the family business, but Diderot first considered joining the clergy, then studied for the law, and by the early 1740s had dropped out to become a professional writer, a métier that paid little and brought him into conflict with the royal censors with notorious frequency. 

He translated several works, including a medical dictionary, and in 1746, he published his Pensées Philosophiques (“Philosophical Thoughts”), which attempted to reconcile thought and feeling, along with some ideas about religion and much criticism of Christianity. 

He wrote novels, too, including, in 1748, the scandalous Les Bijoux Indiscrets (“The Indiscreet Jewels,” where “jewels” is a euphemism for vaginas), in which the sex parts of various adulterous women confess their indiscretions to a sultan who has a magic ring that can make vaginas talk. 

His most famous and lasting novel is Jacques le Fataliste et son Maître (“Jacques the Fatalist and his Master”), from 1796, a picaresque comedy in which the servant Jacques relieves the tedium of a voyage by telling his boss about various amorous adventures. 

But Diderot is remembered primarily for his work on the Encyclopédie, which he edited along with Jean le Rond d’Alembert, and for which he wrote some 7,000 entries. It was published serially and periodically revised from 1751 to 1772 and mostly published outside of France and imported back in — censorship was strict and many books were published in the Netherlands or Switzerland to avoid French government oversight. 

In fact, Diderot spent some months in prison for his work on the Encyclopédie.

There had been earlier attempts at encyclopedias, including Ephraim Chambers’s Cyclopedia, or an Universal Dictionary of Arts and Sciences, published in London in 1728, and John Harris’ 1704 Lexicon Technicum: Or, A Universal English Dictionary of Arts and Sciences: Explaining Not Only the Terms of Art, But the Arts Themselves. The 18th century wallowed in long book titles. 

Among the projects of this age with an appetite for inclusiveness was Samuel Johnson’s Dictionary of the English Language of 1755. 

And the idea of binding all of human knowledge up in a single volume goes all the way back to the Naturalis Historia of Pliny the Elder in the First Century CE. 

But none of these were as compendious in intent as the French Encyclopédie, which initially ran for 28 volumes and included 71,818 articles and 3,129 illustrations. It comprised some 20 million words over 18,000 pages of text. It was a huge best-seller, earning a profit of 2 million livres for its investors. 

In his introduction, Diderot wrote of the giant work, “The goal of an Encyclopédie is to assemble all the knowledge scattered on the surface of the earth, to demonstrate the general system to the people with whom we live, & to transmit it to the people who will come after us, so that the work of centuries past is not useless to the centuries which follow, that our descendants, by becoming more learned, may become more virtuous & happier, & that we do not die without having merited being part of the human race.”

In the article defining “encyclopedia,” Diderot wrote that his aim was “to change the way people think.” 

Their goal was no mean or paltry one, but to encompass everything known to humankind. So that, according to Diderot himself, if humankind descended once again into a Dark Age, and if just one copy of his Encyclopédie survived, civilization could be reconstructed from reading its pages. 

A good deal of its content concerned technical issues, such as shoe-making or glass blowing. But other articles addressed political and religious ideas. These are what got the Encyclopédie contributors into legal trouble. The Catholic church and the monarchy were not happy about the generally deist and republican leanings of its authors. 

And there were a lot of authors. Most of the leading philosophes wrote one or another of the entries. Louis de Jaucourt wrote some 17,000 of them — about a quarter of the total. Each of the contributors wrote about his specialties. D’Alembert, who was a mathematician, wrote most of the math entries. Louis-Jean-Marie Daubenton took on natural history. Jean-Jacques Rousseau wrote about music and political theory. Voltaire on history, literature and philosophy. 

All under the editorship of Diderot and d’Alembert, and, after 1759, by Diderot alone. 

Diderot divided all of human knowledge into three parts: memory, reason, and imagination. In his Preliminary Discourse to the Encyclopedia of Diderot, d’Alembert explained these as “memory, which corresponds with History; reflection or reason, which is the basis of Philosophy; and imagination, or imitation of Nature, which produces Fine Arts. From these divisions spring smaller subdivisions such as physics, poetry, music and many others.” 

In fact, d’Alembert asserts, all of human knowledge is really just one big thing: a unified “tree of knowledge” which, if we could grasp it, would explain everything by a single simple principle — a notion that rather prefigures the unified field theory of modern physics. 

It would be hard to overemphasize the influence of the Encyclopédie in the 18th century and in the political changes of France up to and through the Revolution. The Encyclopédie disparaged superstition, of which they counted religion as an example, and it saw the purpose of government to be the welfare of its people and the authority of government to be derived from the will of its citizens. The king existed, they said, for the benefit of the people, and not the people for the benefit of the monarchy. 

It’s no wonder, then, that the church and the aristocracy tried to suppress parts of the Encyclopédie, and that many of its authors spent time in prison. 

Its successor, the Encyclopedia Britannica, wrote of Diderot’s labors, “No encyclopedia perhaps has been of such political importance, or has occupied so conspicuous a place in the civil and literary history of its century.” 

Beyond the Encyclopédie, Diderot continued as a freelance writer, as an art and theater critic, a playwright, novelist, political tract writer and freethinker. 

But despite his fame and productivity, Diderot never made much money from his work, and when the Russian empress — and groupie to the philosophes — Catherine the Great heard of his poverty, she offered to buy his extensive library, paying him an enormous sum for the books and a salary for his employment as librarian to his own collection. 

In 1773, Diderot traveled to St. Petersburg to meet Catherine. Over the next five months, they talked almost daily, as Diderot wrote, “almost man-to-man,” rather than monarch to subject. 

Catherine paid for his trip in addition to his annuity, and in 1784, when Diderot was in declining health, she arranged for him to move into a luxurious suite in the rue de Richelieu, one of the most fashionable streets in Paris. He died there a few months later at the age of 70. 

Despite her admiration for Diderot and his revolutionary ideas, Catherine ignored all of them in her own autocratic rule of Russia. But Diderot and his Encyclopédie pointed the way to the Declaration of the Rights of Man and of the Citizen, the triumph of democracies, and even the American Declaration of Independence and Constitution. 

According to philosopher Auguste Comte, Diderot was the foremost intellectual in an exciting age, and according to Goethe, “Diderot is Diderot, a unique individual; whoever carps at him and his affairs is a philistine.” 

 —————————————————————-

“On doit exiger de moi que je cherche la vérité, mais non que je la trouve.”

“It can be demanded of me that I seek the truth, but not that I find it.”

—Denis Diderot, Pensées Philosophiques (1746)

 ————————————————————

When I was a young man, more than a half century ago, I had a simple ambition: To know everything. I suppose I was thinking mainly of facts; I had no inkling of anything that couldn’t be named and catalogued. I wanted to read everything, name every bird, wildflower and tree, and understand every philosopher. I read all the poetry I could find, listened to all the symphonies and quartets and attempted to ingest all of astronomy and physics and history. Yes, I was an idiot. 

As early as the second grade, I believed that when I got to college, then I would finally have access to everything. And so, when I went to Guilford College in North Carolina, I couldn’t wait and in my first semester, I signed up for 24 credit hours of courses. I had to get permission from the dean for the extra hours above the normal 18 that for most students was a full course load. I grabbed Ancient Greek language, astronomy, Shakespeare, the history of India, esthetics, music theory — and over four years, everything I could think of. 

To my surprise and disappointment, not all of it was as edifying as I had hoped and not all the professors as brilliant as I had imagined. Still, it was a lot better than high school. 

What I sought was knowledge that was encyclopedic, encompassing all there was to know. Yes, I know now that all this was silly. I was young, naive and idealistic. Always a poor combination. 

The match that ignited this quest was probably the first actual encyclopedia I had. When I was in grade school, our next door neighbor, who worked for Doubleday publishing in New York, gave us boxes of books, mostly old, and that included a full set of Compton’s Pictured Encyclopedia — probably the set our neighbor had when he was a boy. It dated from the 1930s and had great imagination-burning articles on such things as “The Great War,” dirigibles, and steam locomotives. The endpapers of each volume included illustrations of such things as elevated roads, autogyros, and speedboats. 

It didn’t matter that much in the set was out of date. It was a multi-volume key to unlock a whole world.

Later, our parents bought a more up-to-date Funk and Wagnalls encyclopedia, purchasing a single volume each week through a promotional deal at the A&P supermarket. It was a much cheaper production, on cheaper paper, with blank endpapers, but at least it included the Second World War. 

All through my childhood and adolescence, I would grab a volume and randomly read entries. I would pore over its pages, reading it all for fun. When I had to write a term paper in high school, I did my research in our Funk and Wagnalls. 

I can’t say I read every article in the whole encyclopedia, but I may have come close. 

And as I grew, my ambition grew: I wanted, more than anything, to own the two great compendia of all human knowledge: The Oxford English Dictionary and the Encyclopedia Britannica. Both were well out of my price range, but I lusted, the way most boys my age lusted after Raquel Welch. 

Years later, after college, the OED was published in a two-volume compact form, with microscopic print and a magnifying glass to read it, and I managed to get a copy through signing up for a book-of-the-month club. I still have it, and I still browse through it to find random words, their histories and the curious way language changes over the years. 

The Britannica took longer. The Encyclopædia Britannica, or, A Dictionary of Arts and Sciences, compiled upon a New Plan, was published in Edinburgh first in 1768, as an answer to Diderot’s Encyclopédie. At first, it was bound in three equally sized volumes covering: Aa–Bzo; Caaba–Lythrum; and Macao–Zyglophyllum. There have been 15 editions since, but each edition was continually updated, making the Britannica a constantly evolving entity. It was briefly owned by Sears and Roebuck, and eventually migrated to the University of Chicago. Currently it is privately owned and only available digitally. They stopped printing it in 2010. 

It wasn’t until I was in my 30s, when working as a teacher in Virginia, that I found an old used set of Britannicas at a giant book sale held annually in the city’s convention center. It was an 11th Edition set — still regarded as the most desirable edition. I felt like Kasper Gutman finally getting his hands on the Maltese Falcon. But when I unwrapped my prize, it was, in fact, the real thing. 

It sat, in pride of place, on my bookshelves, more as trophy than anything else. And when we moved to Arizona, I had to give it up in the great divestment of worldly goods necessary to truck our lives across a continent. I hated to give it up, but had to admit, I wasn’t using it as much as I had expected. I had an entire library of other books that I could consult. 

Then, in Arizona, I came across a more recent edition of the Britannica for sale at Bookman’s, a supermarket-size used book store in Mesa, Ariz. It was the version divided into a “macropedia” and “micropedia.” I bought it to replace the earlier version I had once coveted. 

I have never warmed to this version of the encyclopedia — a smaller set with simpler, introductory articles about a wider range of subjects, and a longer set with in-depth scholarly articles about a smaller range of more commonly referenced subjects. It felt dumbed down — and worse, confusing, because you could never quite tell if you should first consult the micro- or the macro- section of the series. 

But at least, I still owned a Britannica, and felt that somehow, I possessed, if not the actual knowledge of the universe, at least access to it.

The end of Britannica was also the end of my obsession with it. With the advent of Wikipedia, I no longer needed to shuffle through pages of multiple volumes, sort through indexes, or cross-reference material. In researching a story for my job as art critic with the newspaper, I could just go online and get the birth date of Picasso or the list of art at the Armory Exhibit of 1913. Wikipedia was easier to use, and for my purposes, just as accurate as my beloved Britannica. 

And so much easier to use. I cannot now imagine being a writer without Wikipedia. If I need a date or to check a spelling, it is instantly available. 

And just as I spent time as an adolescent swimming through my Compton’s or Funk and Wagnalls, reading random articles for the fun of it, I now spend some portion of my time sitting in front of my computer screen hitting the “random article” button on Wikipedia to read about things I wouldn’t have known to be interested in. Lake Baikal? Yes. Phospholipidosis? It is a “lysosomal storage disorder characterized by the excess accumulation of phospholipids in tissues.” De Monarchia? A book by Dante Alighieri from 1312 about the relationship of church and state, banned by the Roman Catholic Church. I know of some politicians who might profit by reading Dante. 

It’s fun picking up random bits of information like this. But it also demonstrates why my interest in owning all the world’s knowledge in book form has evaporated. 

First, the cosmos is infinite and packing 20 volumes of an encyclopedia with information about it is really like taking a teacup to the ocean. Second, knowledge keeps changing and growing. What we thought we knew a hundred years ago has been replaced by more complete data and theory — and so knowledge is not so much a teacup as a sieve. 

Then there is the even knottier problem: that knowledge isn’t even the most important part of understanding. Facts are good, and I wouldn’t want to be without them, but infinitely more essential is the interrelationship between them; the complexity of the human mind as it interacts with what it knows, or thinks it knows; the moiling stew that is the mix of thought and emotion; the indistinct borders of learning and genetic inheritance; the atavistic tribalism that seems to overcome any logic; the persistence of superstition, magic and religion in how we understand our Umwelt; and ultimately, the limitations of human understanding — how much more is there that we not only don’t know, but cannot know, any more than a goldfish can understand nuclear fission. 

The reality of our existence is both infinite and unstable. Trapping it in print is an impossibility. It swirls and gusts, churns and explodes. Any grasping is grasping handfuls of air. We do our best, for the nonce, and must be satisfied with what we can discern in the welter. 

I think of Samuel Johnson’s heartbreaking preface to his 1755 Dictionary, which every thoughtful person should read and lock to mind. “To have attempted much is always laudable, even when the enterprise is above the strength that undertakes it: To rest below his own aim is incident to every one whose fancy is active, and whose views are comprehensive; nor is any man satisfied with himself because he has done much, but because he can conceive little. … When I had thus enquired into the original of words, I resolved to show likewise my attention to things; to pierce deep into every science, to enquire the nature of every substance of which I inserted the name, to limit every idea by a definition strictly logical, and exhibit every production of art or nature in an accurate description, that my book might be in place of all other dictionaries whether appellative or technical. But these were the dreams of a poet doomed at last to wake a lexicographer. … I saw that one enquiry only gave occasion to another, that book referred to book, that to search was not always to find, and to find was not always to be informed; and that thus to pursue perfection, was, like the first inhabitants of Arcadia, to chase the sun, which, when they had reached the hill where he seemed to rest, was still beheld at the same distance from them.”

Amen.

I have no belief in ghosts, spirits or ouija boards and I don’t believe that the past hangs on to the present to make itself palpable. But I have several times experienced a kind of spooky resonance when visiting certain famous battlefields. 

The thought re-emerged recently while watching a French TV detective show that was set in Normandy, and seeing the panoramas of the D-Day landing beaches. I visited those beaches a few years ago and had an overwhelming rush of intense sadness. It was inevitable to imagine the thousands of soldiers rushing up the sands into hellish gunfire, to imagine a thousand ships in the now calm waters I saw on a sunny day, to feel the presence in the concrete bunkers of the German soldiers fated to die there manning their guns. 

The effect is entirely psychological, of course. If some child with no historical knowledge of the events that took place there were to walk the wide beach, he would no doubt think only of the waves and water and, perhaps, the sand castles to be formed from the sand. There is no eerie presence hanging in the salt air. The planet does not record, or for that matter, much note, the miseries humans inflict on each other, and have done for millennia. 

But for those who have a historical sense, the misery reasserts itself. Imagination brings to mind the whole of human agony. 

Perhaps I should not say that the earth does not remember. It can, in certain ways. Visiting the woods of Verdun, I saw the uneven forest floor, where the shell craters have been only partially filled in. Once, the trees were flattened by artillery, leaving a moonscape littered with corpses. The trees have grown back, but the craters are still discernible in the wavy forest floor. 

This sense came to me first many years ago visiting the Antietam battlefield in Maryland. There is a spot there now called Bloody Lane. Before Sept. 17, 1862, the brief dirt drive was called the Sunken Road, and it was a shortcut between two farm roads near Sharpsburg, Md. All around were cornfields rolling up and down on the hilly Appalachian landscape.

The narrow dirt road, depressed into the ground like a cattle chute, now seems more like a mass grave than a road. And it was just that in 1862, when during the battle of Antietam Creek, Confederate soldiers mowed down the advancing Federals and were in turn mowed down. The slaughter was unimaginable.

You can see it in the photographs made a few days after the battle. The soldiers, mostly Southerners, fill the sunken road like executed Polish Jews. It was so bad, as one Union private said, “You could walk from one end of Bloody Lane to the other on dead soldiers and your feet would never touch the ground.”

Even today, with the way covered with crushed blue stone, the dirt underneath seems maroon. Perhaps it is the iron in the ground that makes it so; perhaps it is the blood, still there after 160 years.

Antietam was the worst single day of the Civil War. Nearly 23,000 men were killed or wounded. They were piled like meat on the ground and left for days before enough graves could be dug for them. There were flies, there was a stench. The whole thing was a fiasco, for both sides, really.

But all these years later, as you stand in Bloody Lane, the grassy margins of the road inclining up around you and the way lined with the criss-cross of split-rail fencing, it is painful to stand in the declivity, looking up at the mound in front of you, covered in cornstalks on a mid-July day. You can see that when the Yankees came over the rise, they were already close enough to touch. There was no neutralizing distance for your rifle fire to travel, no bang-bang-you’re-dead, no time, no room for playing soldier. Your enemy was in your face and you had to tear through that face with lead, the blood splattered both Federal and Confederate, in one red pond among the furrows. In four hours, on a 200-yard stretch of Bloody Lane, 5,000 men were blown apart.

It is difficult to stand in Bloody Lane and not feel that all the soldiers are still there, perhaps not as ghosts, but as a presence under your boot-sole, there, soaked into the dirt.

It is almost, as some cultures believe, as if everything that happens in a place is always happening in that place. The battle was not something that had merely occurred before my great-grandfather was born, but a palpable electricity in the air. You cannot stand there in Bloody Lane and not be moved by that presence.

A similar wave of dismay overcame me at several Civil War sites: Shiloh; Vicksburg; Fredericksburg; Cold Harbor; Petersburg; Appomattox. Always the images rise in the imagination. Something epochal and terrible happened here. 

Visiting the Little Bighorn Battlefield in Montana, you see gravestones on the slope below the so-called “Last Stand,” but you also look down into the valley where the thousands of Sioux and Cheyenne were camped. 

I’ve visited Sand Creek and Washita. And Wounded Knee. That was the most disturbing. You travel through the Pine Ridge Reservation and the landscape is hauntingly beautiful, then you pull into the massacre site and you see the hill where the four Hotchkiss guns had a clear shot down into the small ravine where the victims huddled. The sense of death and chaos is gripping. The famous image of the frozen, contorted body of Big Foot glowers in the imagination. It feels like it is happening in a past that is still present. 

This sense of horror and disgust wells up because of the human talent for empathy. Yes, I know full well that there are no specters of the victims waiting there for me, but my immediate sense of brotherhood with them resurrects them in my psyche. I am human, so I know that those dead were just like me. I can imagine myself bowel-loosening scared, seeing my comrades to either side blown to pieces as an enemy I’ve never met, and might have been friends with, races toward me with bayonet stretched in front of him, eyes wide with the same fear. 

History is an act of the imagination. The most recent may be memory, but for me to know what my father went through in France and Czechoslovakia in World War II requires my identification with him, my psyche to recognize the bonds I share with him — and with all of humanity. 

So, when visitors are shaken by visits to Auschwitz or stand on the plains of Kursk, or the shores of Gallipoli, they well may sense that history as more present than past. I have had that experience. The ghosts are in me.

Caryl Chessman was what they used to call “a nasty piece of work.” Born in 1921, by the time he was 20 he had spent time in three reform schools and three prisons, including San Quentin, Chino and Folsom. He stole his first car at 16, and by the time he was caught, for the umpteenth time, in 1947, he had robbed liquor stores, burgled homes, stolen cars and, finally, led police on a bullet-riddled, 10-mile high-speed car chase through Los Angeles. When he was stopped, he ran from the car into the neighborhood, chased on foot by the cops until he was tripped up and handcuffed. 

Or, as Chessman himself put it, “I am not generally regarded as a pleasant or socially minded fellow.”

He would have been just another petty criminal with a lifetime of serial prison terms but for a series of sensational crimes, which ended in that car chase and capture. 

In late 1947 and early 1948, there was a series of robberies in Los Angeles in which a man in a gray Ford coupe with a red police light would flag down drivers, point a gun in their face and rob them of what cash they had. He became known in the papers as the “Red Light Bandit.” A special police task force was created to track him down. 

On Jan. 19, the red light bandit stopped a car with a couple in it, robbed the man and then dragged the woman out of the car and forced her to perform fellatio on him. Three days later, he stopped a man and the man’s 17-year-old neighbor, robbed him and took the girl into his car, drove several miles, stopped and attempted to rape her. Failing in that, he attempted anal rape and, failing in that, forced her to perform oral sex on him. He then drove her to her neighborhood and left her on the side of the road. 

The next day, police spotted the Ford coupe, implicated in the crimes, and gave chase, gunfire and all. And when they caught Chessman (and his accomplice, who was later convicted of several of the robberies he had pulled off with Chessman), the young rape victim identified Chessman as her assailant. Later, the first victim also identified him. Evidence was found in the coupe, including a red ribbon the young girl had lost in the attack. 

A three-day trial ended with Chessman convicted of 17 counts of robbery, rape, attempted rape and kidnapping. Chessman had chosen to act as his own attorney. The judge tried his best to dissuade him, but Chessman persisted, and defended himself incompetently. 

The problem that brought Chessman to national and international attention was that he was convicted, in part, under a 1933 California law, called a “baby Lindbergh law,” which permitted the death penalty in cases where kidnapping was accompanied by another crime and led to “bodily harm.” Chessman’s crime ticked all the boxes. No one had been killed, though, and for many, including California’s governor Pat Brown, the death penalty in such cases was too extreme. Brown was a vocal opponent of the death penalty. 

Chessman, protesting his innocence and claiming to know who the “real” red light bandit was, filed appeal after appeal, getting nowhere. Various judges and judicial panels stayed the execution order, and eight times Chessman came to the brink of death, always reprieved at the last minute. This went on for nearly 12 years, becoming an international cause célèbre and a rallying point for the anti-death-penalty movement. 

“A cat, I am told, has nine lives,” wrote Chessman. “If that is true, I know how a cat feels.”

He wrote three books while on death row. As he put it in one of them, The Face of Justice, “I won the dubious distinction of having existed longer under death sentence than any other condemned man in the nation’s then 179-year history. Day after day, I would go on breaking my own record.”

Chessman’s case was covered widely in the news, especially in the last year. He was featured on the covers of Time magazine and Germany’s Der Spiegel. Rallies were held worldwide. Folk singers wrote ballads. 

I was in seventh grade at the time, and remember quite well how much Chessman was in the news. It was the first time I was ever fully thoughtful about the death penalty. 

In the years Chessman waited for his date with the gas chamber, the baby Lindbergh law was repealed, although not retroactively. The anti-death-penalty movement gained traction, and Gov. Brown found himself unable to commute Chessman’s sentence because California law required that the state supreme court concur in the commutation, which it refused to do. 

And so, on May 2, 1960, Chessman was led to the gas chamber at San Quentin Prison. 

______________________________

“I guess I’ll just have to practice holding my breath.”

— Caryl Chessman

______________________________

It is called “capital punishment” because “capital” (“of the head,” derived via the Latin capitalis from caput, “head”) refers to execution by beheading, one of the older, and historically quite popular, forms of criminal execution. 

The gas chamber was originally considered an improvement on earlier methods, and thought less cruel than hanging, electrocution, beheading or being shot by a firing squad. Just how “painless” death by cyanide gas is remains open to question. At best, it is not a pretty sight. 

The death of Caryl Chessman in San Quentin’s gas chamber on May 2, 1960, was described by several reporters. He was brought to the large metal capsule containing two chairs, side by side. He was placed in one, and straps were attached to his arms, legs and torso. A doctor attached a long stethoscope tube to the condemned’s chest. 

Chessman; San Quentin gas chamber; wax exhibit of Chessman at Mme. Tussaud’s

One reporter wrote, “The doctor of the prison walks up and utters the victim’s full name — you know, like ‘Richard Allen McVictim’ — the full legal name you only hear when you are in trouble.”

“After they strap the soon-deceased into the metal chair, one of the guards, usually at 10:02 a.m., is wont to tell the victim something like, ‘Take a deep breath as soon as you smell the gas — it will make it easier for you’ (‘how the fuck would you know’ is what Barbara Graham is legended to have replied when she was executed in 1955).”

Continuing from another reporter, “The execution squad left the chamber and quickly closed and sealed the big airtight door. At 10:03 a.m., Warden Dickson nodded to Max Brice, the state executioner, a tall man in a dark business suit, who stood next to him. Brice moved a lever and a dozen egg-shaped Dupont cyanide pellets in a cheesecloth bag were lowered into a vat of sulphuric acid under the death chair. Almost instantly, deadly invisible fumes began to rise in the chamber. Chessman took a deep breath and held it, warding off unconsciousness for as long as possible. But the fumes must have reached him very quickly because witnesses saw his nose twitch, then he expelled the breath he was holding and breathed in. He looked over at Eleanor Black once again and smiled a sad, half-smile just before his head fell forward. Seconds later, foamy saliva began to drool from his open mouth.”

Further on, “In the chamber, Caryl Chessman’s body began to react to the death that was seizing it. He vomited up part of his breakfast; his bladder and bowels emptied inside his clothes. Then his heart stopped beating. At 10:12 the physician listening through the stethoscope advised Warden Dickson that Chessman was dead. Dickson turned to one of the execution squad officers. ‘Start the blowers.’ The officer threw a switch and a fan high above the chamber began to suck out the fumes and the stench.”

Public hanging; Ruth Snyder in the electric chair, 1929; Weegee photo of gas chamber execution

Finally, from another account, “Reporters one has interviewed who have witnessed executions say that there are screams, coughing, hacking, wild facial grimaces and drool. The murdered human loses control over his system, drooling…  The body slumps. After 8 to 10 minutes, the heart stops. The gas is sucked out of the chamber, the puke and defecation is hosed from the metal, the body is hauled away.”

Death by cyanide gas was first introduced in 1924 with the execution of Tong gang murderer Gee Jon in Carson City, Nev. The state supreme court ruled that gas was not “cruel and unusual punishment,” but should be considered as “inflicting the death penalty in the most humane manner known to modern science.”

The desire to kill offenders humanely drove justice systems and penologists from the late 1700s onward. The most famous of “humane” methods was proposed by Joseph-Ignace Guillotin to the French National Assembly in 1789 in the form of decapitation “by means of a simple mechanism.” Although that mechanism had been around in various forms for centuries, it took on Guillotin’s name: a tall wooden frame holding a heavy angled blade that would drop from a height and sever head from body more cleanly and with less fuss than the older, more traditional sword or ax decapitations, which were sometimes rather gruesome and could take several whacks to get it right. 

Last public execution by guillotine in France, 1939

The guillotine — aka the “National Razor” — was last used in France for a public execution in 1939, was last used at all in 1977, and was outlawed in 1981. In the 20th century, the process could be quite efficient, as seen in a moment of Gillo Pontecorvo’s 1966 hyper-realistic film, The Battle of Algiers. A prisoner is marched to an inner courtyard of the prison, where he is strapped to a vertical board, which pivots down under the blade; the blade falls and severs the head — a process that takes all of two or three seconds from start to finish. No ritual, no ceremony. If you are thinking of scenes of Louis XVI or Marie Antoinette on the scaffold making noble speeches, forget it. Pivot, slice, bounce. Quick as that. The headless body is then rolled over onto a gurney and, with the head thrown in, wheeled away. 

Each new version of “humane” execution has been followed by another as the previous came to be recognized as barbaric. And so, in the U.S., the gas chamber and electrocution were followed by lethal injection and, most recently, by nitrogen asphyxiation. 

The barbarity of earlier methods is appalling to modern sensibilities. Before the Age of Enlightenment, and before jails and prisons became widespread in the 19th century, the primary punishments were public humiliation (such as the stocks), fines, torture, and death. That was pretty much the gamut. Death was meted out even for petty crimes. 

In Medieval England, you could be executed for: theft; cattle rustling; blasphemy; sodomy; incest; adultery; fraud; insult to the king; failure to pay taxes; extortion; kidnapping; being Roman Catholic (at one point); being Protestant (at another point) — and the list goes on. It is estimated that some 72,000 people were executed under the reign of Henry VIII alone. 

As late as 1820, in the British era of the so-called “Bloody Code,” you could be executed for shoplifting, insurance fraud, or cutting down a cherry tree in an orchard. 

And we haven’t even mentioned the sin of being the wrong ethnicity. Millions upon millions have been murdered under official sanction for that, not only in Nazi Germany, but in the Ottoman Empire, in China, or Burma, or Rwanda. Genocide is a new word, but for an ancient practice. 

And historically, the methods could be grisly. Torture on the wheel, for one, where you would be tied to a wheel, perhaps hoisted up in the air, to die over a period of days from hunger and thirst, or from the broken bones and ruptured organs caused by the process. 

There was the gibbet, where the condemned was hung in a cage in public, again to die of hunger and thirst over days exposed to the elements.

One of the oldest methods was stoning, popular in the Old Testament and in modern fundamentalist Islam. It is a punishment still in use in some Islamic countries. 

 

As is beheading, the official execution method of Saudi Arabia. 

Other methods used in the past include: burning at the stake; boiling in oil or water; being crushed by an elephant; being fed to lions or other animals; being trampled by horses; being buried alive; crucifixion; disembowelment; dismemberment (as in being drawn and quartered); drowning; being pushed off a cliff; being flayed alive; garrotting; being walled up (immurement); being crushed under weights; impalement; having molten metal poured down the gullet; being tied in a sack with wild animals and thrown into a river; poisoning; methodically slicing off body pieces until death; suffocation in ash. 

And there’s always tying the condemned to the mouth of a cannon and firing it, blowing the victim to pieces. To prevent the spilling of blood, ancient Mongolians would execute prisoners by breaking their backs. Then there’s the Brazen Bull, a hollow bronze effigy of a bull wherein the condemned was encased and cooked as a fire burned underneath. The Soviet method was to simply shoot the prisoner in the back of the head, often without warning. And there was the Viking “Blood Eagle,” in which the condemned was laid face down and his ribs cut from his backbone on both sides, spread open, and the lungs pulled out and laid across his back as “wings.” 

The methods seemed to elicit the most sadistic tendencies of the human race. Death by torture was common. 

Western cultures have slowly weaned themselves from capital punishment over the past two centuries, albeit in fits and starts. It is uncommon in the developed world, but still practiced in much of the rest of the world. The U.S. remains largely squirmy at the idea, but, state by state, has either outlawed the death penalty or reveled in it (I’m looking at you, Texas). 

The arguments for and against continue to be made. Is it retribution; is it meant to discourage crime; or is it a hygienic process to eliminate unwanted criminal elements from society? With so many convictions being overturned by newer evidence, especially DNA evidence, can it still be justified? 

As Moses Maimonides said in the 12th century, “It is better and more satisfactory to acquit a thousand guilty persons than to put a single innocent one to death.”

San Quentin lethal injection room 

Currently, there are just under 2,500 inmates in the U.S. awaiting execution. The average time between sentencing and execution is now 20 years, far exceeding Chessman’s 11 years and 10 months. Forty-two percent are White; 42 percent are Black; 14 percent are Hispanic. Just under 98 percent are male, and more than two-thirds lack even a high-school education. Since 1972, 1.6 percent have been formally exonerated and released. 

There were a lot of pleasures to working for a newspaper before the imposition of austerity that followed corporate buy-outs. The earlier parts of my career in the Features Department with The Arizona Republic in Phoenix, Ariz., came with great joys. 

Before being eaten up by Gannett, The Republic was almost a kind of loony bin of great eccentrics, not all of whom were constitutionally suited to journalism. In those days, it was fun to come to work. When Gannett took over, it imposed greater professionalism on the staff, but the paper lost a good deal of personality. Those who went through those years with me will know who I’m talking about, even without my naming names. There was a TV writer who tried to build himself a “private sanctum” in the open office space, made out of a wall of bricks of old VHS review tapes. There was a society columnist who refused to double-check the spelling of names in his copy. A movie critic who could write a sentence as long as a city bus without ever using an actual verb. She was also famous for not wearing underwear. 

I could go on. There was the travel writer who once wrote that in Mexico City there had been a politician “assassinated next to the statue commemorating the event.” And a naive advice columnist whose world-view could make a Hallmark card seem cynical. The book editor seemed to hate the world. The history columnist was famous for tall-tales. 

And let’s not forget the copy editor who robbed a bank and tried to escape on a bicycle. 

There were quite a few solid, hardworking reporters. Not everyone was quite so out-there. But let’s just say that there was a tolerance for idiosyncrasy, without which I would never have been hired. 

The newspaper had a private park, called the “Ranch,” where employees could go for picnics and Fourth-of-July fireworks. The managing editor was best known for stopping by your desk on your birthday to offer greetings.  

What can I say? Just a few months before I was hired, the publisher of the paper resigned in disgrace when it was revealed that his fabulous military career as a Korean War pilot (he was often photographed in uniform with his medals) was, in fact, fabulous. It was a fable he made up. 

And so, this was an environment in which I could thrive. And for 25 years, I did, even through corporate de-flavorization and a raft of changing publishers, executive editors, editors-in-chief and various industry hot-shots brought in to spiffy up the joint. I was providentially lucky in always having an excellent editor immediately in charge of me, who nurtured me and helped my copy whenever it needed it. 

(It has been my experience that in almost any institution, the higher in management you climb, the less in touch you are with the actual process of your business. The mid-level people keep things functioning, while upper management keeps coming up with “great ideas” that only bollix things up. Very like the difference between sergeants and colonels.)

The staff I first worked with, with all their wonderful weirdnesses, slowly left the business, replaced with better-trained, but less colorful staffers, still interesting, still unusual by civilian standards, but not certifiable. The paper became better and more professional. And then, it became corporate. When The Republic, and the afternoon Phoenix Gazette, were family-owned by the Pulliams, we heard often of our “responsibility to our readers.” When Gannett bought the paper out, we heard instead of our “responsibility to our shareholders.” Everything changed. 

And this was before the internet killed newspapers everywhere. Now things are much worse. When I first worked for The Republic, there was a staff of more than 500. Now, 10 years after my retirement and decimated by corporate restructuring and vain attempts to figure out digital journalism, the staff is under 150. I retired just in time. 

Looking back, though, I realize that every job I’ve ever had has had its share of oddballs. 

The first job I had, in my senior year at college, was on the groundskeeping team at school. It was full of eccentrics, mostly Quakers fulfilling their alternative service as conscientious objectors during the Vietnam war. One day, Bruce Piephoff and I were trimming the hedges at the front gate and he lit up a joint and offered me one. Traffic streamed in front of us, but he didn’t seem to mind. A few years later, Piephoff robbed a restaurant, grabbing everything he could from the till and then walking up the street throwing the cash at anyone he passed. He seems to have done well since then, now a singer and recording artist. 

Later, I worked at a camera store. My manager was Bill Stanley, who looked rather like Groucho in his You Bet Your Life days. Stanley chewed on a cigar all day, turning it into a spatulate goo. He had an improvisatory relation with the English language. When an obnoxious customer began spouting stupid opinions, Stanley yelled at him, “You talk like a man with a paper asshole.” When someone asked about the big boss, Stanley told her, “He came through here like a breeze out of bats.” Every day there were new words in new orders. 

When I worked at the Black weekly newspaper, the editor was a drunk named Mike Feeney, who had once worked at The New York Times. I would see him daily, sitting at his desk surrounded by a dozen half-finished paper cups of coffee, some growing mold, filling out the Times crossword puzzle — in ink! And he would finish it before ever getting to the “down” clues. He gave me my first lessons as a reporter. “What reporting is,” he said, “is that you call up the widow and you say, ‘My condolences, I’m sorry that your husband has died, but why did you shoot him?’” 

The zoo in Seattle was also full of crazies. There was Bike Lady, Wolf Man, Gorilla Lady. And the kindly old relief keeper, Bill Cowell. One day, the place was full of kids running around screaming, spilling soda pop and popcorn, and Bill leaned over to me, “Don’tcha just wanna run them over?” 

And I finally got to be a teacher, in the art department of a two-year college. The art staff was especially close, and we had dinner together about once a week. There were some great parties. One Thanksgiving featured a contest to make sculpture out of food. The winner was an outhouse made from cornbread, with a graham cracker door and half a hard-boiled egg as a privy seat. I made a roast chicken in the form of Jackie Gleason, with a pear attached as his head. Another time, the drawing teacher, Steve Wolf, helped us put on a shadow-puppet show. He had us falling on the floor with a thoroughly obscene performance he called “The Ballerina and the Dog.” 

And so, I suppose I have always worked with a class of people outside the normal order. So, when I was hired by the Features editor at The Republic and he was wearing Japanese sandals, it hardly registered with me. Mike McKay gave me my first real job in newspapers. 

 But, oh, how I loved my years there. Newspapers everywhere were profit-rich and the paper was willing to send reporters all over to cover stories. I benefited by getting to travel across the country, and even the world. 

I was primarily an art critic — and ran immediately afoul of the local cowboy artist fans when I reviewed the annual Cowboy Artists of America exhibition and sale at the Phoenix Art Museum. It was one of the major events on the social calendar, when all the Texas oil millionaires would descend on Phoenix to buy up pictures of cowboys and Indians. 

The event was an institution in the city, but I wasn’t having any of it. I wrote a fairly unfriendly review of the art and got instant pushback. I wrote, among other things, “It’s time, Phoenix, to hang up your cap pistols. It’s time to grow up and leave behind these adolescent fantasies.” And, “their work is just, well, maybe a few steps above black velvet Elvis paintings.” I was hanged in effigy by Western Horseman magazine. It was great fun. 

But my portfolio expanded, and by the end of my sojourn in the desert, I was also dance critic, classical music critic and architecture critic — one of the last things I did was complete a 40,000-word history of Phoenix architecture. I also became back-up critic for theater and film. And I wrote hundreds of travel stories. 

The paper sent me to Boston, New York, Chicago, Miami, San Francisco, Reno, and almost once a year, to Los Angeles. I covered major art exhibits by Van Gogh, Cezanne, Audubon, Jackson Pollock, among others. 

Because Frank Lloyd Wright had a Scottsdale connection, I wrote about him often and got to travel to and write about many of his most famous buildings, including Taliesin in Wisconsin and Fallingwater in Pennsylvania. 

Pacific Coast Highway

But the best were the travel stories, as when they let me take 10 days to drive up the Pacific Coast Highway from Tijuana to Vancouver, or another time when I also drove from Mexico to Canada, but along the Hundredth Meridian in the center of the continent — and then down the Mississippi River from Lake Itasca in Minnesota to the Gulf of Mexico. Over several different trips, I cobbled together a series of stories about the Appalachian Mountains from Alabama to the Gaspé Peninsula. 

Mississippi River near Cairo, Ill. 

I had assignments that let me cover all the national parks in Utah, and several excursions to every corner of Arizona. In 1988, I went to South Africa for the paper. 

Indian Ocean, Durban, South Africa

Of course, when Gannett took over, the travel miles shrank to near zero. They didn’t want to pay for anything they didn’t absolutely have to. 

I left in 2012. The handwriting was on the wall. Thoughtful pieces about art and culture were no longer wanted. We were asked to provide “listicles,” such as “Top 5 things to do this weekend.” After I left, I heard from former colleagues how the photography staff was let go, the copy editors were fired — how can you run a newspaper with no copy editors? They are the heart of a newspaper. They saved my butt I don’t know how many times. But no, they are all gone. 

It was a sweet spot I was lucky to have landed on, to be able to observe the old “Front Page” days in their waning glory, and leave when everything was drowning in corporatism. I have often said that if Gannett thought they could make more money running parking garages, they would turn The Republic building into one. 

When I left, a group of colleagues bought and gave me a blog site. I’ve been writing on it ever since — now just under 700 entries — and it proves what I have always said: writers never really retire, they just stop getting paid for it.

In the 1920s, the world of animated cartoons was the Wild West. The same small group of animators worked for one another, began their own studios, worked for other studios, went bankrupt and built new studios. There were Bray Productions, Van Beuren Studios, Terry Toons, Barré Studios. All populated by a circulating cast of the pioneers of movie cartoons: Paul Terry, Walter Lantz, John Bray, Amadee Van Beuren, Pat Sullivan, Ub Iwerks, Vernon Stallings, Earl Hurd, Grim Natwick. And, of course, Max and Dave Fleischer, and Walt Disney. 

Among them, they created a whole series of popular cartoon characters that populated the silent era of animation: Bobby Bumps, Colonel Heeza Liar, Felix the Cat, Oswald Rabbit, Ko-Ko the Clown, Milton the Mouse, Krazy Kat, Farmer Al Falfa, Mutt and Jeff, and an early, non-cat-and-mouse version of Tom and Jerry. 

There were popular series, such as the Aesop’s Fable animal cartoons and Disney’s early Alice cartoons. 

I watched tons of these old, silent cartoons as a kid when they were licensed to TV syndication. In those early years, television was desperate for content and these cartoon mice and cats filled up the after-school hours. They were part of the cultural landscape for early Boomers; they are now largely lost to prehistory and archivists. Some may be found, usually in fuzzy, bad prints, on YouTube. 

By the end of the 1920s, two studios stood on top of the pile. On the West Coast, via Missouri, there was Disney; in New York, there were the Fleischer brothers. 

The two studios were poles apart. Disney was bland and inoffensive; the Fleischers were urban, surreal and ballsy. Disney came up with Mickey Mouse, who, while visually identifiable, is about the most innocuous character in the history of animation. What is Mickey’s character? Well? He has none. 

But in New York, the Fleischers created the “Out of the Inkwell” series, combining live action and animation. In most of these short cartoons, the dominant brother, Max, would open a bottle of ink and out would pop his characters, or he would draw them and they would come to life. His chief character was Ko-Ko the Clown (later, just Koko). 

Max Fleischer was an energetic inventor and came up with many of the techniques that have since become common in animation. He held some 200 patents. Fleischer was an artist and could draw fluently. Disney, by contrast, was an indifferent draftsman, but, unlike the hapless Fleischers, he was a world-class businessman who went on to found an empire of cartoons, films, TV shows and amusement parks. Disney made his cartoons for a researched audience; the Fleischers made their animations for themselves. By the end of World War II, they could no longer compete. 

But what a run they had, beginning with Koko, followed by Bimbo the dog and then, paydirt — Betty Boop was born. She arrived in this world at the same time as talkies. It is impossible to imagine Betty without her voice. 

Disney often claimed to have made the first animated cartoon with sound, the 1928 short Steamboat Willie, with Mickey Mouse. But by then, the Fleischers had been running cartoons with sound for four years; they pioneered the “bouncing ball” sing-along cartoons. For that matter, Steamboat Willie is hardly a “talkie” — there is no dialog in it, only sound effects. And compared with Disney’s later slickness, it is a surprisingly crude cartoon. The Fleischers were miles ahead of Disney technically. 

Betty first appeared in the 1930 cartoon Dizzy Dishes. Fleischer asked one of his animators, Grim Natwick, to come up with a girlfriend for their established character, Bimbo, and Natwick whipped up a sexy poodle. She has only a bit part in Dizzy Dishes, singing onstage, while the hero, Bimbo, sees her and falls in love. Betty has long doggy ears and a dark, wet doggy nose. 

Later, Betty evolved into a Jazz-Age flapper with human ears and nose, and a sexy backless dress, short enough to show off her garter. 

Betty got her own series of cartoons, and from 1930 to 1939, she starred in 90 releases. Her career spanned two eras in early film history — the pre-Code days until 1934, and then the clamp-down by the Catholic Legion of Decency and the Hollywood Hays Code, which put quite a crimp in Betty’s style. 

After her first 30 films, the Code kicked in. In her final pre-Code short, Betty Boop’s Trial (June, 1934), Betty turns her back on the camera, flips up her tiny skirt to show off her panties and bottom. 

It’s hard now to recognize just how shocking and adult the Betty Boop cartoons could be. The Fleischer cartoons were made for grown-up audiences, not just kids. There was sex and violence, gritty urban scenes and even a recognition of the Great Depression. She was, after all, a Jazz-Age independent woman. 

And despite some unfortunate blackface scenes, Fleischer films were surprisingly integrated. African-American musicians Louis Armstrong and Cab Calloway star in several Boop-toons. I don’t want to make too much of this; there are still some horrid African cannibal stereotypes and some blackface in several of the cartoons. But the Fleischers seem to have been fairly progressive for their time. Some of their cartoons were refused showing in the South. 

And Betty was a Jazz Age Modern Woman — at least before 1934. Racy and risque, she dances a topless hula in Betty Boop’s Bamboo Isle (1932); prances before the fires of hell in a see-through nightie in Red Hot Mamma (1934); has her dress lifted up to show her underwear in Chess-Nuts (1932). 

Betty’s virtue is frequently besieged and there is little subtlety over what her signature phrase means in 1932’s Boop-Oop-A-Doop. Betty is a circus performer and the giant beast of a ringmaster lusts after her. In a creepy Harvey Weinstein move, the ringmaster paws all over Betty and implies that if she doesn’t do what he wants, she will lose her job. Betty sings “Don’t take my boop-oop-a-doop away.” 

By the end, she is saved by Koko the clown and her virginity is safe. “No! He couldn’t take my boop-oop-a-doop away!” 

In the 1933 Betty Boop cartoon Popeye the Sailor, she appears only briefly, doing the topless hootchie-coochie dance from Bamboo Isle, joined onstage by Popeye, also wearing a grass skirt. The short marked Popeye’s first appearance in animation. 

As Betty became tamed and dowdy after 1934, her popularity waned just as Popeye’s grew; he eventually overtook her as the Fleischers’ prime property. 

By the end, in 1939, Betty had turned into a swing-music figure. Her head-to-body ratio had subsided. Her skirts lengthened and her neckline rose. Then she disappeared.

The Fleischers continued making Popeye cartoons, then lost their company to Paramount Pictures, which fired the brothers. Their last project with Paramount was the stylish, Art Deco-inspired Superman cartoons. 

Betty had a resurgence in the 1980s, not as a cartoon star but as a pop-culture figure and even a feminist icon, riding a merchandising boom. She appealed to a generation that may not even have known she had been an animated cartoon star. 

__________________________

“Don’t take my boop-oop-a-doop away!”

— “Boop-Oop-A-Doop,” 1932

__________________________

When Betty Boop was introduced in 1930, Fleischer animator Grim Natwick based her look and sound on popular singer Helen Kane, who had a baby-voice and scatted as she sang. In 1932, Kane sued the Fleischers over their use of her image. It came to trial in 1934. 

“Your honor,” said Kane’s lawyer to Justice Edward J. McGoldrick, “we contend this character [Betty Boop] has Miss Kane’s personality, her mannerisms, her plumpness, her curls, her eyes, and that she sings the songs Miss Kane made famous.” It was a style Kane called the “baby vamp.” 

Kane (1904-1966) was born in New York and by age 15 was performing professionally. She achieved popularity in the 1920s on stage in vaudeville and on Broadway. She recorded 22 songs and made seven films in Hollywood. Her trademark was a squeaky baby voice and scat singing. But her style of singing was going out of fashion by the early 1930s, although she continued to find work onstage. When Betty Boop came out, she sued the Fleischers for $250,000 (equivalent to about $5 million today). The trial lasted two weeks and filled the newspapers with juicy stories. 

Betty Boop was described in the court case as “combin[ing] in appearance the childish with the sophisticated — a large round baby face with big eyes and a nose like a button, framed in a somewhat careful coiffure, with a very small body of which perhaps the leading characteristic is the most self-confident little bust imaginable.”

Kane’s lawyers made a tactical mistake by basing their claim on the fact that she uses scat syllables in her singing, including the famous “Boop-oop-a-doop.” Kane claimed to have invented the practice of singing nonsense words to music, a claim too easily disproven. 

The Fleischers’ lawyer demonstrated that Kane had seen a juvenile Black performer, Baby Esther Jones, who shared a manager with Kane, and that Baby Esther (also sometimes called L’il Esther) used the same scat singing and baby voice before her. 

Kane caricature and Sweet Betty

Trial transcripts can be quite a kick in the pants to judicial dignity. Asked what Baby Esther did on stage, the manager said, “She sang the chorus and during her choruses, we had four bars omitted, which we called the break, of so-called ‘hot licks.’” Question: “During those breaks or ‘hot licks,’ what did Baby Esther do?” Answer: “A funny expression in her face and a meaningless sound.” Q: “Will you tell us what those sounds were?” A: “At various times they differed, they sound like ‘boop-boop-a-doop,’ and sometimes…” Kane’s lawyer then objects. So Fleischer’s lawyer rephrases:

“Give us as nearly as you can how they sounded.” A: “I could do it better if I had rhythm with it.” Q: “Give us the sounds.” A: “Boo-did-do-doo.” Q: “Were there other sounds besides the one that you have just mentioned?” A: “Yes, quite a few.” Q: “Will you give us as many as you can remember?” A: “Whad-da-da-da.” Q: “Others?” A: “There are quite a few — ‘Lo-di-de-do.’” Q: “Any others that you recall?” A: “Sounds like a time she would make a sound like sort of a moaning sound, finished off with ‘de-do.’” 

According to one newspaper account, “That’s when the court stenographer threw up the sponge and admitted he couldn’t spell such things.” 

At one point, film of Betty Boop and film of Helen Kane were shown, without sound, to compare their styles. 

Another newspaper account reported, “Except for the occasional throat-clearings of a roomful of attorneys, it was strictly a silent performance, the court having ruled against any audible ‘booping.’

“Miss Kane’s attorneys strove vainly to have the sound tracks included, saying they wished to show how Betty Boop has ‘simulated our voice and our style of singing,’ but Justice McGoldrick ruled that any ‘booping’ would be incompetent, immaterial and irrelevant.”

Kane with three women who voiced Betty Boop in the cartoons

The fact that Betty Boop was clearly based on Kane (later readily admitted by Boop designer Grim Natwick) hardly mattered. The judge ruled that the sound of a voice cannot be copyrighted and that nonsense-syllable singing was common well beyond Kane’s use of it, and he found for the Fleischers. 

The three women who voiced Betty Boop in the Fleischer cartoons had all earlier been participants in a Helen Kane imitation contest. They had all performed, outside the Boop-toons, as Kane knock-offs. 

The knowledge that Betty Boop imitated the White Kane who imitated the Black Baby Esther has recently raised the specter of “cultural appropriation” — a concept that has now become something of an ethical fad. I know of few things sillier or more pointless. 

After all, Baby Esther was known as a “Little Florence Mills,” imitating that singer. The actual Mills took over on Broadway from Gertrude Saunders in the 1921 hit, Shuffle Along. Each performer borrowing from the previous. 

In the Fleischer trial, the famous pianist and composer Clarence Williams testified that he’d been using the scat technique since 1915. Very few things have virgin birth; most things are developments of other things. We could make the claim that scat began with Stephen Foster and Camptown Races and its “doo-dah, doo-dah,” or trace it back to the 16th century and Josquin des Prez’s frottola El Grillo, where the singer imitates the sound of a cricket. 

As the Fleischer trial judge ruled, “the vocables ‘boop-boop-a-doop’ and similar sounds had been used by other performers prior to the plaintiff.” 

The whole issue of cultural appropriation is bothersome, to say the least. I am not talking here about cases such as when an Anglo artist sells his work as Indian art, pretending to be Native American. That isn’t cultural appropriation, it is simply fraud. But when an artist finds something from another culture that piques his interest and creativity, well, that’s just normal. Everybody is always borrowing from everybody else. It is how culture moves forward. 

Is spaghetti cultural appropriation because the tomato came from indigenous cultures in the New World? 

Mexican, Japanese and Filipino spaghetti

How about when spaghetti makes the journey back to the Americas and becomes Mexican chipotle spaghetti? Or travels further to Japan with added daikon, or to the Philippines, where they add hot dogs (pace Sheldon Cooper)? 

In the Columbian Exchange, the New World gave the Old not only tomatoes, but corn (maiz), cacao, vanilla, potatoes and tobacco. 

The Old gave back to the New grapes, onions, apples, wheat, to say nothing of swine, cattle, horses and honey bees. 

So, today, there is nothing more typically Navajo than satellite dishes and pickup trucks. 

It’s all a great mix, and to forbid such churn is to stall human progress. Culture is never static but always on the move. “Traditional” is always a museum-piece. 

I’m not making a case here for blackface minstrelsy — such things are rightfully seen as appalling. But should that mean that Vanilla Ice should not rap? Or that Jessye Norman shouldn’t sing opera? That we should all be stuck in our particular silos and never learn from others? 

George Harrison should never have learned the sitar? That Sergio Leone should have left the plot of A Fistful of Dollars to Akira Kurosawa? 

That Cubism should be trashed because Picasso became fascinated by African masks? 

This isn’t to say there aren’t egregious examples, but they mostly concern stereotyping — which is to say, ignorance, a failure to see what is actually going on in the other culture. Finding something good and useful in another culture and adapting it is rather different, even if you take the original completely out of context. 

Cross-fertilization is not only one of the pleasures of culture, but one of its essentials. Culture is a group enterprise, not an individual one, and it lives through free exchange. 

The current blather about cultural appropriation reminds me, more than anything else, of the Victorian fear of the body and sex, like calling legs “limbs.” It is blue-stocking puritanism and to hell with it. 


No photographer has had a higher profile in mass culture than Ansel Adams. He was the popular idea of the photographer as artist, and, I’m sure, the only one to have his images printed on beer cans with his name attached. 

His pictures graced not only Coors beer, but books, posters, calendars, aprons, hats and coffee mugs. He was the subject of a Playboy interview, and had his face on the cover of Time magazine.

He had a mountain named for him in California’s Sierra Nevada. That honor came to him less for his photographs and more for his constant advocacy for nature and the environment. 

His earliest photographs were made when Adams was still a teenager with a love for back-country hiking in Yosemite National Park, made with a snapshot camera and drugstore prints. Even those early images show a flair for the dramatic and the careful placement of darks and lights to make a balanced photograph.

Ansel Easton Adams was born in 1902 to a well-off family from San Francisco. As a child, he broke his nose when the 1906 San Francisco earthquake threw him against a garden wall. That bent nose became a trademark of sorts: It leaned left, and the man did, too. He joined the Sierra Club at 17 and was a board member from 1934 on. In later life, he railed against the environmental policies of Ronald Reagan.

His family vacationed in Yosemite Valley; he met his wife there and they ran a visitor center and gift shop, now called the Ansel Adams Gallery.

Early in life, he had planned to be a concert pianist, but eventually gave up keyboard for lens. His ambition was still artistic: He wanted to be more than a recorder of vacation memories. This was at a moment in art history when a number of like-minded photographers were arguing for photography as art, while museums, galleries and collectors believed photography was a merely mechanical reproduction system. 

You can see that aesthetic vision in Adams’ early art prints, in platinum or other early processes, slightly fuzzy, with the popular Impressionistic love of sunlight and shadow. 

But in the 1930s, he converted to a Modernist vision of photography, with sharply focused images printed on glossy paper. His friends included other leading photographers, including Alfred Stieglitz and Edward Weston, all of whom were proving that a photographic print had earned a place on the gallery wall.

But while these other artists worked in many genres, in the 1940s, Adams turned ever more to the kind of Great American Landscape we know him for: the images of national parks and American wilderness. Publishing books of his photographs has become an industry.

When he ventured beyond his strength, sometimes the results were stiff and uncomfortable, like his portraits, which made their subjects as granitic as the cliffs of Yosemite. The lighting is perfect, the focus is sharp, the detail is precise, and yet, they are completely lifeless. His presidential portrait of Jimmy Carter may be the worst presidential portrait ever. 

On the other hand, when his purpose was to document the injustice to interned Japanese citizens at the Manzanar camp, his people could be warm and human. 

And so, it is the landscapes we remember, and they have become iconic. 

His 1942 image, Moonrise over Hernandez, N.M., sold at Sotheby’s in 2006 for $685,500. 

____________________________________

“You don’t take a photograph, you make it.”

– Ansel Adams

____________________________________

For its first century and a half, photography meant loading light-sensitive film into your camera, calculating focus, f/stop and shutter speed, making an exposure, processing the film in a series of chemical baths to make a negative and then re-exposing that negative onto light-sensitive paper and running it through a series of chemical baths to create a positive image of the subject. It was an intensely physical process, as anyone who remembers the smell of sodium thiosulphate on their fingers will know. 

Now, it means holding up your smart phone and clicking an image and then swiping left or right to go through the results, and maybe sending it out via Instagram or Twitter so others can share it. And the image exists only in virtual form on a screen of pixels, never becoming anything physical — or requiring any specialist knowledge. 

Not that there’s anything wrong with that. But it does mean that the subject of a photo has been separated from the object of the photo itself. For most people, looking at their family snapshots, it has never been otherwise, but for professional photographers and those making photos ostensibly as art, the physicality of photographs and their making is central. 

Before digital, a photograph was two things: the image and the substrate on which the image appears. Most of us, looking at the snapshots of our families, see the people in the image, but pay little attention to the paper or the layer of silver that makes up the image. But in photography regarded as art, a good deal of attention is paid to process and technique. In fact, often so much care is paid to technique that the subject can become ancillary. Who cares if it’s a still life or a portrait, if the gum bichromate print is gorgeous? The subject was just an excuse for the virtuosity of the technique. 

I remember, in the 1970s, long before digital photography, when technique was actually fetishized: If you didn’t process archivally and cut your mats from acid-free board, you couldn’t be taken seriously as a photographer. It gave rise to a certain preciosity. 

That was for black-and-white. Color photography hardly counted. It wasn’t accepted, for the most part, because of the impermanence of the image (you’ve all seen old snapshots turned funny colors with age). The only color permitted was the dye-transfer print — an expensive and cumbersome process. In the 1920s, museums were unwilling to collect any photography because, they reasoned, it wasn’t really art; it was mechanical. Before the 1970s, few museums collected color photography. Black and white was for the serious artist. All this has changed. 

The middle years of the 20th century — roughly from World War I till the advent of Pop Art in the 1960s, give or take — were ruled by Modernism, which proclaimed that the medium was the message, that the paint mattered more than the image. Abstract painting — with no subject matter at all — was king. When someone confused by the jumble of scribble in one of Jackson Pollock’s works naively asked the artist what he was supposed to see on the canvas, Pollock answered curtly: “A painting.” 

From the Renaissance to the middle of the 19th century, art was expected to picture reality. Looking at a picture frame mimicked looking through a window. Yes, there might be unreal things seen there: saints and angels. But portraits and landscapes were conventionally realistic, at least until the Impressionist revolution in the 1860s — and the invention and popularization of photography. 

When French painter Paul Delaroche saw his first daguerreotype, he famously proclaimed, “From today, painting is dead!” Of course painting didn’t roll over and expire; it went on to do other, newer things, and gave up the obligation to render visual reality the way a camera can. Because, although it wasn’t historically seen as such — at least by the masses — painting already was something different from simply an image of the world; it was a thing — an object, an artifact, a physical presence made of pigment and canvas. 

With the Impressionists, and later and more thoroughly with abstract painting, the thingness was the point. And when a few amateur photographers thought to elevate their camera imagery to the level of “art,” they at first imitated paintings, and especially Impressionist paintings. A whole movement of artist-photographers geared up with something they called Pictorialism — fuzzy imitations of fuzzy paintings. 

Then, in the 1920s, roughly, a group of exceptional photographers decided that photographs should not imitate paintings, but should look like photographs, and that photography had its own qualities and virtues. When American photographer Edward Weston was about to publish his first book of images, his publisher wanted to title it “Edward Weston: Artist,” but Weston objected and changed it to Edward Weston: Photographer. He was proud of his status as just that. 

In Europe, Modernist photography tended to be more political, but in the U.S., it became more interested in examining the physicality of the visual world, which meant, above all, landscape. The American tradition in painting had long featured landscape, and now photographers thought they could make landscapes photographic rather than painterly. (They also produced a great number of exceptional portraits and still lifes, but it is landscape that I’m concerned with here.) And the landscapes they chose tended to be either industrial and urban, or the natural unpopulated sections of the American West. 

But while Edward Weston, Alfred Stieglitz, Paul Strand, Charles Sheeler, Edward Steichen — and Ansel Adams — were well aware of their prints being art objects, framed and hanging on gallery walls, the wider public, with their Brownie cameras, had a less sophisticated understanding of the medium: For them, the camera captured reality, preserved memories and made souvenirs of the past; the photograph froze reality and held it still. 

Even today, there are many people who believe photographs pin down the visual truth of their world, not aware of how a lens can distort things, or how different types of film — or now, different microchips — can alter the final image. Lighting, focal length, depth of field, contrast, color temperature and a hundred other technical aspects of photography can govern the final image. For a professional photographer, all of these things are brought to bear on the final created image. For ordinary people, a camera simply registers what they saw, or at least the part of what they saw that was important to them (not seeing, for instance, the tree in the background visually growing out of someone’s head). 

The person who most attempted to regularize the variables of photography was Ansel Adams. He wrote a series of five books (later recast as three) teaching the finer points of making photographs — how the lighting, focal length, depth of field, contrast, etc. affected the final picture. 

He perfected what he called the “Zone System” of exposure and processing to control the contrast and dynamic range of the final photographic print. Simplified, the problem faced was that black ink on white paper has a limited range: The white, under normal lighting conditions, is usually no more than 30 or, at best, 40 times brighter than the black. But when you look at the sunlit scene you want to photograph, the brightest part may be a thousand times brighter than the shadow. How do you squeeze all that into your 30:1 ratio? 

Most photographers and snapshooters just pick what they want to show up best and let the shadows go to solid black, or the highlights bleach out in detailless white. Adams, instead, divided a scene into 10 (or 11, depending how you count) “zones” of brightness, from solid black to solid white, and then controlled the negative’s exposure to match the previsualized zones, knowing that contrast can be increased or decreased in development to fit, Procrustes-style, the whole into the available printing surface. 

(A simplified version of this is the old photographers’ dictum: “Expose for the shadows, develop for the highlights.” Adams’ version is more precise.)
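The arithmetic behind the zones can be sketched in a few lines. This is my own illustration of the underlying idea, not Adams' exact procedure: each zone is one photographic stop (a doubling of luminance), Zone V is middle gray, and a metered luminance lands on the zone given by how many stops it sits above or below middle gray. The luminance values below are made-up numbers for a sunlit scene whose highlights are about 1,000 times brighter than its shadows.

```python
import math

def zone(luminance, middle_gray):
    """Map a scene luminance to the nearest Zone System zone (0-10).

    Each zone is one stop: a doubling of luminance. Zone V (5) is
    middle gray, so a luminance L falls on zone 5 + log2(L / middle_gray),
    clamped to the 0-10 scale of the print.
    """
    stops = math.log2(luminance / middle_gray)
    return max(0, min(10, round(5 + stops)))

# Hypothetical readings: shadows at 1, middle gray at 32, highlights at 1000.
# That span is roughly ten stops -- the full range of zones.
shadow, middle, highlight = 1.0, 32.0, 1000.0
print(zone(shadow, middle))     # deep shadow -> zone 0
print(zone(middle, middle))     # middle gray -> zone 5
print(zone(highlight, middle))  # bright highlight -> zone 10
```

The clamping at 0 and 10 is the Procrustean squeeze the essay describes: anything beyond the paper's range is forced to solid black or blank white unless development alters the contrast.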

The ordinary amateur points a camera, clicks the shutter to capture the image and is satisfied with whatever results. Adams and his fellow artists were hyper-aware of the end product. Adams preached what he called “previsualization,” in which the photographer attempts to imagine what the final print should look like before ever pressing the shutter button. The scene being photographed is just raw material for the final presentation.

 “In my mind’s eye, I visualize how a particular … sight and feeling will appear on a print. If it excites me, there is a good chance it will make a good photograph. It is an intuitive sense, an ability that comes from a lot of practice,” Adams said.

The result is a photographic negative, used to make the final print. 

“The negative is comparable to the composer’s score and the print to its performance. Each performance differs in subtle ways.” Anyone who has followed Adams’ career knows that an earlier print may differ considerably from a later one, just as a young pianist’s performance may mellow and change as the pianist ages. In other words, there is not a “single” true print, but, like a musical performance, a range of them. 

The belief in the veracity of photographs is persistent, even in the face of computer-generated imagery, digital manipulation and fakery. Indeed, that faith has often caused trouble for, say, photojournalists, when a literal-minded editor insists that a photo be printed “unmanipulated.” I have known a photo staff that was forbidden even to alter the contrast of a digital photo, in the credulous belief that the image first recorded in the camera is more “truthful” than the finished one. (That dictum didn’t last; it couldn’t.) The digital file created in a digital camera is like the negative in silver-image photography: only a first step in the process. To forbid the photographer to finish the process, in the mistaken belief that the unmanipulated version is “truer,” is hooey. 

Certainly a photographer in bad faith can use the editing process to distort the end result, but this was true in silver-image photography as well. Digital may make it easier, but it was always possible. You depend on the integrity of the photojournalist not to lie, at least not on purpose. 

As for art photography, since the final product is what is sought rather than a record of something else, there can be no lying, just as there is no lying in fiction. You want a journalist to be truthful, but a novelist is allowed to make it all up. 

In the end, you wind up with an artifact, a thing in itself — a photographic print, a range of black and white, or of colors, making a flat version of a three-dimensional world. The unconsidered understanding of a photograph is that it “captures reality,” but a more sophisticated view is that there are conventional distortions we choose to ignore (a photograph doesn’t move, reality does; a photograph is flat, reality is rounded; a photograph doesn’t make sound, reality won’t keep quiet; a person in a photograph is two inches tall, in reality is six feet — and so on, all mere conventions). 

And so, the artist accepts what he has made as a physical object on its own, with its own expectations and reality. Adams may make images of the Tetons or Yosemite, but, in his best work, it is the print itself that engenders awe. 


I was in bed, having trouble getting to sleep, and so making mental lists instead of counting sheep. I made a list of the CDs I would keep if allowed only one per composer, then if allowed one boxed set for each of the dozen major composers, then… well, it went on and I still couldn’t fall asleep. I had made probably half a dozen lists when I began a list of the most beautiful human-made things: one visual, one musical, one verbal, etc. 

Filling in the list was surprisingly easy, considering how many nominees there were; to my surprise, I had no trouble finding single primary answers. 

I’ve written numerous times that the single most beautiful thing I’ve seen, visually, is the north rose window at Chartres Cathedral. I’ve been there four times (five if you count multiple visits to the cathedral over a two-day visit to the town), and I never fail to fall spellbound by that tumbling wheel of light. Its beauty is not found in how pretty the colors are, but in something transcendent — the intent of the Gothic idea of architecture, that if God is light, then a building that celebrates light celebrates God. Even as a non-believer, I can appreciate that glimpse of eternity. The north window is singular in its design, with its set of 12 diamonds turning over and over as they circle the center, giving an illusion of motion — as of angels dancing around divinity. 

I love all the rose windows I’ve seen, but the north rose of Chartres is the dance of the cosmos. 

And if I had only one piece of music to listen to, it would be Der Abschied, the final half-hour song that finishes Gustav Mahler’s Das Lied von der Erde. Every time I listen to it, I dissolve into a puddle of helpless emotion, filled to the brim with the sense of eternity and the world. I have heard countless versions of Der Abschied — I own more than a dozen recordings — and I have my favorites, but even the least of them leaves me wrung dry. 

Das Lied von der Erde is a set of six songs, supposedly translated from Chinese into German and published, among other poems, in 1907 in a book titled Die chinesische Flöte (“The Chinese Flute”), by Hans Bethge. Mahler set his selection of six to orchestral music so rich as to be fattening. The final song, as long as the first five together, tells of the departure of a friend. The poet confronts the beauty of nature around him as he waits for the friend so they may make their farewells. Each stanza alternates with long orchestral interludes of refined delicacy. 

The music ends — if it can be said to end at all — with lines Mahler wrote himself, perhaps sensing his own imminent death: “Die liebe Erde allüberall/ Blüht auf im Lenz und grünt aufs neu!/ Allüberall und ewig blauen licht die Fernen!” And then, repeated and repeated, ever more quietly and hesitantly, “Ewig … Ewig … … Ewig” — “Forever … Forever … … Forever” — until at last you can barely hear the word, and the music dies.

“The dear earth everywhere/ blooms in spring and grows green anew!/ Everywhere and forever blue lightens the horizon!” and “Ewig … Ewig…” 

These choices came to me almost instantly, without having to think. There are other obvious choices that could be made. Other works of art that are profoundly beautiful, other music nearly as affecting. I have stood rapt in front of the Mary Queen of Heaven at the National Gallery in Washington DC, and been knocked silent by the pears and apples of Cezanne. And nearly as gut-slamming as Der Abschied is Richard Strauss’ Im Abendrot, the final of his Four Last Songs. Or a dozen other paintings and musics. 

As I lay there in the dark, unable to sleep, I rifled through my brain trying to remember a poem that moves me the same way, or any piece of literature: words that leave me drained each time. I went through all the major English poets — and there is plenty of poetry that moves me deeply — and even poems in translation. But the one poem that came back and slapped me upside the head isn’t by Yeats or Wordsworth, but by Carole Steele, my late wife. It is the first poem in her book, 42 Poems.

Carole was the genuine article. And that poem brings me to tears every time. Certainly part of my response comes from the 35 years we spent together, and the overwhelming sense of loss at her death five years ago. But I had the same response when she was alive: This is a poem that makes the connection between the inner and outer worlds; it responds to the physicality of the world in words that startle in their aptness, and combines the directness of childhood with a slant acknowledgement of death, and the awareness that others share in the knowledge of beauty. It isn’t the particular example that counts, but the shared awareness of its existence. 

We may all have different ideas of beauty, and you can each make your own list, but what must be common in all of them is the engagement. Beauty does not work as some passive prettiness outside the psyche. Pretty is not Beauty. Pretty is what is conventional. Beauty is the result of engagement and the creation of meaning. It is an awareness between you and the cosmos, each of the other. It is the recognition, sometimes startling in its suddenness, of the wholeness of it all, of its permanence and its evanescence. 

I have thought about this for more than 70 years. The world is many things; it offers its share of misery, pain and loss, of war and death, but it also affords moments of epiphany, the breakthrough of beauty, like the red glow in the black ashy cracks of a dying fire. 

This can easily devolve into “Raindrops on roses and whiskers on kittens,” but I mean something more difficult. Yes, I resonate to warm spring rain and the crisp, dry, cold and sunny October afternoon. These things are beautiful and they can fill up our emotions to bursting, but only if we actually pay attention. A plain rainy day spent polishing the silverware, or a fall Sunday spent watching football on TV, doesn’t elicit the response. Paying attention does. 

And when the beauty hits, it is not something external or “out there,” and neither is it something merely subjective or “internal,” but rather it is the identification of the two together as a single entity. My awareness of the spring rain brings the rain into my psyche, and my awareness also gives the rain its actuality. It makes it real. Yes, the tree falling in the forest makes a sound, but it doesn’t have meaning unless it is heard. The spring rain may fall whether or not anyone notices, but its existence has meaning only when my awareness and its existence become a single thing. 

It has been said that human consciousness is the universe’s means of self-awareness, that our senses are the mirror for the cosmos. It is what Andrew Marvell meant in his poem, The Garden: “The mind, that ocean where each kind/ Does straight its own resemblance find,/ Yet it creates, transcending these,/ Far other worlds, and other seas…”

Beauty is the amour de soi of the cosmos. Our sense of beauty, in the physical world or in art, its mask and mimic, is our sense of identity with the cosmos. “I am he as you are he as you are me/ And we are all together.” This sense is lost when we act like crabs in a bucket, each out for himself and not recognizing our shared humanity, but also when we fail to recognize ourselves as the conscious portion of the universe. Beauty is the breakthrough. 

What we consider pretty is merely a matter of taste, but beauty is a breaking up of our singularity and an identification, however brief, with totality. 

Surely the most famous piece of Japanese art is The Great Wave off Kanagawa, by Katsushika Hokusai. It has become iconic enough that, like Van Gogh’s Starry Night or Michelangelo’s Creation of Adam, it can be recognized even by those who know nothing about art, and can be fodder for countless parodies. 

The Great Wave is part of a book, published in Japan in 1830, called 36 Views of Mount Fuji, one of the most famous collections in the Ukiyo-e woodblock-print style of popular art in Japan from the 17th century until World War I. The prints were cheap: A single Ukiyo-e image could be bought for roughly the price of a bowl of noodles.

Hokusai (1760-1849) was about 70 years old when he published 36 Views and had been working as an artist since the age of 14. In his 88-year lifetime, he drew, painted and carved something like 30,000 works. Three years after publishing 36 Views, he began signing his work as “Old Man Mad With Painting.” 

As early as 1805, he had begun making pictures with the theme of fishermen in a boat fighting great waves.

And even after the Great Wave, he kept working the theme, including in his black-and-white sketchbook 100 Views of Mount Fuji.

He seems to have begun making the images for 36 Views somewhere about 1826, in the same years that a new European pigment, Prussian blue, was making its way to Japan. And so, some of the 36 Views are in the old style and some use the new blue pigment, which Hokusai seemed to enjoy experimenting with. The Great Wave would not have been possible without Prussian blue. An advertisement for the book emphasized the new color. 

But it is important to pay attention to the other images in the series. It is called 36 Views, but Hokusai couldn’t stop, and later added 10 more images, bringing the total up to 46. They were published and republished multiple times during Hokusai’s life, and each new printing differs slightly from the first, sometimes with different colors, sometimes with new details carved into the woodblock. 

There is no set order for the images, but the ones below are in one of the published sequences. I wanted to post them all for two reasons. First because they give context to the Great Wave, but also because the whole set is great and should be known to anyone who loves art. I have loved them since I first encountered them more than 50 years ago. 

1. Nihonbashi Bridge in Edo

2. The Mitsui Store in Suruga District

3. Suruga Hill, Sundai, in Edo

4. The Hongan-ji Temple in Asakusa

5. The Timber Yard at Honjo

6. Under Mannen Bridge in Fukagawa

7. The Sazai Hall of the Temple of the Five Hundred Arhats

8. The Round-Cushion Pine in Aoyama

9. The Waterwheel at Onden

10. Lower Meguro

11. Snowy Morning in Koishikawa

12. Sunset View of Ryogoku Bridge from Oumaya

13. The Village of Sekiya on the Sumida River

14. Senju in the Musashino Province

15. Distant View of Fuji from the Gay Quarters in Senju

16. Tsukuda Island in Musashino Province

17. The Kazusa Sea Route

18. The Bay at Nobuto

19. Ushibori in Hitachi Province

20. Fuji from Goten-yama in Shinagawa on the Tokaido

21. Great Wave off the Coast of Kanagawa

22. The Tama River in Musashino Province

23. Hodogaya on the Tokaido

24. The Beach of Seven-League in Sagami Province

25. Enoshima in Sagami Province

26. Nakahara in Sagami Province

27. To the Left of Umezawa in Sagami Province

28. The Lake at Hakone in Sagami Province

29. Mishima Pass in Kai Province

30. Fuji from a Tea Field in Katakura, Suruga Province

31. Ono Shindon in Suruga Province

32. Fuji in a Storm

33. The Red Fuji (Fine Wind, Clear Morning)

34. People Climbing the Mountain

35. Ejiri in Suruga Province

36. The Coast of Tago Bay near Ejiri on the Tokaido

37. Fuji from Kanaya on the Tokaido

38. In the Mountains of Totomi Province

39. Yoshida on the Tokaido

40. Fujimigahara in Owari Province

41. Inume Pass in Kai Province

42. Fuji Reflected on Lake Kawaguchi at Misaka in Kai Province

43. Dawn at Isawa in Kai Province

44. Lake Suwa in Shinano Province

45. Kajikazawa in Kai Province

46. The Back of Fuji from Minobu River


Buddhism has its Noble Eightfold Path, and I have my list of Seven Noble Violin Concertos.

There are two basic varieties of concerto in the Western tradition. In one, the purpose is to be pleasing, either through beautiful and graceful melody or by entertaining the audience with the soloist’s virtuosity. 

But the other path — what I’m calling the “noble concerto” — is more symphonic in conception: an estimable composer uses the concerto form to express some deep or profound feeling, and the solo instrument is just a means to do so. 

This is not to disparage the first type of concerto. Two of the greatest and most popular violin concertos fall into this group: the Mendelssohn concerto (certainly one of the most beautiful ever written, and perhaps the only one that could be called “perfect”) and the Tchaikovsky, which, although it is difficult for the performer, cannot be said to plumb the emotional depths. That doesn’t mean it isn’t a great concerto, but its emotional qualities tend to be melodramatic rather than profound. 

The concertos of Paganini are tuneful, also, but mainly exist to show off his digital gymnastics. The concertos of Vieuxtemps, Viotti, and Wieniawski are all adequate but shallow works. Don’t get me started on Ludwig Spohr. Even Mozart’s concertos for violin are more pleasing than profound. That’s all they were ever meant to be, and we shouldn’t ask them to be more. 

Some of these concertos are among my favorites. Beyond the Mendelssohn and Tchaikovsky, I adore the Korngold, the Barber, both Prokofievs and the Stravinsky. I even love both of Philip Glass’s goes at the genre (has he written a third while I wasn’t looking?). I listen to all of them over and over, with great pleasure and satisfaction. So I am not writing them off simply because they don’t make my list of noble concertos. 

The noble concerto doesn’t seek to ingratiate itself. It is not written with the audience in mind, but rather to express the thoughts and emotions common to humanity. These works bear a seriousness of purpose. They may seem more austere, less immediately appealing, but in the long run they reward a lifetime of listening, and in multiple interpretations. You learn about yourself by listening to them. 

I am not including concertos earlier than Beethoven, which means no Bach, no Vivaldi, Tartini, Locatelli or Corelli. In their day, “noble” simply meant a spot in the social hierarchy, a position of privilege unearned but born into. Beethoven changed that, claiming a place for an earned nobility of purpose and ability.

“Prince,” he told his patron, Prince Lichnowsky, “what you are, you are through chance and birth; what I am, I am through my own labor. There are many princes and there will continue to be thousands more, but there is only one Beethoven.” Which would sound like boasting, if he didn’t have the walk to back up the talk. 

Nobility of the kind Beethoven meant was defined in Samuel Johnson’s dictionary as “a scorn of meanness” (that is, of low intention), and an embrace of moral and ethical excellence and personal integrity. 

I call these seven concertos “noble,” but that is a word well out of fashion these days, when anything elevated, whether nobility or heroism or honor, is suspect. The facile use of such words by fascists and totalitarians has made them stink to the mind. Yet the truths of them are still there, and can be found in words, actions, art and literature. And in these seven concertos. 

It’s not that I want to listen to these seven to the exclusion of the others. They each have their place, their purpose and their virtues. But these seven are just more, what — serious. They make more demands on the listener, and provide greater rewards for the effort. A seriousness of purpose. 

—Let’s take them in historical order, beginning with the obvious first choice, the Beethoven Violin Concerto in D, op. 61, from 1806. 

At one time, I owned more than 40 recordings of the Beethoven concerto, which I listened to and studied with the score, and so, I am quite familiar with most of these CDs. I got rid of almost all of them when I moved from Arizona to North Carolina, along with three-quarters of all my classical music collection (now, I’m reduced to little more than 2500 CDs. Weep for me.) 

There have been more than a hundred notable recordings of the Beethoven concerto, from the time of acoustic recording to our streaming present. 

I count five distinct ages of recorded music. The first is the era of the 78 rpm record, when concertos and symphonies had to be spread out over many discs, with odd breaks in between. This was an age of giants: of Fritz Kreisler, Mischa Elman, of Bronislaw Huberman, Albert Sammons, Efrem Zimbalist, Adolf Busch, and, crossing over eras, Jascha Heifetz. 

The second era was that of the LP, both mono and stereo. This was the golden age of classical music recordings, where established stars of an earlier age got to show off their stuff in hi-fidelity, and newer star performers made their names. 

This was followed by the digital era, beginning in the 1980s with the introduction of CDs. A few conductors and orchestras dominated the market — Herbert von Karajan re-recorded everything he had previously done on 78s and on LPs, and not always for the better. 

The new century has been marked by an entire new generation of soloists, better trained and technically more perfectly accomplished than most of the great old names, and they have made some astonishing recordings. What I sometimes suspect is that they lack the understanding and commitment to what the music means, intellectually. Facile and beautiful and technically perfect, but not always as deep. 

And now, we live in an age that overlaps that, of historically informed performance, in which everything is played lighter, faster and punchier — and all the nobility is squeezed out as suspect and as fogeyism. 

From the first era, we have two recordings by Kreisler, from 1926 with Leo Blech conducting and a second from 1936 under the baton of John Barbirolli. You might think the later recording was in better sound, but they are pretty equal in that way. Clearly they are old scratchy recordings, but the brilliance of Kreisler shines through anyway. In many ways, these are my favorite recordings of all. Kreisler has a warmth, a beauty of phrasing and a nobility that is exceptional. 

There is also the Bronislaw Huberman with George Szell from 1934, in surprisingly good sound, and the Jascha Heifetz with Arturo Toscanini from 1940, which some prefer over his later LP one with Charles Munch. 

From the second era, there were many great performances. Four I would never do without: my favorite in good sound, Yehudi Menuhin and Wilhelm Furtwangler from 1953; the second Heifetz recording, from 1955, with Munch; the consensus reference recording, Wolfgang Schneiderhan with Eugen Jochum, from 1962; and, for utter beauty of tone, Zino Francescatti with Bruno Walter, from 1961. 

Also, later in the LP era, some big names with some big sounds: Isaac Stern with Leonard Bernstein (1959); Itzhak Perlman with Carlo Maria Giulini (1981); Pinchas Zukerman with Daniel Barenboim (1977); and Anne-Sophie Mutter with Karajan in her first recording of the work (1979). Any of these is a first-rank performance in good sound, and they define what the Beethoven Violin Concerto should be.

Among the younger violinists, there is plenty of good playing, but fewer deep dives. You still find the old grandeur with Hilary Hahn and David Zinman; clean musicianship with Vadim Repin and Riccardo Muti, and with Kyung Wha Chung and Simon Rattle; and Leonidas Kavakos conducting and playing. 

All the famous fiddlers of the golden age made multiple recordings of the concerto, the Oistrakhs, the Milsteins, the Sterns, the Perlmans and Grumiauxs and Szeryngs. And mostly, they were consistent across performances, with different orchestras and conductors. But Anne-Sophie Mutter re-recorded the concerto with very different results. In 2002, she played it with Kurt Masur and the New York Philharmonic and gave us a complete re-interpretation of the concerto. Some loved it; some hated it; few were indifferent. I love it. 

It is one of several outliers among interpretations. You can always count on Nikolaus Harnoncourt (aka “the Wild Man of Borneo”) to be wayward, and his recording with Gidon Kremer is peculiar for including a piano in the first-movement cadenza. Why? Beethoven didn’t write a cadenza for his violin concerto, but he did write one for his piano transcription of the work, and Kremer used the piano version to reverse-engineer a version for violin, but left in a supporting piano (and timpani) part. That still doesn’t answer the “why.” 

Christian Tetzlaff recorded a version with Zinman and the Tonhalle Orchestra Zurich that, while on modern instruments, is highly inflected by the original-instrument ethos. It is beautiful in its way, but it is fast and zippy.

I also include a personal favorite with little circulation: A budget-label recording by the Hungarian violinist Miklos Szenthelyi. I saw him live and I’ve never seen anyone with such perfect posture or so fine a tone. I can’t recommend it for everyone (and it probably isn’t available anymore, anyway), but I have a soft spot in my heart for it.

—The next big concerto comes some seven decades later, with the Brahms Violin Concerto in D, op. 77, from 1878. Brahms was clearly modeling his concerto and its mood on Beethoven’s. He wrote it for his friend, Joseph Joachim, who was also the violinist who popularized Beethoven’s concerto after years of neglect. 

The Brahms concerto is more genial and has always been popular with audiences. There are as many recordings of it as there are of its predecessor, including a version by Fritz Kreisler that is still worth listening to, through the scratches and clicks of a recording made in 1936. 

Of all of them, these are my favorites, the ones I listen to over and over, drawn from all the different eras of recording. 

The Heifetz is quick, dead-on, energetic and exciting. He is sometimes thought of as cool and unemotional, but I think instead that his playing is white-hot. The Szeryng recording is the one I’ve had the longest and listen to the most — it’s my go-to recording, but that may just be because I’m so used to it that it has imprinted on my mind. The polar opposite of Heifetz is the Oistrakh, which is rich and warm, with Szell providing the secure setting for the jewel violin. Of more recent recordings, the Hilary Hahn is utterly gorgeous. It gives the lie to the myth that only “historical” recordings are great.

—Chronologically, the next in line is Sibelius’ Violin Concerto in D-minor, op. 47, from 1905. If ever music required the ice of Heifetz, it is Sibelius’ concerto, which sounds like a blast from the Arctic. His recording, from 1959, is riveting. But so is the warmer version with Isaac Stern and Eugene Ormandy and the Philadelphia Orchestra from 1969. We may forget what a stunning and brilliant violinist Stern was if we know him only from his later years, when his intonation went south. In the Sibelius, he is one of the great ones.

Of the modern era, Perlman with Andre Previn, from 1979, has all of Perlman’s grand personality and character, with technical perfection. But the one I listen to most, and with total love, is by Anne-Sophie Mutter, also with Previn, from 1995. 

I need to note, somewhere in this rundown, that the list of dependable fiddle stars is just that — dependable. If Mutter is my favorite here, that doesn’t mean that you aren’t getting the goods from Oistrakh, Vengerov, Hahn, Mullova, Bell, Chang, or Zukerman. Whether it is Beethoven or Bartok, you will not likely be disappointed. I am here listing just my own favorites. My own taste. 

—And my own taste runs strongly to the Elgar Violin Concerto in B-minor, op. 61, from 1910. This is a testament to my own growth and change. There was a time when I wouldn’t touch Elgar with a 10-foot phonograph stylus. I found him stuffy and boring. But that was because I hadn’t really heard much of his music. Then I heard Steven Moeckel play the concerto with the Phoenix Symphony and was swept away. I discussed it with Moeckel and he advocated for it with devotion — indeed, it was his insistence that the Phoenix Symphony tackle it that made the performance happen. (Moeckel also has a CD out with the Elgar violin sonata that makes a case for it, too.)

This is the longest concerto on my list, but also one that I have to listen to with complete attention from beginning to end. It speaks to me with a directness that I recognize. It is rich to overflowing and absolutely tears my heart out. 

My go-to recording is also the oldest, by English violinist Albert Sammons, made in 1929 with Henry Wood conducting. It has the greatest breadth and depth of any I have heard, in sound that is not as bad as its birth year would imply. A famous early recording was made by Yehudi Menuhin under the baton of the composer, which should show us how the composer meant it to go — if only Elgar were a better conductor. 

The concerto kind of disappeared after that, until the young fiddler Kennedy (he went by only one name back then) came out with a best-selling version in 1984 with Vernon Handley and the London Phil. It is still Kennedy’s best recording (he went pop soon after). 

But my favorite remains Hilary Hahn and the London Symphony with Colin Davis. Rich, warm, and in truly modern sound, it breaks my heart every time. 

—Speaking of broken hearts, there is no more personal utterance than Alban Berg’s Violin Concerto from 1935. Subtitled “To the Memory of an Angel,” it commemorates the death of 18-year-old Manon Gropius, daughter of Alma Mahler and Walter Gropius. It is also the most listener-friendly piece of 12-tone music ever written, as Berg managed to cross atonality with tonal music in a way so clever that doctoral dissertations are still being written about it. 

In two movements, it is blood-curdling in parts and soul-soothing in others. Every emotion in it seems authentic, not conventional. It is one of those pieces of music you cannot ever just put on in the background — you have to listen, and you have to invest yourself in it completely.

It was commissioned by violinist Louis Krasner, and we have a performance by him conducted by Berg’s colleague Anton Webern, from 1936, which should carry the most bona fides, despite the poor sound quality. 

I first learned the piece listening to Arthur Grumiaux, and his is still one of my favorites. Yehudi Menuhin played it with Pierre Boulez, who brings his own authority to Second Viennese School music.

But the one I listen to now, over and over, is Mutter, with James Levine and the Chicago Symphony. This is serious music for the serious listener.

—At roughly the same time, Bela Bartok wrote his Violin Concerto No. 2, from 1938. In three movements, it was commissioned by Zoltan Szekely and first recorded by him with Willem Mengelberg and the Concertgebouw Orchestra. It is a recording of the world premiere and has authority for that reason alone, quite apart from its being a great performance, although it is hampered by horrible sound.

Much better sounding, and one of the great recordings, is by Menuhin and Wilhelm Furtwangler and the Philharmonia, from 1954. It has always been my reference recording. Good sound for the era and great performance. 

Isaac Stern is also great, with Bernstein, and his performance is usually paired with Bartok’s lesser-known and seldom-performed first concerto, which is youthful and unashamedly beautiful. 

And I wouldn’t be without Mutter’s version, with Seiji Ozawa and the Boston Symphony. 

—It would be hard to choose Dmitri Shostakovich’s best work, but my vote goes to his Violin Concerto No. 1 in A-minor, op. 99, from 1948, coincidentally the year I was born. As personal as the Berg concerto, but with a powerful overlay of the political, and written under the oppression of Stalin, this is probably the most monumental violin concerto since Beethoven’s. When well played (and it is difficult), it drains you of all the psychic energy you can muster. 

The bottom line on all seven of these concertos is that we are meant to listen to them with the same beginning-to-end concentration that we would spend on poetry or on defusing a bomb. They are not “put it on while I do the dishes” music, but life-and-death music. 

And that is what you get with David Oistrakh, who premiered the concerto and was a friend of Shostakovich. He recorded it multiple times, but the first, with Dimitri Mitropoulos and the New York Phil, from 1956, is still the greatest, the most committed. It is the one I listen to when I want to dive deeply into what the music means, and come away shattered with the realization of all the horror of the 20th century. 

Performances by Lydia Mordkovitch and Dmitry Sitkovetsky are in modern sound and also brilliant. Hahn is especially well-recorded, with Marek Janowski and the Oslo Philharmonic. 

But the Oistrakh. If the concerto was personal to the composer, it was to the violinist, too. They both had known it all, seen it all, suffered it all. 

And so, these seven concertos — seven sisters — seem a notch above the rest, in terms of seriousness and execution. You should have all of them in performances that express all the humanity that is packed into them. These are my suggestions; you may have your own. 
