
I’ve been to the Louvre in Paris a number of times, but no matter how long I spend there, I never feel as if I’ve seen more than two percent of it. It is vast. It is the largest museum in the world, with 782,910 square feet of floor space (topping the No. 2 museum, St. Petersburg’s Hermitage, by more than 60,000 square feet) and a collection of more than 600,000 pieces. 

It’s where you go to find the Mona Lisa, the Winged Victory of Samothrace and the Venus de Milo.

It’s one of the oldest museums around, but never seems quite finished. It began as a fortress in the 12th century, later became a royal palace, and has been added on to ever since: parts have burned down, parts have been replaced, and a glass pyramid has even been added to its central courtyard. 

When Louis XIV moved the court from the Louvre to Versailles in 1682, the building became a warehouse for kingly treasures and much of his art collection. In 1699, the first “open house,” or salon, was held, and for a century the royal academy of art was located there. 

The French Revolution ended the monarchy and the art once owned by the king became public property; in 1793, the new government decreed that the Louvre should be open to the citizens as a free art museum. 

But soon after, the collection expanded exponentially, as Napoleon Bonaparte conquered half of the continent, and sent back to Paris a good deal of the art from conquered lands. He even had the museum renamed Musée Napoléon. That didn’t last, but neither did Napoleon. 

Over the 19th century, the museum collection grew from bequests, purchases and colonial expropriations. For a while, it included a whole section of Pre-Columbian art from the New World, but that spun out into its own museum, leaving the Louvre for the Musée d’Ethnographie du Trocadéro in 1887; in 1945, the Louvre’s extensive collections of Asian art were moved to the Guimet Museum; and by 1986, all the museum’s art made after 1848, including Impressionist and Modernist work, was transferred to the Musée d’Orsay, a refurbished railway station. It seemed the Louvre kept bursting its seams. 

Then came François Mitterrand. Serving as French president from 1981 to 1995, Mitterrand conjured up his Grands Projets to transform the cultural profile of Paris, with additional monuments, buildings, museums, and the refurbishment of existing locations. Taxes were raised to accomplish this program, said to be on a scale that only Louis XIV had attempted. 

Part of this plan was the Grand Louvre, meant to remodel and expand the museum, and to regularize (as much as possible) the maze and warren of galleries in the old accretion of palace rooms. The most visible of the changes was the addition of the glass pyramid in the central courtyard of the palace. It was designed by architect I.M. Pei and, although it has long since become part of the landscape of the museum, it still angers many of the country’s more conservative grouches. In 2017, the American Institute of Architects noted that the pyramid “now rivals the Eiffel Tower as one of France’s most recognizable architectural icons.” 

The entire central underground of the courtyard was remodeled to create a new entrance, and to attempt to make sense of the confusion of corridors, rooms, staircases and doorways. It was completed in 1989. 

Now, one cannot think of the Louvre without its pyramid, but speaking as a visitor, while the Hall Napoléon (the underground foyer) has made some sense of the confusion, I cannot honestly claim the chaos has been tamed. The museum remains a labyrinth in which you can easily get lost. 

And, unless you have budgeted a month or more to spelunk the entire museum, you will need to prioritize what you want to see in a visit — or two, or three. 

Quick word: Forget the Mona Lisa. It’s a small painting of modest artistic note, buried under a Times Square-size crowd of tourists all wanting to see the “most famous painting in the world.” It is what good PR will get you. It may be historically noteworthy as one of the very few paintings Leonardo completed, but there is much better to be seen in the museum. Don’t exhaust yourself in the mêlée.

Seek out the unusual, like Jan Provost’s Sacred Allegory, from about 1490, which I like to call “God’s Bowling Ball”; or The Ascension, by Hans Memling, from the same time, which depicts Christ rising into heaven, but shows only his feet dangling from the clouds. There’s some quirky stuff on the walls of the Louvre. 

One of the goals of the museum is to collect, preserve, and display the cultural history of the Western world. This is our art, the stuff we have made for more than 3,000 years, from Ancient Sumer and Egypt, through classical Greece and Rome, whizzing past the Middle Ages and brightening with the Renaissance and the centuries that followed. You get the whole panoply and see what tropes have persisted, the ideas that have evolved, the stuff of our psychic landscape. 

(See how the fallen soldier in Jacques-Louis David’s Intervention of the Sabine Women echoes in Picasso’s Guernica. One way of looking at all cultural history is as an extended conversation between the present and the past. The reverberations are loud and clear.)

You can look at the paintings on the wall and see them for the beauty of their colors and brushwork, or the familiar (or not-so-familiar) stories they depict; or you can see them as the physical embodiment of the collective unconscious. 

I have always been a museum-goer, from my earliest times as a boy going to the American Museum of Natural History in New York, through my days as an art critic rambling through the art museums of the U.S. and abroad. There is little I get more pleasure from. 

One soaks up the visual patterns, makes connections, recognizes the habits of humankind. Recognizes the shared humanity. The differences between me and Gilgamesh are merely surface tics. When I see the hand of the Roman emperor, it is my hand. I feel kinship with all those whose works and images appear in the galleries. 

And so, if it is two percent of the Louvre I have managed to absorb, I know the rest is there, and that it is me, also. 


When I was a wee lad in the 1950s, and television was about the same age, I watched the images on the screen flash by with no critical eye. It was all the same: old movies, kiddie shows, talk shows, variety shows, sitcoms — it all wiggled on the toob and that was enough. 

If there were differences in production quality or acting ability, they made no difference to me. I just watched the story, or listened to the music. The very idea that there were people behind the camera never occurred to me. I didn’t really even think about there being a camera. Things just appeared. I suspect this is true for most kids. It may be true for quite a few grown-ups, too. 

There were certainly programs I liked more than others, but I could not have given any reason why I liked one and not another. Mostly, in the daytime, I watched cowboy movies and cartoons, and in the evening, I watched whatever the rest of the family was watching. 

In all that, there were a good number of Westerns. There were those for the kids, such as The Lone Ranger or The Cisco Kid, and later, those in the after-dinner hours aimed at the grown-ups — Gunsmoke or Death Valley Days. There were also the daytime screenings of old Western movies with such stars as Buck Jones, Hoot Gibson, Bob Steele or Johnny Mack Brown. 

I mention all this because I have recently begun watching a series of reruns of old TV Westerns on various high-number cable channels, seeing them in Hi-Def for the first time. I have now seen scores of original Gunsmoke episodes and my take on them is entirely different from when I was in grammar school. I can now watch them critically.

It’s been 70 years since I was that little kid, and since then I’ve seen thousands of movies and TV shows, served a stint as a film critic, written about movies, and introduced films in theaters. I have a different eye, and understand things I couldn’t know then. 

And so, a number of random thoughts have come to me, in no particular order:

1. 

Old TVs were fuzzy; new TVs are sharp. In the old days of cathode-ray tubes, TV pictures were made up of roughly 480 visible lines running from top to bottom of the screen, redrawn in interlaced halves 60 times a second. Broadcast TV was governed by the NTSC standard (altered slightly over time). Such images were of surprisingly low definition by modern standards. 

The sharpness of those early TV pictures was not of major importance because most people then really thought of television as radio with pictures, and story lines were carried almost entirely by dialog. The visual aspect of them was of minor concern (nor, given the resolution of the TVs at the time, should it have been.)

Of course, now, we watch those old Gunsmoke episodes on HD screens. And two things become apparent. 

The first is that shows such as Gunsmoke were made better than they needed to be. They were made mainly by people trained in the old Hollywood studio system, where such things as lighting, blocking, focus, camera angles and the like were all worked out and professionally understood. They were skilled craftsmen. 

The second, however, is that some things were designed for analog screens. Painted backdrops, used especially for “outdoor” scenes shot in the studio, have become embarrassingly obvious where they would once have passed unnoticed on the fuzzy screen. 

Outdoor scenes were often shot in studios. Dodge City, during some seasons, was built indoors and the end of the main street in town was a backdrop. Again, on the old TVs you would not notice, but today, it’s embarrassing how crude that cheat was. 

You can see it in the opening shootout during the credits. In early seasons, Dillon faces the bad guy outdoors. In later seasons, he’s in the studio. 

2. 

Because Gunsmoke is now seen on widescreen HD sets, but was originally shot for the squarer 4:3 aspect ratio, the image has to be rejiggered for the new screen. There are three ways of doing this. As they show up on current screens, episodes are either shown with black bars on either side of the picture, to retain the original aspect ratio, or they are cropped and spread out across the wider 16:9 space. And if they are cropped, there are two ways this happens. 

If the transfer is done quickly and cheaply, the cropping is done by just chopping off a bit of the top and bottom of the picture, leaving the middle unchanged. The problem is this often leaves the picture awkwardly framed, with, in close-ups, the bottoms of characters’ faces left out. 
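Some rough, back-of-the-envelope arithmetic of my own shows how much gets chopped: a 4:3 frame is 16 units wide by 12 high, while a 16:9 frame of the same width is only 9 units high, so filling the wide screen means throwing away 3 of every 12 lines of the picture, a quarter of the image, or roughly 120 of those 480 visible scan lines. 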

However, in some of the newly broadcast Gunsmokes, someone has taken care to reframe the shots — moving the frame up or down — so as to include the chins and mouths of the characters. To do this, the technician has to pay attention shot by shot as he reframes the image. 

And so, it seems as if the Gunsmoke syndications have been accomplished either by separate companies, or at different times for different series packages. You can see, for instance, on the INSP cable network, examples of all three strategies. (My preference, by far, is for the uncropped original squarer picture.)

3. 

Gunsmoke changed over its 20-year TV run. There are three main versions: black and white half-hour episodes (1955-1961); black and white hour-long episodes (1961-1966); and hour-long color episodes (1966-1975). The shorter run times coincided with the period when Dennis Weaver played Matt Dillon’s gimpy-legged sidekick Chester Goode. Chester continued for a few seasons into the hour-longs, but was eventually replaced by Ken Curtis as Festus Haggen, the illiterate countrified comic relief. 

Technically, the black and white seasons were generally better made than the color ones. When the series began, TV crews were largely people who had previously worked in cinema, and they brought over what they had learned about lighting, framing, editing, blocking and the use of close-ups. The black and white film stock allowed them to use lighting creatively, using shadows to good effect, and lighting faces, especially in night scenes, with expressive shadows. Looking at the older episodes, I often admire the artistry of the lighting. 

But when color came in, the film stock was rather less sensitive than the black and white, and so the sets generally had to be flooded with light for details to register. This led to really crass, generic lighting. Often — and you can really spot it in night scenes — a character will throw two or three shadows behind him from lights blasting in different directions. Practicality drowns artistry. 

Gunsmoke wasn’t alone in this: This bland lighting affected all TV shows when color became normal. It took decades — and better film stock — before color lighting caught up. (One of the hallmarks of our current “golden age” of TV is the cinematic style of lighting that is now fashionable. Color has finally caught up with black and white.) 

4. 

One of the pleasures of watching these reruns is now noticing (I didn’t when I was a little boy) the repertory company of actors who showed up over and over again, playing different characters each time. 

I’m not just talking about the regular actors playing recurring roles, such as Glenn Strange as Sam the barkeep or Howard Culver, who was hotel clerk Howard Uzzell in 44 episodes, but those coming back over and over in different roles. Victor French was seen 18 times, Roy Barcroft (longtime B-Western baddie) 16 times; Denver Pyle 14 times, Royal Dano 13 times, John Dehner, John Anderson and Harry Carey Jr. a dozen times each.

Other regulars with familiar faces include Strother Martin, Warren Oates, Claude Akins, Gene Evans, Harry Dean Stanton, Jack Elam. Some were established movie actors: George Kennedy, Dub Taylor, Pat Hingle, Forrest Tucker, Slim Pickens, Elisha Cook Jr., James Whitmore. Bette Davis, too. 

William Shatner, Leonard Nimoy, DeForest Kelley, James Doohan

And a few surprises. Who knew that Leonard Nimoy was in Gunsmoke (four times)? Or Mayberry’s George Lindsey (six times, usually playing heavies; you realize that the goofy Goober Pyle was an act — Lindsey was an actor, not an idiot)? Mayberry’s barber, Howard McNear, showed up six times. Jon Voight, Carroll O’Connor, Ed Asner, Harrison Ford, Kurt Russell, Suzanne Pleshette, Jean Arthur, DeForest Kelley, Werner Klemperer (Colonel Klink), Angie Dickinson, Dennis Hopper, Leslie Nielsen, Dyan Cannon, Adam West, and even William Shatner — all show up. 

It becomes an actor-spotting game. John Dehner, in particular, was so very different each time he showed up: once as a grizzled old miner, again as a town drunk, then as an East Coast dandy, then as a hired gunslinger — almost never looking or sounding the same. “There he is, Dehner again!” 

And it makes you realize that these were all working actors, needing to string together gigs to make a living, and the reliable actors would get many call-backs. It is now a pleasure to see how good so many of these old character actors were. 

5. 

I have now watched not only Gunsmoke, but other old TV Westerns, and the quality difference between the best Gunsmoke episodes and the general run of shows is distinct. While I have come to recognize the quality that went into the production of Gunsmoke, most of the other shows, such as Bonanza, simply do not hold up. They are so much more formulaic, cheaply produced, and flat. Stock characters and recycled plots. 

Gunsmoke was designed to be an “adult Western” when it was first broadcast, in 1952, as a radio show, with stocky actor William Conrad as Matt Dillon. In contrast to the kiddie Westerns of the time, it aimed to bring realism to the genre. 

William Conrad as Marshal Dillon

It ran on radio from ’52 to 1961, and on TV from 1955 to 1975, and then continued for five made-for-TV movies following Dillon in his later years. There were comic books and novelizations. Dillon became a household name.

Originally, Matt Dillon was a hard-edged, lonely man in a hard Western landscape. As imagined by writer and series co-creator John Meston, the series would overturn the cliches of sentimental Westerns and expose how brutal the Old West was in reality. Many episodes were based on man’s cruelty to both men and women. Meston wrote, “Dillon was almost as scarred as the homicidal psychopaths who drifted into Dodge from all directions.” 

On TV, the series mellowed quite a bit, and James Arness was more of a solid hero than the radio Dillon. But there was still an edge to the show, compared with other TV Westerns. After all, according to True West magazine, Matt Dillon killed 407 people over the course of the TV series and movie sequels. He was also shot at least 56 times, knocked unconscious 29 times, stabbed three times and poisoned once. 

And the TV show could be surprisingly frank about the prairie woman’s life and the painful treatment of women as chattels. 

In Season 3 of the TV series, in an episode titled “The Cabin,” two brutal men (Claude Akins and Harry Dean Stanton) kill a woman’s father and then serially beat and rape her over the course of 35 days, until Dillon happens upon the cabin to escape a snowstorm. The thugs plan to kill the marshal, but he winds up getting them first. When Dillon suggests that the woman can now go back to living her life, the shame she feels will not let her. No one has to know what has happened here, he tells her, but, she says, she will know. And so she tells Dillon she will go to Hays City, “buy some pretty clothes” and become a prostitute. “It won’t be too hard, not after all this,” she says. 

“Don’t let all this make you bitter,” Dillon says. “There are a lot of good men in this world.”

“So they say,” she says. 

This is pretty strong stuff for network TV in 1958. There were other episodes about racism, and especially in the early years, not always happy endings. 

Dodge City, 1872

6. 

According to Gunsmoke producer John Mantley, the series was set arbitrarily in 1873, in Dodge City, Kansas, on the banks of the Arkansas River, although the river plays only a scant role in the series. In 1873, the railroad had just arrived, although the train is mentioned in only a few episodes of the TV series. 

The Dodge City of the series is really just a standard Hollywood Western town, with the usual single dusty street with wooden false-front buildings along either side. 

In reality (not that it matters much for a TV show, although Gunsmoke did try to be more realistic than the standard Western), Dodge was built, like most Southern and Western towns, with its buildings all on one side of the street (called Front Street in Dodge) and the railroad tracks on the other. Beyond that, the river. 

Dodge City, Kansas 1888

And in general, the geography of Gunsmoke’s Kansas would come as a surprise to anyone visiting the actual city. The state is famously flat, while the scenery around Matt Dillon often has snow-capped mountains, and at other times, mesas and buttes of the desert Southwest. 

Hollywood’s sense of geography is often peculiar. So, I don’t think it is fair to hold it against the TV series that its sense of the landscape has more to do with California (where the series was generally shot) than with the Midwest prairies. 

I remember one movie where James Stewart travels from Lordsburg, N.M., to Tucson, Ariz., and somehow manages to pass through the red rocks of Sedona on the way. Sedona is certainly more picturesque than Willcox, Ariz., but rather misplaced.

Or John Ford’s The Searchers, where the Jorgensen and Edwards families are farming in Monument Valley, Ariz., which has no water, little rain, sandy soil and no towns within a hundred miles. It is a ludicrous place to attempt to farm. Of course, it is said, in the movie, to be set in Texas, but Texas doesn’t look like the Colorado Plateau at all. 

We forgive such gaffes because the scenery is so gorgeous, and because we’ve been trained by decades of cowboy movies to have a picture of “The West” as it is seen in Shane rather than how most of it actually was: flat, grassy, and boring. And often, it is not even the West, as we think of it. Jesse James and his gang robbed banks in Missouri. The Dalton Gang was finished off in Coffeyville, Kansas. The “hanging judge” Parker presided in Arkansas. Some of the quintessential Western myths are really Midwestern or Southern. 

So, many of the tropes of Hollywood Westerns still show up in Gunsmoke, despite its attempt at being “more realistic” than the standard-issue cowboy show. 

“Gunsmoke” studio set

There are two things, however, hardly ever mentioned, that seem germane to the question of realism on film. The first is the pristine nature of the streets. Historians have shown that, with all the horses, not only in Western towns but even in 19th-century Manhattan, the streets were paved with horseshit. Cities even hired sanitation workers to collect the dung in wheeled bins, so as not to be buried in the stuff. 

OK, I get that perhaps on TV shows broadcast into our homes, we might not want to see that much horse manure. In reality, the dirt dumped in the studio set of Dodge City had to be cleaned out, like kitty litter, each day, or under the hot lights, the whole set would stink of horse urine. 

But the second issue relates to the very title of the show: Gunsmoke. Strangely, smoke never appears from the many guns being fired in the course of 20 years of episodes. But the series is set in an era at least a decade before the invention of a practical smokeless powder (and 30 years before its widespread usage). And so Matt Dillon’s gun should be spouting a haze of nasty smoke each time he fires at a miscreant. 

Me firing a black powder rifle

We know, from records of the time, that Civil War battlefields, and before that, Napoleonic battlefields, were obscured by clouds of impenetrable smoke, blocking the views of soldiers aiming at each other. And I know from my own experience firing black powder weapons that each shot spews a cloud of smoke from the barrel. So, why no gun smoke on Gunsmoke?


This essay, now updated and rewritten, first appeared as my June 2020 entry for the Spirit of the Senses website. 

When most of us think about our “selfness,” if we ever do, we most likely think of something interior — a psychic identity. That self is an accretion, a slow buildup of experience that memory binds into a continuous story. 

But we are not purely interior beings. We live in a physical world and our selves expand into every corner of our existence. My selfness is where and how I live, who I surround myself with, the items I buy at the grocery store, whether I wash and polish my car, or leave it to the elements. All me. 

I finished college 50 years ago, and I have changed a great deal in that half-century, and I don’t just mean the issue of losing hair on the top of my head and gaining it in my ears.

But much has remained the same. And what has remained is what I take as the essence of my self, who I am. For most writers who tackle the subject, the self is defined primarily by memory: The continuous thread of remembering from our earliest recollection to the moment an instant before this. This continuity is our self. It is what we have held onto. It remains separate from what others believe about us or their perception of our who-ness.

There is something very insubstantial about this thread of memory. After all, the past doesn’t exist; it is a reconstruction, not an actuality. And so, for many thinkers, the self is also a construction — a back-construction. We are reminded of this when we meet old friends and talk about “remember when,” and discover that our friend’s remembering is different from our own, or that they remember things we have long forgotten.

Surely the self is more than our own cogito ergo sum, recalled in memory. It is also our behavior, the sense we make of the world and how it is constructed and how it functions. It is not simply our past, but our expectations of a future. And there should be some outward manifestation of our selfness, not solely the interior rattling around of snippets of memory, strung together like a necklace of remembered events. Self is continuity. 

I began to think of such things when I woke one morning and sat on the side of the bed, facing the bookshelf on the wall in front of me. I happened to spot the slim volume of The Elizabethan World Picture by E.M.W. Tillyard, an ancient paperback that I had in college. It is a book I’ve owned for more than 50 years. It is where I first encountered the idea of the “Great Chain of Being.”

Then, I gazed over the shelves to discover if there were other books I’d owned that long, and saw Julia Child’s Mastering the Art of French Cooking, which I attempted to cook from during my first marriage, when I was still in college. Are those two books as much a part of my selfness as the memories of the old school or the failed marriage?

As I wandered through the house later that day, I pored over the many bookshelves to seek the books I’ve owned the longest, through divorces and break-ups, through four transcontinental relocations, through at least a dozen homes I have rented in five different cities. Nine cities, if you count the homes from before college, which I didn’t rent, but lived in with my parents.

The oldest book I have owned continuously is my great-grandmother’s Bible, which was given to me when I was four years old. I also have my grandmother’s Bible, in Norwegian, and the Bible my parents gave to me when I was a boy, with my name embossed on the cover in gold. I am not religious and don’t believe any of the content scribed therein, but I also have to recognize that the culture that nurtured me is one founded on the stories and strictures bound in that book, and more particularly, in the King James version, which I grew up on and which has shaped the tone of the English language for 400 years.

Surely, completely divorced from doctrine, the KJV is a deeply embedded part of who I am. In this sense, my self extends well back beyond when I was born. Roots are deep. 

The second oldest book is one my grandmother gave me on my eighth birthday, a giant-format Life magazine book called The World We Live In. It was a counterbalance to the Holy Writ, in that it was a natural history of the world and gave me science. At that age, I was nuts about dinosaurs (as many young boys are in the third grade), and The World We Live In had lots of pictures of my Jurassic and Cretaceous favorites. It also explored the depths of the oceans, the mechanisms of the weather, the animals of the forest, the planets of the solar system, and a countering version of the creation of the world, full of volcanoes and bombarding meteorites. I loved that book. I still love it. It is on the shelf as a holy-of-holies (and yes, I get the irony).

Both the Bible and The World We Live In are solid, tangible bits of my selfness that I can touch and recognize myself in, as much as I recognize myself in the mirror.

I pulled down Tillyard from the shelf, and gathered up the several Bibles and began a pile by my desk, and went through the bookshelves finding the many books that have defined me and that I kept through all the disruption that life throws at us, with the growing realization that these books are me. They are internalized and now their physical existence is an extension of my selfness into the world.

The pile beside my desk slowly turned into a wall, one stack next to another, building up a brick-foundation of me-ness. They were cells of my psyche, very like the cells of my body, making up a whole. And they began to show a pattern that I had not previously noticed. The books I’ve held on to for at least 50 years sketched a me that I knew in my bones.

I’ve kept books from 40 years ago, from 30, from 20. I’ve got books that define me as I am at 75 years old that I have bought in the past month. But the continuity of them is a metaphor for the continuity of my self.

When I was just out of college, a neighbor of my parents died and left me a pile of old books, printed in the 18th and early 19th centuries. There are three volumes of the poetry of William Cowper, a History of Redemption by Jonathan Edwards, a fat volume with tiny print collecting the Addison and Steele Spectators, and a single volume of Oliver Goldsmith’s History of the Earth and Animated Nature. I have Volume IV of five volumes, which contains descriptions and illustrations of birds, fishes and “Frogs, Lizards, and Serpents.”

And while my great-grandmother’s Bible gives me a sense of roots running four generations deep, these older books take those roots deeper into the culture that made me. I see myself not as a single mind born in 1948, but as part of a longer-running continuity back in time. A reminder that any single generation is simply a moment in a process: seed, sprout, plant, flower, fruit, seed. Over and over. My self grew from my mother’s womb and she from her mother’s and so on, back to a mythical primordial Eve. And my psyche grew from all the books I’ve read, and all the books that have shaped the culture that produced those books. It is a nurturance that disappears in the far distant past, like railroad tracks narrowing to a point on the horizon.

I am not here making an argument for nurture vs. nature. I am not simply the sum of the books I’ve read. Rather, the books I’ve read that have remained with me — and there are many times more that have not stuck with the same tenacity — have not only nurtured me, but are the mirror of who I was born, my inner psyche, who I AM. They are the outward manifestation of the inward being.

I have books left over from college, such as my Chaucer and my Shelley, my Coleridge and my Blake.

I have the poetry I was drawn to when first discovering its linguistic and cultural power, such as all the Pound I gobbled up.

There are the two volumes of Beethoven’s piano sonatas, edited by Artur Schnabel. I could never be without them. I read scores for pleasure just as I read words. I still have piles of Kalmus and Eulenburg miniature scores that I have used over the years to study music more minutely than ears alone can permit.

Books that have turned the twig to incline the tree stay with me, such as Alan Watts’ The Way of Zen, or the Daybooks of photographer Edward Weston, or The Graphic Art of the 18th Century, by Jean Adhémar.

I still have the Robert Graves two-volume Greek Myths that I had when taking a Classics course my freshman year, and the Oxford Standard Authors edition of Milton that I took with me in my backpack when I tried to hike all of the Appalachian Trail (“tried” is the operative word), and the photographic paperback version of the Sierra Club book, In Wildness Is the Preservation of the World.

My many Peterson Guides and wildflower books have only multiplied, but the basics have been with me for at least five decades.

The Thurber Carnival I still have was actually my mother’s book that I took from home when I went off to school. The catalog from the National Gallery of Art in Washington D.C. is now browned out and tattered and the Hokusai manga is another holy of holies.

All these have stuck to me like glue all through a life’s vicissitudes, many with ragged and torn covers, as I have myself in a body worn and torn by creeping age.

I could name many more, but you get the idea. And it is undoubtedly the same for all of us. For you, it may not be books; it might be a shirt or blouse you have kept, or maybe a blanket that comforted you when you were an infant, or your first car. These are the outward signs of an inner truth. The you who is not separate from the world, but embedded in it, connected to it, born from it and in some way, its singular manifestation.

Self is what you can’t get rid of. 

NB: The books illustrated are all ones I’ve lugged with me for at least 50 years; anyone who knows me would recognize me in them. 


I have lived in all four corners of this country: in the Northeast until I was 17; in the South — for the first time — until I was 30; in the Northwest for a bit more than a year; back in the South until I turned 39; in the Southwest until I was 64; and back, finally, to the South. I am now 74. And so, I’ve lived in the American South longer than anywhere else, and while that does not give me the right to consider myself a Southerner (you have to be born here for that — maybe even your granddaddy had to have been born here for that), I have come to have a complex and conflicted love for the region. The South has a mythic hold on the psyche that no other region can match. 

Oxford, Mississippi

Perhaps the biggest problem in dealing with the American South is that there is no good way to separate the reality of it from its mythic power. Other regions have their myths, too, for sure. There is a Puritan New England, and there is the Wild West, but both of those have an element of legend to them — they are made up of familiar stories, whether of pilgrims debarking at Plymouth Rock, or Wild Bill Hickok playing aces and eights. These are stories that get repeated and we presume they tell us something about the character of the inhabitants of these regions. But the South is not built of stories, but of myth, which is another thing entirely. 

There is something external about stories and legends; myth is born from that place in the psyche that Carl Jung called “the shadow.” Stories are told; myth is felt. It is something profound but unexamined — it is the sense of significance, of meaning, even if we cannot exactly put our finger on any specific meaning — the way a dream can feel significant, even if we don’t know why. 

Windsor Ruins, Mississippi

And there are at least four conflicting myths about the South, which can overlap. There is the “moonlight and magnolias” South, which is now and has always been bullhockey; there is the redneck South, riven with poverty, ignorance and superstition; there is the Black South, which has its own subdivisions. And then, of course, there is the “New South,” with its Research Triangles and its civic progress. 

Yanceyville, North Carolina

The first is the bearer of the Lost Cause, a self-deluded sense that the Old South was a place of gentility and honor; the second includes both the rural farm South and the Appalachian hillbilly; the third is a counterweight to both of the first two, and yet is also the power grid on which the first two run — it is there behind all of it. 

And lastly, if you have ever watched a new butterfly wriggle slowly, struggling out of its chrysalis, seeming to be stuck halfway, then you have a pretty good image for the New South trying to leave behind the problems of the Old. The Old is unwilling to let go. 

Because history is the foundation of Southernness. 

 

Zebulon Vance birthplace, North Carolina

When I first arrived in the South, in 1966, one of the first things I saw on driving into the campus of Guilford College, in Greensboro, N.C., was a giant banner hanging out of a third-story window of my dorm, a bedsheet with “Forget? Hell!!” scrawled across it in gigantic letters. It was my introduction to the sense of grievance that has ridden the back of the South since the Civil War. It is a sense of being put upon by others, of having been defeated despite the assumed bravery, honor and courage of the soldiers attempting to protect the South and its heritage. Of course, this is all myth, but myth is a powerful driver. 

In Homer’s Iliad, when two soldiers meet on the fields outside Troy and are about to beat each other into bone-snapping pulp, they first stop to tell each other their genealogy. 

“And the son of Hippolochus answered, ‘Son of Tydeus, why ask me of my lineage? … If, then, you would learn my descent, it is one that is well known to many. There is a city in the heart of Argos, pasture land of horses, called Ephyra, where Sisyphus lived, who was the craftiest of all mankind. He was the son of Aeolus, and had a son named Glaucus, who was father to Bellerophon, whom heaven endowed with the most surpassing comeliness and beauty’…” And this goes on for another 30 lines, explaining the history of his family from its origin among the gods. No one is merely an individual, but rather the tail-end of a long history, known to both the warrior himself and to his foe. 

Bell Family, Mayodan, North Carolina

This sense of history is rife in the American South, too, and the Civil War takes the place of the heights of Ilion. 

My late wife, Carole Steele, was born in North Carolina and learned about the war first-hand from her great-grandmother, Nancy Hutcherson Steele, who was 10 when it began. She had plowed the fields during the war while her father and brothers were away fighting. When she died at the age of 98, she did so in my wife’s childhood bed in a small house on the banks of the Dan River. Carole was 8 at the time.

Steele family, just after Civil War

The confluence of childhood and history formed the seed of the poetry she wrote. The blood in her veins was the blood in her father’s veins, in her grandfather’s and great-grandfather’s. History, blood and identity flowed like a river. 

“My father’s blood is always a river/ rushing to his mind/ igniting diamonds,” she wrote. She called the sense of history being alive in the genes the “long man,” an identity stretching across centuries. 

Carole described the feeling in several of her poems. One describes the feeling of being in the South and reads, in part, “It was for the wasps/ singing in the rotten apples/ under the trees,/ the sweetish smell/ of rabbit guts and/ frozen fur stuck to the bloody/ fingers/ and frost on the stubble,/ the dipper and the well,/ tobacco juice in the privet hedge,/ and liquid night/ the muted rumble/ of old voices/ at the kitchen table/ drifting up the wooden stairs”

I learned from Carole and her family that there is usually a deep sense of belonging that Southerners feel: The second pillar of Southernness is place, and what is more, place and history are almost the same thing. A genuine love of the patch of ground where they grew up, a love like you feel for a parent. It is a love of where you were born that may not extend beyond the town or county and maybe the state. But for Carole, Rockingham County was where her father and grandfather were buried. Another poem ends: “your Daddy is a fragrance/ gathered in the peach trees/ over there.”

That fact alone meant there was an unseverable umbilical connection to that omphalos, that tiny patch of Piedmont, those trees, those creeks and rivers, those very weeds that crept over the edges of the crumbling pavement on the back roads. It is the feel of the red clay between your fingers, the blackbirds roosting by the hundreds in the oak tree. Home. 

And, in the meantime, the blood of countless slaves and freedmen enlarged the tragedy of the South. There were lynchings and later the violence of the civil-rights movement.

Mobile, Alabama

It isn’t only rancor and slaughter that give the South its sense of history, but the land itself. You can stand in a cornfield in rural Sprott, Ala., 25 miles north of Selma, and see the stand of trees at its border, knowing the trees are no more than 60 years old. And that before those trees began filling in the countryside, there were cotton fields, sharecroppers and poverty. A dilapidated wooden shack sits in the middle of the woods, and you wonder why anyone ever built there.

Then you recognize they didn’t. The sharecroppers’ home — just like those written about by James Agee in his Let Us Now Praise Famous Men of 1941 — was built by a cotton field, but times change and history presses on and the fields are now woods.

Sprott, Alabama

There is history elsewhere in the country, too: Bunker Hill, Mass., Fort Ticonderoga in New York or Tombstone, Ariz. But they are singular places you go to visit — somebody else’s history. The South is so full of history that its land and people seem buried under the sense of it.

The first democratic legislature in the New World was Virginia’s House of Burgesses. The author of the Declaration of Independence was a Virginian. And the Revolutionary War came to a close at Yorktown, Va.

Shiloh battlefield, Tennessee

Each state has its Civil War sites, where thousands of its men are buried. There are the street corners where civil-rights workers were hosed and beaten by police. Cotton fields where slaves were whipped. It is interesting that the one place in the country where Black and White share the most is the South.

For most Americans, history is a story told in a schoolbook. It seems removed from the lives we live. For most Southerners, history is something their grandparents did, or something that was done to them.

And I, of course, have come late to this epic, first in 1966 when segregation was officially illegal but still largely in effect. The local barber shop would not cut a Black man’s hair; “We were not trained how to,” the barber explained, weakly and not very convincingly.

Jim Crow was so unconsciously buried in the White brain that a local ministry could, with no irony, proudly boast that it offered help and aid to “the alcoholic, the prostitute, and the Negro.” 

 

After graduating from college, I eventually found work writing for the Black weekly newspaper in Greensboro, N.C., The Carolina Peacemaker, where I found myself writing editorials for the city’s Black population. It felt strange to do so, but I never felt less than completely welcome. When I visited the African Methodist Episcopal church, I was invited in with a warmth I never felt in New Jersey — and, I might add, magnificently fed in the church basement after the service. Clearly the resistance to change in the South was a one-way thing. 

My daughter, Susie, who is also a journalist and has worked at daily papers in Jackson, Miss., and Mobile, Ala., also started on a Black weekly, the Jackson Advocate in Mississippi, where she had the same experience of welcome and inclusion that I did. 

I did not find that sense in 1967 when I and a few of my college friends attended a Ku Klux Klan rally in Liberty, N.C. There, the sheriff of Forsyth County gave the keynote harangue with tales of Africans feeding their babies to crocodiles, and how Africans still had the “stub of a tail.” The smell of alcohol was pervasive, and the festivities ended with the circling and burning of a 30-foot cross, built of intersecting phone poles set alight with poured kerosene. Meanwhile, a scratchy recording of The Old Rugged Cross played on a miserable loudspeaker system. 

Later, I covered the follow-up to the 1979 Klan shootings in Greensboro, in which Klan members and American Nazi Party members were acquitted of the killings of five protesters. The city police were accused of colluding with the Klan, and 25 years later, the city apologized. So, the recent rash of police violence against people of color comes as neither surprise nor shock to me. 

Yet, I love the South and choose to live here. It fills my mythic life also. In the 1970s, it was the Eden from which I was exiled. I was setting roots and rhizomes in the soil of the house I shared with the woman I expected to grow old with. It was the paradise garden: In the front yard was an Yggdrasil of a shaggy, ancient black walnut tree, covered in moss. In the back yard was a pecan tree. There were two fig trees from which we ate fresh figs. There was a vacant lot next door with an old pear tree. A chinaberry grew on the street side. And a proud row of the most brilliant red maple trees along the road, changing reds throughout the year — buds, flowers, leaves, branches, each with their own ruddy glow. 

There were lilacs beside the house, wild Cherokee roses along the driveway, random chicory spreading blue along the foundation. Between our yard and the vacant lot, I counted more than a hundred species of weed — or rather, wildflower — with my Peterson Guide. I grew a vegetable garden with beans, peppers, eggplants, okra and tomatoes. 

There were mockingbirds that I trained to whistle, pileated woodpeckers that would climb the pecan tree. Crows, owls, cardinals, sparrows, redwing blackbirds, the rare ruby-throated hummingbird. Circling overhead were buzzards and hawks. There were butterflies and beetles. Ants highwayed up and down the walnut tree. A luna moth sat on the screen door. 

We lived there for seven years, digging our feet deeper into the soil, until the Archangel Michael came brandishing his sword: My love left me suddenly and I left the house. And I left the South. 

When I returned, some years later and bearing with me a numbed depression, I was taken in by my college friend and his wife, and a second, shadow-Eden was set in Summerfield, N.C., in an old house with only a wood stove for heat, and three great ancient oak trees in the back. I walked through the woods behind the house and into a small ravine — the petit canyon — and soaked my loss in the loam and leaf litter. 

The thing about depression and myth is that they play into each other. It isn’t so much that depression makes you the center of the universe, but that it wipes away everything else, leaving only yourself and your loss. You are forced to experience your life at a mythic level and for me that meant the land, its history and its people. 

New River, Ashe County, North Carolina

I recovered, and moved to the Blue Ridge Mountains when Carole invited me. The house was on a bluff above the New River, with a dark green patch of pine trees on the hill and an unmowed grassy field on the other side of the house. I could stand at the kitchen sink, doing dishes, and watch the weather shift over the peak of Mt. Jefferson, five miles off to the north. 

Ashe County, North Carolina

Together we moved to Virginia, where Carole taught in Norfolk and I taught in Virginia Beach. Six years there, with much travel around the country. When Carole got a job offer in Arizona, we moved, lived in the desert for 25 years and when we both retired, moved back to North Carolina, to be near our daughter. 

Swannanoa Mountains, Asheville, North Carolina

It’s been 10 years now, and five since Carole died, and I have hunkered down in Asheville, at the foot of the Swannanoa Mountains, and feel as if I am where I belong. The trees and birds, the weeds and the occasional wandering black bear, the snow on top of the hills, the barbecue joints and auto parts stores. 

Age has a way of deflating myth. When I was in my 20s, the world seemed aglow, lit from within by a kind of mythic importance. The South had that glow: its people, its landscape, its history. I have come back to the South after a quarter-century in the desert. It has lost some of its oneiric power, as, indeed, the world has in general. But the South feels comfortable and human and my children and grandchildren all live here and I cannot imagine living anywhere else. I burrow in and pull its blankets around my shoulders. 


I have no belief in ghosts, spirits or Ouija boards, and I don’t believe that the past hangs on to the present to make itself palpable. But I have several times experienced a kind of spooky resonance when visiting certain famous battlefields. 

The thought re-emerged recently while watching a French TV detective show that was set in Normandy, and seeing the panoramas of the D-Day landing beaches. I visited those beaches a few years ago and had an overwhelming rush of intense sadness. It was inevitable to imagine the thousands of soldiers rushing up the sands into hellish gunfire, to imagine a thousand ships in the now calm waters I saw on a sunny day, to feel the presence in the concrete bunkers of the German soldiers fated to die there manning their guns. 

The effect is entirely psychological, of course. If some child with no historical knowledge of the events that took place there were to walk the wide beach, he would no doubt think only of the waves and water and, perhaps, the sand castles to be formed from the sand. There is no eerie presence hanging in the salt air. The planet does not record, nor for that matter much note, the miseries humans have inflicted on each other for millennia. 

But for those who have a historical sense, the misery reasserts itself. Imagination brings to mind the whole of human agony. 

Perhaps I should not say that the earth does not remember. It can, in certain ways. Visiting the woods of Verdun, I saw the uneven forest floor, where the shell craters have only partially been filled in. The trees were once flattened by artillery, leaving a moonscape littered with corpses. The trees have grown back, but the craters are still discernible in the wavy forest floor. 

This sense came to me first many years ago visiting the Antietam battlefield in Maryland. There is a spot there now called Bloody Lane. Before Sept. 17, 1862, the brief dirt drive was called the Sunken Road, and it was a shortcut between two farm roads near Sharpsburg, Md. All around were cornfields rolling up and down on the hilly Appalachian landscape.

The narrow dirt road, depressed into the ground like a cattle chute, now seems more like a mass grave than a road. And it was just that in 1862, when during the battle of Antietam Creek, Confederate soldiers mowed down the advancing Federals and were in turn mowed down. The slaughter was unimaginable.

You can see it in the photographs made a few days after the battle. The soldiers, mostly Southerners, fill the sunken road like executed Polish Jews. It was so bad that, as one Union private said, “You could walk from one end of Bloody Lane to the other on dead soldiers and your feet would never touch the ground.”

Even today, with the way covered with crushed blue stone, the dirt underneath seems maroon. Perhaps it is the iron in the ground that makes it so; perhaps it is the blood, still there after 160 years.

Antietam was the worst single day of the Civil War. Nearly 23,000 men were killed or wounded. They were piled like meat on the ground and left for days before enough graves could be dug for them. There were flies, there was a stench. The whole thing was a fiasco, for both sides, really.

But all these years later, as you stand in Bloody Lane, the grassy margins of the road inclining up around you and the way lined with the criss-cross of split-rail fencing, it is painful to stand in the declivity, looking up at the mound in front of you, covered in cornstalks on a mid-July day. You can see that when the Yankees came over the rise, they were already close enough to touch. There was no neutralizing distance for your rifle fire to travel, no bang-bang-you’re-dead, no time, no room for playing soldier. Your enemy was in your face and you had to tear through that face with lead; the blood splattered was both Federal and Confederate, in one red pond among the furrows. In four hours on a 200-yard stretch of Bloody Lane, 5,000 men were blown apart.

It is difficult to stand in Bloody Lane and not feel that all the soldiers are still there, perhaps not as ghosts, but as a presence under your boot-sole, there, soaked into the dirt.

It is almost, as some cultures believe, as if everything that happens in a place is always happening in that place. The battle was not something that occurred before my great-grandfather was born, but a palpable electricity in the air. You cannot stand there in Bloody Lane and not be moved by that presence.

A similar wave of dismay overcame me at several Civil War sites: Shiloh; Vicksburg; Fredericksburg; Cold Harbor; Petersburg; Appomattox. Always the images rise in the imagination. Something epochal and terrible happened here. 

Visiting the Little Bighorn Battlefield in Montana, there are gravestones on the slope below the so-called “Last Stand,” but you also look down into the valley where the thousands of Sioux and Cheyenne were camped. 

I’ve visited Sand Creek and Washita. And Wounded Knee. That was the most disturbing. You travel through the Pine Ridge Reservation and the landscape is hauntingly beautiful; then you pull into the massacre site and you see the hill where the four Hotchkiss guns had a clear shot down into the small ravine where the victims huddled. The sense of death and chaos is gripping. The famous image of the frozen, contorted body of Big Foot glowers in the imagination. It feels like it is happening in a past that is still present. 

This sense of horror and disgust wells up because of the human talent for empathy. Yes, I know full well that there are no specters of the victims waiting there for me, but my immediate sense of brotherhood with them resurrects them in my psyche. I am human, so I know that those dead were just like me. I can imagine myself bowel-loosening scared, seeing my comrades on either side blown to pieces, while an enemy whom I have never met, and might have been friends with, races toward me with bayonet stretched in front of him, eyes wide with the same fear. 

History is an act of the imagination. The most recent may be memory, but for me to know what my father went through in France and Czechoslovakia in World War II requires my identification with him, my psyche to recognize the bonds I share with him — and with all of humanity. 

So, when visitors are shaken by visits to Auschwitz or stand on the plains of Kursk, or the shores of Gallipoli, they well may sense that history as more present than past. I have had that experience. The ghosts are in me.

“I’ve been thinking a lot about evil,” said Stuart. Stuart is now 74 and he’s been with Genevieve for a good seven years. “Lucky seven,” he calls it. We met again on a visit to New York, and were walking down Ninth Avenue on our way to Lincoln Center. Genevieve was playing there in a pick-up orchestra in a program of all new music by Juilliard students. 

“Well, not evil so much as how we personify evil.”

I guessed he was talking about images of Satan and devils. 

“Yes, there’s Satan,” he said. “And how we picture him keeps changing. In the Middle Ages, he was a monster with goat horns and a second face where his genitals should be. 

“To Dante, he was a giant with bat wings. 

“To Milton, he was a glorious angel who had lost little of his heroic luster. In popular culture, he was an opera villain dressed in red. He had tiny pointed horns and a pitchfork. 

“To modern movie audiences, he’s now a slick hedge-fund manager. 

“The less visually imaginative have a non-personal sense of evil as a force in the cosmos, something like gravity — pervasive but not individualized. They feel they have escaped the primitive urge to personify nature. 

“But what interests me isn’t just his appearance, but his character. Satan isn’t a single person, but a range of fictional stereotypes — maybe archetypes. There are probably dozens of Satans, hundreds if you want to count the demons and djinn of other cultures. But they all boil down to what I think are five mega-types, five possible motivations for Satan. First, he is a sociopath and has no concern for his effects on the world, no empathy, no compassion — hollow and empty. We’ve seen what happens when a malignant narcissist is given power. His only concern is for himself. 

“Then, he is often seen as a trickster, a Loki, who gets his kicks from knocking the hats off of policemen. His role in the universe is the revivifying power of chaos, without which the world would be a stale and boring place, where nothing interesting ever happens. The side-effect of this is necessarily going to impact some people rather badly. William Blake seems to have seen Satan as this sort of being: a creator through destruction.

“More popular is Satan the con man and seducer, the profferer of the Faustian bargain, the little voice that says, ‘give in to the desire,’ the tempter of Jesus, the snake-oil salesman who knows his potion is either useless or poison. His pleasure is in knowing he is more clever than you, and hence, this Satan is motivated, in part, by vanity. 

“A small portion of theologians envision Satan as the right hand of god, without whom god would not be possible. If there is no evil, there is no good to play against it. God and Satan are coeval, co-existent and co-dependent. This is the Gnostic Satan, as important as Jehovah. 

“Finally, there is evil as ignorance. If we knew better, we’d behave better. From this point of view, Satan does not actually exist; there is only our own failure to understand. We do evil because we are blind, stumbling about in the moral darkness. 

“Of course, I don’t believe any of this,” Stuart says. “It’s all just mythology. But myth is interesting. We always seem to better understand through story than through logical argument.”

I couldn’t help but notice the irony. But Stuart went on.

“I had a dream the other night, which set me off into a different direction,” he said. “In it, evil was a machine, not a person. I figured that in a Cartesian universe, a mechanistic and scientific world, evil might well follow laws of nature very like something Isaac Newton might have formulated. Such a conception would require a mechanistic mythology. And so, I tried to imagine a Satan-machine. 

“Like all mythologies, it would have to be built on the things of daily life, what we come into contact with. These are the things that color our imaginations. And so the evil machine of the 18th century would be all gears and pulleys, spritzing steam and clanking along. Blake’s “dark Satanic mills.” 

“In the 1950s, the machine would be blinking lights and spinning magnetic-tape reels. 

“In 2000, it would be read-out screens and buttons to press.”

“And now?” I asked.

“Now, I think Satan would be a visually inert silicon chip, perhaps the size of George Lucas’ Death Star, working silently and invisibly to our destruction. 

“There is an impersonality to our scientific conception of the cosmos and its creation, and so, my idea of evil should reflect that, and our Satan would be technological. The evil is still there, and it has an origin, but the origin is not shaped in any way like a human being, no arms, no legs, or eyes or tongue stuck out like Gene Simmons’ or the Hindu goddess Kali. No, I am ready for a machine to be the source of all bane and baleful action.”

“OK,” I said. “But machines are manufactured. Who made this Satan-machine? Are we not right back with the proof of god by design? Is there a God in a lab coat who tinkered with silicon until he came up with this machine?”

“Hmm.” Stuart looked thoughtful. “No, it would have to be a writer. I’m imagining Douglas Adams,” he said. 

If I say we have entered a new Romantic era, you may lick your chops and anticipate the arrival of great poetry and music. But hold on. 

Nothing gets quite so romanticized as Romanticism. It all seems so — well — romantic. We get all fuzzy inside and think pretty thoughts. Romanticism means emotional music, beautiful paintings, expansive novels, and poetry of deep feeling.

Or so we think, forgetting that Johann Wolfgang von Goethe called Romanticism a “disease.” 

The surface of Romanticism may be attractive, but its larger implications are more complex. We should look deeper into what we mean by “Romanticism.”

To begin with, it is a movement in art and literature from the end of the 18th century to the middle or latter years of the 19th century. It responded to the rationalism of the Age of Reason with a robust faith in emotion, intuition and all things natural. We now tend to think of Romanticism as a welcome relief from the artificiality of the aristocratic past and a plunge into the freedom of unbuttoned democracy. We read our Shelley and Keats, we listen to our Chopin and Berlioz and revel in the color of Turner and Delacroix. Romanticism was the ease of breathing after unlacing the corset or undoing the necktie.

Yet, there is something adolescent about Romanticism, something not quite grown up. It is too concerned with the self and not enough with the community. There is at heart a great deal of wish fulfillment in it, a soft pulpy core of nostalgia and, worse, an unapologetic grandiosity. One cannot help but think of Wagner and his Ring cycle explaining the world to his acolytes. Music of the Future, indeed.

I’m not writing to compose a philippic against a century of great art, but to consider the wider meanings of what we narrowly define as Romanticism.

Most importantly, one has to understand the pendulum swing from the various historical classicisms to the various historical romanticisms. Romanticism didn’t burst fully grown from the head of Beethoven’s Eroica, but rather recurs through history predictably. One age’s thoughtfulness is the next generation’s tired old pusillanimity. Then, that generation’s expansiveness is followed by the next and its judiciousness.

The classicism of Pericles’ Athens is followed by the energy of Hellenism. The dour stonework of the Romanesque is broken open by the lacy streams of light of the Gothic. The formality of Renaissance painting is blown away by the extravagance of the Baroque. Haydn is thrown overboard for Liszt, and later the tired sentimentality of the Victorians (the last gasping breaths of Romanticism) is replaced by the irony and classicism of Modernism. Back and forth. This is almost the respiration of cultural time; breathe in, breathe out. You could call it “cultural yoga.”

We tend to label the serene and balanced cultures as classical and the expansive and teetering ones as romantic. The labels are not important. Nietzsche called them Apollonian and Dionysian. William Blake personified them in his poems as reason and energy.

We are, however, misled if we simplify the two impulses as merely rationality vs. emotion. The twin poles of culture are much more than that.

Classicism tends to engage with society, the interactions of humans, the ascendancy of laws instituted by men (and it is men who have instituted most of them and continue to do so — just look at Congress). At its heart, it is a recognition of limits. 

Romanticism, in whatever era it reveals itself, engages with the cosmos, with history, with those things larger than mere human institutions, with Nature with a capital “N.” Romanticism distrusts anything invented by humans alone, and surrenders to those forces mortals cannot control. Romanticism has no truck with limits. 

These classical-romantic oppositions concern whether the artist is engaged with man as a social being, an individual set in a welter of humanity — or whether he is concerned with the individual against the background of nature or the cosmos.

Yet there is an egotism in the “me vs. the universe” formulation. It tends to glorify the individual as hero and disparage the community which makes life possible. 

In the 18th Century, for instance, Alexander Pope wrote that “The proper study of mankind is man.” The novel, which investigates human activity in its social setting, came from the same century; Fielding and Defoe are its exemplars.

The succeeding century is concerned more with man in nature, or man in his loneliness, or fighting the gods and elements. One thinks of Shelley’s Prometheus Unbound or Byron’s Manfred.

There are many more polarities to these movements in art and culture. One side privileges clarity, the other complexity. Just compare a Renaissance painting with a Baroque one. The classical Renaissance tends to line its subjects up across the canvas, while the Baroque wants to draw us into the depth of the painting, from near to far. Renaissance paintings like to light things up evenly, so all corners can be seen clearly. The romanticized Baroque loves great patches of light and dark, obscuring outlines and generally muddying up the works.

Look at this Last Supper by Andrea del Castagno. See how clear it all is. 

But the Baroque painter Tintoretto had a different vision of the same biblical event. It is writhing, twisting out into deep space, with deep shadows and obscure happenings. The Renaissance liked stability and clarity; the Baroque, motion and confusion.

One side values unity, the other, diversity. One side values irony, the other sincerity. One side looks at the past with a skeptical eye, the other with nostalgia. One side sees the present as the happy result of progress, the other sees the present as a decline from a more natural and happier past. One side unabashedly embraces internationalism, the other, ethnic identity and nationalism. If this sounds familiar, think red and blue states.

One of the big shifts is between what I call “ethos” and “ego.”

That is, art that is meant to embody the beliefs of an age, thoughts and emotions that everyone is assumed to share — or art that is the personal expression of the individual making it.

We have so long taken it for granted that an artist is supposed to “express himself,” that we forget it has not always been so. Did Homer express his inner feelings in the Iliad? Or are those emotions he (or she) described the emotions he expected everyone would understand and share? He tells of what Achilles is feeling, or Ajax or Hector or Priam — and they are deep and profound emotions — but they give no clue to what Homer was feeling.

In music, Haydn’s symphonies were written about in his day as being powerfully emotional. Nowadays, we think of Haydn as a rather witty and cerebral composer. If we want emotion, we go to Beethoven or Schubert. You cannot listen to Schubert’s string quintet and not believe it expresses the deepest emotions that its composer was suffering at the time. It is his emotion. We may share it, but it is his.

The history of art pulsates with the shift from nationalistic to international styles, from that which is specific to an ethnic or identity group to that which seeks to transcend those limitations.

In music, Bach imitated the national styles in his English and French suites and his Italian Concerto. The styles are distinct and identifiable.

But the Galant and Classical styles that replaced the Baroque vary little from country to country. Perhaps the Italian is a little lighter and the German a little more complex, but you can’t get simpler or more direct than Mozart.

Nationalism reasserted itself in the next century, so that you have whole schools of Czech music, French, Russian. In the early 20th Century, internationalism took charge once more and for a while, everybody was writing like Stravinsky.

The main architectural style of the first half of the 20th century is even called “The International Style.” That style is now so passé as to be the butt of jokes.

The classical eras value rationality and clear thinking, while their mirror images value irrationality and chaos.

You’re ahead of me if you have recognized that much of what I am calling Romanticism is playing out in the world and in current politics as a new Romantic age.

Nationalism is rearing its ugly head again in Brexit, in Marine Le Pen, in Vladimir Putin — and in Donald Trump and his followers.

The mistrust or outright disbelief in science is a recasting of Rousseau. Stephen Colbert invented the term “truthiness,” and nothing could be a better litmus test of Romanticism: The individual should be the arbiter of truth; if it feels true, we line up and salute. In a classical age, the judgments of society are taken as a prime value. Certainly, there are those who resist, but by and large, the consensus view is adopted.

The previous Romantic age had its Castle of Otranto and its Frankenstein. The current one has its Game of Thrones and its hobbits, and wizards and witches. The 19th Century looked to the Middle Ages with nostalgia; the Postmodern 21st Century looks to a pre-civilized barbarian past (equally mythologized) and to a vision of a post-apocalyptic future. 

(Right-wing nostalgia is for a pre-immigrant, pre-feminist, pre-integration utopia that never actually existed. The good old days — before penicillin.) 

This neo-barbarianism also shares with its 19th Century counterpart a glorification of violence, both criminal and battlefield — as in the huge armies that contend in the Lord of the Rings films, to say nothing of the viciousness of Game of Thrones.

As we enter a new Romantic age around the world, one of dissociation, confusion and realignment, we need to recognize the darker side of Romanticism and not merely its decorative accoutrements.

We will have to accept some of those adages propounded in William Blake’s Marriage of Heaven and Hell: “Sooner murder an infant in its cradle than nurse unacted desires.” And, “The tigers of wrath are wiser than the horses of instruction.” Is this not the Taliban? The Brexiteers? The Republican Party? And those elements in academia who want to cover their ears and yell “nyah-nyah-nyah” when faced with anything outside their orthodoxy? 

Because it isn’t only on the right. The Noble Savage has come back to us as a new privileging of indigenous cultures over Western culture. The disparagement of European science, art, culture and philosophy as “hegemonic” and corrupt is just Rousseau coming back to bite us on the butt. (The West has plenty to answer for, but clitoridectomies are not routine in New Jersey. There is shame and blame found everywhere.) 

And the political right has discovered “natural immunity” and fear of pharmaceuticals, while still thinking it OK to run Clorox up the keister. 

The last Age of Romanticism kicked off with the storming of the Bastille — a tactically meaningless act (only seven prisoners remained in prison, four of them forgers and another two mentally ill) which inspired the French Revolution and all the bloodshed of the Terror, but had enough symbolic significance to become the focus of France’s national holiday. We have our January 6, just as meaningless and perhaps just as symbolic. But perhaps that riot has more in common with a certain putsch in Munich. 

The first time America entered a Romantic age, in the 19th century, it elected Andrew Jackson, arguably the most divisive president (outside the Civil War) before Donald Trump, and certainly the most cock-sure of himself and the truthiness he felt in his gut. Facts be damned. For many of us, Trump feels like the reincarnation of Jackson, and this era feels like the reemergence of a Romantic temperament, and we may need to rethink just how warm and cuddly that truly is.

This piece is updated, expanded and rewritten from an April 2017 essay for the Spirit of the Senses.

Is there anything left to say? After 5,000 years of putting it all down on clay, stone, parchment and paper, is there anything that hasn’t been said? It is something every writer faces when putting pen to paper, or fingertip to keyboard. Or even thumbs to smartphone.

And it is something I face, after having written more than four million words in my professional lifetime. Where will the new words come from?

It is also something newlyweds often fear: Will they have anything to say to each other after 20 years of marriage? Forty years? Surely they will have talked each other out.

What we write comes from a deep well, a well of experience and emotion. Sometimes we have drawn so much water so quickly that the well runs dry, but give it time and it will recharge. If no new experience enters our lives, though, our wells remain dry.

One friend has offered this: “That each generation thinks they know more than anybody else who has ever lived.  In a way, that’s a good thing because it allows for new ideas.”

But how new are those ideas? “I guess we have to live with a certain amount of repetition under that system,” she says. “Relying on what previous generations wrote would be so boring. Our ego demands that we pick and choose from past works if we heed them at all.”

I have a different interpretation. We never quite hit the target of what we mean; words are imprecise, concepts are misunderstood. One generation values family, the next understands family in a different way and builds its family from scratch with friends.

As T.S. Eliot says in East Coker: “And so each venture/ Is a new beginning, a raid on the inarticulate/ With shabby equipment always deteriorating.”

Every time I put word to word, I come up short, leave things out, use phrases sure to be misinterpreted, have my motives doubted, and — as I have learned many times — my readers read what they think I wrote and not always what I actually wrote.

And so, there is the possibility of endless clarification, endless rewriting, endless apologizing. And new words to be written.

As someone once said, all philosophy is but a footnote to Plato (who, by the way, is a footnote to the pre-Socratics), and all writing is an attempt to get right what was inartfully expressed in the past. It is a great churn.

All writing is an attempt to express the wordless. The words are never sufficient; we are all wider, broader, deeper, fuzzier, more puzzling and more contradictory than any words, sentences or paragraphs can encompass.

Heck, even the words are fuzzier. Consider “dog.” It seems simple enough, but includes Great Danes and Chihuahuas, Scotties and Dobermans. As a family, it includes wolves and foxes. It also describes our feet when we’ve walked too much; the iron rack that holds up fire logs; the woman that male chauvinist pigs consider unattractive; a worthless and contemptible person. You can “put on the dog,” and show off; you can “dog it,” by being half-assed; you can call a bad movie a “dog;” at the ballpark, you can buy a couple of “dogs” with mustard; if you only partly speak a language, you are said to speak “dog French,” or “dog German;” past failures can “dog” you; if you are suspicious, you can “dog” his every move. “Dog” can be an anagram of “God.”

Imagine, then, how loose are the bounds of “good” or “bad,” or “conservative,” or when someone tries to tar a candidate as a “socialist.” Sometimes, a word loses meaning altogether. What, exactly, do we mean when we talk of morality or memory, or nationality or the cosmos?

And so, every time we pick up pen to write, we are trying our hardest to scrape up a liquid into a bundle.

And so we rework those words, from Gilgamesh through James Joyce and into Toni Morrison. We rework them on the New York Times editorial page and in the high school history textbook. We rehash them even in such mundane things as our shopping lists or our Facebook entries.

We will never run out of things to write or say, because we have never yet gotten it quite right.

An earlier version of this essay originally appeared on the Spirit of the Senses webpage on Aug. 2, 2020. 

What do cows in India, Mexican bugs and Egyptian mummies have in common?

If you said, “Rembrandt,” give yourself a cigar.

Most of us, when we think of color, think in the abstract. Color is the spectrum or the rainbow. Or the deciding factor in which car we buy. We think we know what “blue” means, or “yellow,” but that doesn’t specify which particular blue or which of many possible yellows. Just an abstract approximation. Exact hues require incarnation. 

And so, for an artist, color is pigment, and pigment is ornery, peculiar and sometimes toxic, sometimes distressing, even morally questionable.

Poet William Carlos Williams wrote in his book-length Paterson, “No ideas but in things.” It was the total anti-Platonic declaration of faith in the here-and-now, the lumpy, gritty, quotidian things we can feel with our fingers or stub our toe with. I paraphrase his dictum with “No color but in things.” This is not abstract, but palpable.

A painter cannot simply decide on green or yellow; he must decide which pigment the paint is made from. Each acts in its own way, mixes with others differently, dilutes differently, requires a different thinner, binder or medium, displays varying levels of permanence, transparency and glossiness. The painter cannot think in abstract hues, but in the actuality of the physical world. Hands in the mud, so to speak.

The earliest pigments were dug from the earth or sifted from the cook-fire: Ochers and soot. The caves of France and Spain were painted with these pigments. 

They had to be worked into submission by the artist, grinding, mixing, adding medium and binder. His — or her (we cannot know for sure) — hands got dirty in the process. There was a smell to it, fresh loamy smell or the acrid residue of the hearth. There was a feel, gritty or pulverized, oily, or smudgy like moist clay.

So, until the mid-19th century, all paints were made from the things of this world. Soils and rocks, plants and snails. Each pigment had its idiosyncrasies and those had to be reckoned with when mixing them or placing them side-by-side. None was pure, save, perhaps, the blackness of soot.

Then, in 1856, an 18-year-old chemist named William Henry Perkin, trying to find a cure for malaria, found instead a new, synthetic purple dye — the first aniline dye. He called it “mauve,” or “mauveine.”

A decade later, the German chemists Carl Graebe and Carl Liebermann, working for BASF, synthesized alizarin crimson, making an artificial pigment that matched the natural alizarin dye that had been extracted from the madder plant. It was the first natural dye to be duplicated synthetically, created from a component of coal tar — a byproduct of turning coal into coke.

“Après moi, le déluge” — Since then, there has been a flood of synthetic colors, all devised in the laboratories of giant corporations. There are the aniline dyes, the azo dyes, the phthalocyanine dyes, diazonium dyes, anthraquinone dyes — a whole chemistry lab of new industrial color. Many of these new dyes and pigments were brighter and purer of hue and more permanent.

(Not all: the new chrome yellow that Vincent van Gogh used developed a tendency to turn brown on contact with air. Properly protected, chrome yellow is familiar as the paint job on most school buses.)

Nowadays, even oil and acrylic paints with traditional names, such as burnt umber and ultramarine, are likely to be produced industrially using chemical derivatives. But that shouldn’t blind us to the fact that Rembrandt or Michelangelo had to arrive at their paints through laborious and time-consuming processes.

Most pigments came to the artist’s atelier in the form of a rock or a sediment. It had to be ground down to a powder, a process normally done by an apprentice — basically an intern: “Bring me a latte, a bearclaw and the powdered cinnabar.” Being ground to a grit wasn’t enough; the poor apprentice sometimes had to spend days with the pigment between grinding stone and levigator or muller, working it into pulverized paste that could be mixed with a binder and medium and finally used by the artist on canvas.

It wasn’t until the advent of the industrial revolution and the invention of a pigment-grinding machine in 1718 that the tedious work of pigment making became doable in large quantities. And it wasn’t until the mid-19th century that prepared paints, sold in zinc tubes, made it possible for artists to buy portable paints they could carry out into the countryside to paint in the open.

But we should not forget the sometimes ancient origins of the paints used for the canvases of the Renaissance, the Baroque — the Old Masters. This is where the Indian cows, the Mexican bugs and the Egyptian mummies come in.

First, let’s look at a few of the standard paint-sources from this pre-industrial age. Many of them have wonderful and memorable names, now largely gone out of use.

We’ll take the reds first. None was perfect; several were lethal. 

Carmine — This is the Mexican bug I mentioned above. The cochineal scale insect grows on certain cactuses in Central and South America. It is a bright violet- to deep-red color. The Aztecs called it “nocheztli,” which means “tuna blood” (the tuna being the fruit of the prickly pear), and it dyed the tunics of Aztec and Inca royalty.

Crimson — Before the Conquista, a European scale insect, growing on the kermes oak, provided a red dye. These insects were picked from the twigs with fingernails and processed into a scarlet dye. It was the color used to dye the curtains of the Temple in Jerusalem. Also widely used by ancient Egyptians and Romans. It was less efficiently grown and produced than the cochineal of Mexico, and so was replaced. Michelangelo used it in his paint.

Vermilion — A scarlet red form of mercury sulfide and highly poisonous, it was mined in Europe, Asia and the New World as cinnabar and was used also for cosmetics and medicine — hardly a wise use. In its mineral form, it was used to color Chinese lacquer. A finer, and redder, version was first synthesized in China in the fourth century BCE, and, depending on how finely it has been ground, produces hues from orangey-red to a reddish purple that one writer compared to “fresh duck liver.” It is still also produced by grinding cinnabar. 

The terms “cinnabar,” “vermilion” and “Chinese red” are often loosely interchangeable. The finer the grinding, the brighter the red. The painter Cennino Cennini, in his 15th century Craftsman’s Handbook, wrote: “If you were to grind it every day for 20 years it would simply become better and more perfect.” It was the most common red in painting until it was replaced in the 20th century by cadmium red.

Dragon’s blood — Mentioned in a First-Century Roman travel guide (a periplus), it is a maroon-red pigment made from the sap of various plants, most notably the Dracaena cinnabari. Medieval sources wrote that it was made from the blood of actual dragons. It is also what gives classic violins their reddish varnish. In several folk-religions and in neo-paganism, it is a source of magical power, presumably because of its supposed connection to dragons. 

Minium — Also known as red lead, this orange-red pigment was commonly used in Medieval illuminated manuscripts. It was made by roasting oxidized lead in the air to form lead tetroxide. It is named for the Minius River between Spain and Portugal, and because this red lead was used for the small lettering and illustrations in hand-made books, it is the source of our word, “miniature.” 

The neighboring colors of yellow, orange and purple had their sources, too. 

Gamboge — A yellow pigment formed from the resin of the evergreen Cambodian gamboge tree (genus Garcinia). Fittingly, the name comes from the Latin name for Cambodia. It is the traditional color used to dye Buddhist monks’ robes. The pigment first reached Europe in the early 17th century. When mixed with Prussian blue, it creates Hooker’s green. It is a strong laxative if ingested; in large doses it can cause death. 

Orpiment — A bright yellow pigment gathered from volcanoes and hot springs, it is a highly poisonous compound of arsenic and was once used as an insecticide and to tip poison arrows. It was traded as far back as the Roman empire. Its name is a corruption of the Latin auripigmentum, or “gold pigment.”

Realgar — Realgar was, along with orpiment, a significant item of trade in the ancient Roman Empire and was used as a red paint pigment. It is an arsenic sulfide mineral and sometimes called “ruby of arsenic.” Early occurrences of realgar as a red paint pigment are known for works of art from China, India, Central Asia and Egypt. It was used in European fine-art painting during the Renaissance, a use which died out by the 18th century. It was also once used as medicine and to kill weeds, insects and rodents. Be grateful for modern medicine. 

Madder — Another dye that goes as far back as ancient Egypt, it is a violet to red color extracted from Rubia tinctorum and related species, plants that grow on many continents; in southern France it is called garance — for those of you who love the great French film Les Enfants du Paradis. It is turned from a dye into a pigment by the process known as “laking,” and so is often encountered as madder lake.

Tyrian purple — This is the purple of the Roman emperors, and is extracted from a mucous secretion from the hypobranchial gland of a predatory sea snail found in the eastern Mediterranean. It was worth its weight in silver and it might take 12,000 snails to produce enough dye for a single garment.

Blues and greens were often so close as to be made from variants of the same thing. 

Bice — A dark green-blue or blue-green pigment made from copper carbonates, primarily the mineral azurite, sometimes malachite. Lightened, it was often used for skies.

Smalt — First used in ancient Egypt, it is a cobalt oxide used to color glass a deep blue. The glass is then ground into a powder used as a pigment.

Ultramarine — The ultimate blue, made from the mineral lapis lazuli, found almost exclusively in Afghanistan, which, for Europeans, was “beyond the (Mediterranean) sea” or “ultra-marine.” The process of making the pigment from the mineral was complex and the final color was so highly prized, and so expensive, that its use had to be spelled out in the contract commissioning a painting from a Renaissance artist, lest he use some less costly, and less glorious, blue. 

Prussian blue — The first modern synthetic pigment, Prussian blue is iron hexacyanoferrate and a very dark, intense blue. It is also sometimes called Berlin blue or Paris blue. It is the blue of traditional blueprints and became popular among painters soon after it was formulated in 1708 — by accident when a chemist attempted to make a red dye and got blue instead. It largely replaced the more expensive ultramarine. After it was imported to Japan, it became the standard blue of woodblock prints. 

Egyptian blue — Long before Prussian blue, the ancient Egyptians manufactured a light blue pigment from calcium copper silicate, by mixing silica, lime, copper and an alkali. First synthesized during the Fourth Dynasty (ca. 2500 BCE), its use continued through the Roman period. The Egyptians called it “artificial lapis lazuli,” and used it to decorate beads, pots, scarabs and tomb walls. 

Indigo blue — The familiar color of blue jeans comes from indigo, made from the indigo plant Indigofera tinctoria. At least, it once did. Now the dye is synthetic. It is a deep, dark blue, almost black. Before the Asian indigo plant was imported to Europe, the dye was made from the woad plant Isatis tinctoria. Before the American Revolution, indigo grown in South Carolina was the colony’s second-most important cash crop (after rice), accounting for a third of the value of exports from the American colonies. Initially, European woad processors fought against the importation of Asian indigo dyes, as later, after adopting the Asian product, they fought tooth and nail against the synthetic. Progress. 

Verdigris — A green pigment formed by copper carbonate, chloride or acetate. It is the patina on the Statue of Liberty, but in oil paint, it has the odd property of being initially a light blue-green and turning, after about a month, into a bright grass green.

Viridian — A darkish blue-green pigment, a hydrated chromium oxide, popularized by Venetian painter Paolo Veronese.

Sepia — A dark brown to black dye and pigment extracted from the ink sacs of various species of cuttlefish. Most popular as an ink, it has also been used for oil paint.

You will have undoubtedly noticed how many of these pigments were poisonous. It has certainly been suggested that Van Gogh’s madness may have been caused by his habit of tipping his brushes on his tongue.

So many of these pigments relied on the unholy trinity of toxins: mercury, arsenic, and lead. Their toxicity was understood from ancient times. The cinnabar used for vermilion was mined in China by convicts, whose life expectancy was — well, who cared? They were convicts. 

The most common toxic color through history was white, which was most often lead carbonate, or flake white, aka white lead. It was easy to manufacture by soaking sheets of lead in vinegar for weeks at a time and scraping the resulting white powder off the surface of the metal. Flake white was a wonderful, opaque and brilliant white pigment. Unfortunately, it could kill, blind or madden those who used it. Even today, older houses sometimes have to be de-leaded of their original paint in order to be sold legally. Children are especially vulnerable.

A substitute for white lead was long sought. Zinc white — an oxide of zinc — was tried, but was not as opaque or as white. Nowadays, titanium white is used, which is safer and nearly as good a pigment.

But, as I said at the top of this article, some of the old pigments were not only dangerous, but morally questionable.

Ivory black — Made from elephant ivory, and essentially ivory charcoal, it is (or was) an intense black pigment. Nowadays, it is most often made from charred bones, as bone black.

Indian yellow — A pigment brought to Europe from the east, it was described as being made by feeding cows solely on mango leaves, which made their urine an intense yellow; the urine was then evaporated into a sludge, dried and sold. The cattle were severely malnourished by this diet, and the practice was outlawed. There are those who doubt this explanation of the pigment, but no one doubts the strong stench of the bolus. It is no longer made.

Mummy brown — A bituminous brown, made from ground-up Egyptian mummies, both human and feline. Popular from the 16th century, it was good for “glazes, shadows, flesh tones and shading.” In the 19th century, the supply of Egyptian mummies was so great that in England, they were used as fuel for steam locomotives. But when the actual origin of the pigment became widely known, a moral repugnance swept England, and the Pre-Raphaelite painter Edward Burne-Jones was horrified to find out what he was using: “and when he heard what his brown was made of, he gave all his tubes of this color a decent burial” in his garden.

Makes you look at all those rich, warm browns in Rembrandt with a slightly different eye.

——————————————

This blog entry is significantly rewritten and expanded from an earlier essay published on the Spirit of the Senses website in March, 2018.


In addition to this blog, which I have been writing since 2012, I have written a monthly essay for the Spirit of the Senses salon group in Phoenix, Ariz., since 2015. I was, at various times, a presenter for the salon, which arranges six to 10 or so lectures or performances each month for its subscribers. Among the other presenters are authors, Nobel Prize-winning scientists, musicians, lawyers and businessmen, each with a topic of interest to those with curious minds. I recently felt that perhaps some of those essays might find a wider audience if I republished them on my own blog. This one, from May 1, 2020, is now updated and slightly rewritten.

Imagine Persia — then think of Iran. 

Very different places occupying the same geographic location. The names of places carry a kind of emotional scent that surrounds them. Persia has an exotic perfume; Iran, to American minds, rather stinks, like moldy bread.

Persia is a land of legend, of djinn, of harems and magic carpets; Iran rather has its mullahs, its chador, and its Revolutionary Guard. Persia had its Omar Khayyam and his “The Bird of Time has but a little way to flutter — and the Bird is on the Wing.” Iran has religious fundamentalism and “Death to America.”

Certainly the political situation has changed radically over time and that contributes to our different perceptions of the same country, but the names we use conjure up very different associations, too, and not just for Iran, but for names around the world and especially, over time. Most locations on the globe have borne a variety of toponyms over the ages. Some of these names are better for journalism, some for poetry.

The same land that we now know as Iran was once called Parthia. Once called Media — land of the Medes — once called Ariana, at another time, the Achaemenid Empire. In the Bible, it is Elam. (The borders are never quite the same; borders are notoriously fugitive.) There are other names, too, all accounting for parts of what are now The Islamic Republic of Iran: Hyrcania; Bactria; Jibal; Fars; Khuzestan; Hujiya; Baluchistan.

Some of these names, such as Baluchistan and Bactria, have a kind of exotic emotional perfume and remind us of the Transoxiana of folklore and half-remembered, half-conjured history. Samarkand and Tashkent; tales of Scheherazade or Tamerlane, stories recounted by Richard Halliburton or Lowell Thomas. One thinks of old black and white National Geographic magazines.

Countless Victorian paintings depicted a romantic Orientalized version of seraglios, viziers, genies, pashas, often with women in various states of undress.

I have long been interested in this nomenclatural perfume, and how the names of places conjure up emotional states. The Sahel, Timbuktu, Cappadocia, Machu Picchu, Angkor Wat, Bali, Madagascar, the Caspian Sea, Tristan da Cunha, Isfahan. You listen to Borodin’s In the Steppes of Central Asia or his Polovtsian Dances, or Ippolitov-Ivanov’s Caucasian Sketches, or Rimsky-Korsakov’s Scheherazade. Watch the Cooper-Schoedsack 1925 silent film documentary of the annual Bakhtiari migrations in western Iran, Grass.

There are Paul Gauguin’s brown-fleshed vahines from Tahiti, or the Red Fort of Delhi, or the Taj Mahal. 

All have taken up residence in our subconscious imaginations. Places we likely will never visit except in art or literature. We watch Michael Palin and vicariously sail across the Arabian Sea on a dhow, or look south from Tierra del Fuego toward the icy basement of the planet. We read Herodotus, Marco Polo or Ibn Battuta. The best writing of Charles Darwin can be found in his Voyage of the “Beagle”. Or in Melville’s Encantadas.

And how often those aromas and scents are so ambiguous as to be unplaceable. Where, for instance, is Bessarabia? What about Saxony? I have written before about how borders change over time, and the names of places change along with the borders, but here I am writing about the emotional resonances of those place names.

Saxony, Westphalia, Silesia, Franconia, Pomerania, Swabia, Thuringia: These are names from history books, but we are quite unlikely to know where to spot them on a map. They are all sections of Germany and Eastern Europe that have been subsumed by more modern nations, but a few centuries ago were their own kingdoms, principalities and dukedoms. Some reappear as regions or counties in larger nations, but some are pretty well evaporated. Saxony, for instance, as it exists now as a part of Germany, was originally a separate nation, and not even in the same place where the current Saxony lies.

The older names often have a more exotic connotation than the current names. Siam brings to mind Anna and Yul Brynner; Thailand may elicit thoughts of sex tourism. Abyssinia is a place of Solomonic apes and peacocks; Ethiopia is a nation that went through the Red Terror and famine of the Derg. Burma had its Road to Mandalay, its Kayan women with their elongated brass-coiled necks or even George Orwell’s “Shooting an Elephant,” but Myanmar brings to mind military rule, extreme xenophobia and Rohingya genocide.

Sri Lanka used to be Ceylon, but it was also known as Serendip, from which we get the word “serendipity.” Both “Ceylon” and “Serendip” derive from the ancient Greek word for the island, Sielen Diva. And according to legend and literature, it was originally named Tamraparni, or “copper colored leaves,” by its first Sinhalese king, Vijaya. That name became the more common Taprobana.

The older names are almost always more resonant, more perfumed, which is why they show up so often in poetry and literature. Where have you heard of Albion, Cambria, Caledonia, Hibernia or Cornubia, but in verse? England, Wales, Scotland, Ireland and Cornwall just don’t have that literary heft. It’s hard enough for non-Brits to keep straight the difference between England, Britain, Great Britain, and the United Kingdom or UK.

If you’ve ever wondered what the ship Lusitania was named for, that was the former name for what is now Portugal. When James Joyce talks about Armorica in Finnegans Wake, he is using the old name for Brittany. Firehouse Dalmatians are named for the former Roman province located across the Adriatic Sea from Italy and now part of Croatia.

Eastern Europe is a coal bucket of forgotten or half-remembered toponyms. These places don’t translate one-for-one with modern nation-states, but across the map from Poland through Ukraine and down to Romania you find such redolent names as Pannonia, Sarmatia, Podolia, Wallachia, Pridnestrovia, Bohemia, Moravia. All of which makes the region a fertile spot to locate a fictional country when you want to write a spy novel or film comedy. Just make up a name that sounds vaguely plausible.

Of the following, only one has ever been real; the rest are made up: Ruritania, Ruthenia, Brungaria, Estrovia, Lichtenburg, Pontevedro, Grand Fenwick. Can you pick the genuine from the bogus?

If you picked Ruritania, a slap on the wrist for you. You have probably heard of it, but it is the fictional country in which Anthony Hope set his 1894 novel The Prisoner of Zenda. It has since been used myriad times as a stand-in for any small nation in a movie or book.

(Other fictional countries that show up on celluloid: Freedonia and Sylvania from the Marx Brothers’ Duck Soup; Tomainia, Bacteria and Osterlich from Charlie Chaplin’s The Great Dictator; Moronica in the Three Stooges’ You Nazty Spy. There are many more.)

The ringer in the question is Ruthenia, which was a real name for a real place in Eastern Europe, now parts of Hungary and Ukraine. As for the others: Brungaria is from the Tom Swift Jr. series of boys’ books; Estrovia is from Charlie Chaplin’s film A King in New York; Lichtenburg is from the 1940 film The Son of Monte Cristo; Pontevedro is from the operetta and film The Merry Widow; and Grand Fenwick is from the Peter Sellers film The Mouse That Roared.

There are names for mythical places, too, and they really carry their exoticism well: Atlantis; El Dorado; Shangri-La. Less well known, but once more current are the lost continents of Mu and Lemuria, both popular with cultists, and the sunken Arthurian country of Lyonesse and the drowned city of Ys.

But even real places have their exotic past. What we now call Mexico was once Aztlán. Iceland was once the almost legendary land of Thule. What we know as Xi Jinping’s China was, to Marco Polo, Cathay. There is more incense to that than to the more modern smog-choked superpower. Properly, Cathay was the northern part of modern China during the Yuan dynasty; the south was called Mangi. Shangdu is the modern name once transliterated as Xanadu. It has gone the way of Ozymandias.

Ruins of Xanadu

Turkey wants to be part of the European Union and is a NATO member, but in the far past, we knew the part of it east of the Dardanelles as Asia Minor. But even that part was originally known by its regions: Anatolia in the east; Bithynia in the northwest; Cilicia in the southwest; Pontus in the northeast; and Galatia in the center (that’s who the New Testament letter to the Galatians was addressed to). The nation’s current capital is Ankara, but how much more soft and silky is its earlier incarnation as Angora?

The Middle East is now divided up in a jigsaw created after the world wars. What was The Holy Land is now Israel and its surrounding lands, which used to be aggregated as Palestine. But that whole end of the Mediterranean used more commonly to be called the Levant. I love those old terms: The Levant east of the sea and the Maghreb along the sea’s southern coast west of Egypt.

Hawaii used to be the Sandwich Islands, and Kiritimati was Christmas Island, counterweight to Easter Island. But speaking of counterweights: Tonga used to be the Friendly Islands, and to their east, Niue was once Savage Island. (“Niue” translates as “Behold the Coconut.”) Back in the Atlantic, the Canary Islands were formerly the Fortunate Islands.

Nations like to manufacture their own emotional perfume, with more or less success. Some nicknames are quite familiar: Japan is “The Land of the Rising Sun;” England is “The Land of Hope and Glory;” Ireland is “The Emerald Isle.” Norway is “The Land of the Midnight Sun.” Some nicknames aren’t particularly glorious. Italy is “The Boot;” France is “The Hexagon.” Some are just descriptive: Australia is “The Land Down Under;” Canada is “The Great White North;” Afghanistan is “The Graveyard of Empires.”

States have nicknames, too. Alaska has a bunch of them: “The Last Frontier” is printed on license plates. But others are less chamber-of-commerce-ish: Seward’s Ice Box; Icebergia; Polaria; Walrussia; the Polar Bear Garden.

Among the odder state nicknames: Arkansas is the Toothpick State; Colorado is The Highest State (which now has added meaning with the legalization of marijuana); Connecticut is both The Blue Law State and “The Land of Steady Habits;” Delaware is The Chemical Capital of the World; Georgia is The Goober State (for the peanut, please); Massachusetts is The Baked Bean State; Minnesota is “Minne(snow)ta;” Nebraska is The Bugeating State; New Jersey is officially The Garden State, but many call it “the Garbage State,” none too kindly; North Carolina used to be The Turpentine State; South Carolina used to print on its license plates, “Iodine Products State;” Tennessee is The Hog and Hominy State.

Cities have their nicknames, too. Some are in universal parlance. Paris is The City of Light, Rome is The Eternal City. In the U.S. we can drive from Beantown to the Big Apple to the City of Brotherly Love and through Porkopolis on to the Windy City and head south to the Big Easy and then out west to the Mile High City (again, now a double entendre), and finally to The City of Angels or more northerly to Frisco. (The full name given to Los Angeles is El Pueblo de Nuestra Señora la Reina de los Ángeles or “the town of Our Lady the Queen of the Angels.” Put that on a Dodgers ballcap.)

But there are less common and less polite names for cities, too. And some real oddball ones. Albertville, Ala., is The Fire Hydrant Capital of the World. Berkeley, Calif., is “Berzerkeley.” LA is also “La-La Land.” Indianapolis is “India-no-place.” New Orleans is also the “Big Sleazy.” Las Vegas is “Lost Wages.” Boulder, Colo., is The People’s Republic of Boulder.

You can string together toponyms and almost make poetry, or at least a song: “Oklahoma City looks oh so pretty/ You’ll see Amarillo/ Gallup, New Mexico/ Flagstaff, Arizona/ Don’t forget Winona/ Kingman, Barstow, San Bernardino/ … Get your kicks on Route 66.”

“I’ve been to Reno, Chicago, Fargo, Minnesota/ Buffalo, Toronto, Winslow, Sarasota/ Wichita, Tulsa, Ottawa, Oklahoma/ Tampa, Panama, Mattawa, La Paloma/ Bangor, Baltimore, Salvador, Amarillo/ Tocopilla, Barranquilla, and Padilla, I’m a killer/

“I’ve been everywhere, man/ I’ve been everywhere.” 

But I ain’t been to Timbuktu.