Part 1: The lie at the start

The Mississippi River begins with a lie. This probably shouldn’t be surprising, since it begins in the land of Paul Bunyan and Babe the Blue Ox. One should expect some tall tales.

But the layer upon layer of distortion, misunderstanding and outright fib is quite astonishing.

The Mississippi is widely accepted as beginning at Lake Itasca in northern Minnesota. It is a smallish lake by the state’s standards, but especially attractive, surrounded by a state park that was established in 1891; the land has been protected for a long time.

A Minnesota tourism brochure gives one origin of the lake’s name: Itasca was the daughter of the mythical Ojibwe Chief Hiawatha. She was stolen by the ruler of the spirits of the dead to be his bride and live in the dark underworld. Her tears, shed in sadness for the world she was forced to leave behind, flowed together, formed Lake Itasca and poured out into the mighty Mississippi.

A very pretty tale, but a complete fabrication.

The Paul Bunyan version says that a gigantic water wagon pulled by Babe sprung a leak and created Lake Itasca.

In reality, the lake was formed by the retreat of glaciers and was named by its supposed discoverer, Henry Rowe Schoolcraft, who, in the early 1800s, was one of an army of “discoverers” out to find the headwaters of the great river.

Schoolcraft had once accompanied an expedition led by Lewis Cass as Cass searched for the “true source” of the river. Cass discovered one of the thousand lakes in the area that feed into the river, named it Lake Cass and pronounced it the source of the Father of Waters.

Twelve years later, in 1832, Schoolcraft, in the Indian Service, came back to Minnesota and took it on himself to prove Cass wrong and take the honor for himself.

He was only one of dozens searching for the mythical “source”: Father Louis Hennepin, Antoine Auguelle, La Salle, British surveyor David Thompson, Zebulon Pike, Giacomo Beltrami, among others, each claimed at one time or another to have found the “true source.”

At least one explorer published findings that later turned out to be pure fraud: He had invented a lake and claimed supremacy for it.

There was a veritable frenzy of adventurers looking for true sources at the time: Explorers looked for the source of the Nile, of the Amazon, of the Congo. Just a few years before, Lewis and Clark had traveled to the source of the Missouri. It was in the air.

Well, Schoolcraft knew from his earlier expedition that there were feeder streams to Lake Cass. He figured he could ride up one to find the ultimate source. He persuaded an Ojibwe Indian named Ozaawindib to take him to a lake he had heard of, one Ozaawindib assured him was the real source of the river. Together they went and found it, and Schoolcraft named it Itasca, compounding the name from the middle syllables of the Latin phrase “veritas caput,” or “true head.”

So much for the Indian princess.

Ah, but the shenanigans have only begun. Schoolcraft slyly ignored the fact that there are feeder streams entering his own choice for the ultimate lake.

“They are too small to count,” Ozaawindib told him. Schoolcraft said he took the Indian’s word for it and never bothered to check out the feeder streams.

This was the same mistake the dozens of other adventurers had made, and their claims had all been superseded by Schoolcraft’s. Schoolcraft turned out to be lucky, and although his claim was no better researched than anyone else’s, he had the good fortune ultimately to be accepted.

Schoolcraft wrote a book about his adventure and claimed the credit for the discovery, ignoring the fact that it was Ozaawindib who knew about the lake and guided him there. Perhaps Ozaawindib should be credited with discovering the source of the Mississippi.

But the real lie is that there is a source to any river, let alone one as huge as the Mississippi. The river is more in the nature of a giant old oak tree, and who is to point to one twig end of one branch on one bough and say, “This is the beginning of this oak tree”?

The fact is, the river is not a single course, but the confluence of thousands of branches that eventually coalesce into the great muddy trunk that dumps into the Gulf of Mexico.

It is only an accident of history that the river that ends at the Gulf isn’t called the Ohio, which contributes more water to the river than the Mississippi and Missouri combined; or called the Missouri, which is by far the longest tributary and, if the river system had not been discovered piecemeal, would have been the Mississippi from Montana to Louisiana. By rights, the headwaters of the Mississippi could as well be declared in Yellowstone National Park.

The weak sister of the three largest branches, flowing from Minnesota in the north, is the Mississippi only by default.

But there is still one more delicious lie in this tale.

For it turns out that the beautiful cascading waters that spill over the rocks at the edge of the lake are also a fabrication.

A hundred years after Schoolcraft claimed its discovery, a park superintendent took it on himself to improve reality. It turns out that the lake outlet was a swampy, muddy morass, only gradually sorting out into a stream.

“Since the water is sluggish at this point,” he wrote in a report, “all the debris and wild grasses form there. This is, indeed, a sight that is not becoming to such a great river.”

His idea was to make a hidden concrete dam that would direct the flow of water through a channel he wanted to dig.

“It will take 2,000 loads of sand and gravel. I can say our river can be built up to a point of beauty and also have the running effect of water that will really make it the Source of the Father of Waters.”

So, with the help of the National Park Service and the Civilian Conservation Corps: A new route and channel for the first 2,000 feet of river were created; some 40,000 cubic yards of fill coaxed the water to flow in the channel; 16 acres of trees were planted “so grouped that their ultimate growth will produce a naturalistic effect”; and a concrete dam was built to stabilize the flow, with rocks placed on top to make it appear natural. These are the present headwater rocks that so many visitors tiptoe across each year.

To be continued

When you look at the pictures in those glossy travel magazines, it is always sunny in Aruba.

I don’t understand this obsession Americans have for sunshine and blue skies. Sunlight blands out the world and obliterates every mystery.

But weather is what gives a place emotional resonance. If you travel through a location for the first time and there is a thunderstorm, then, in your mind, it is always raining there. I had that experience in the flats of southern Minnesota, where it must have rained 10 inches in a half hour. I have not yet been back, so, I naturally assume it’s still raining in Pipestone.

And even when you have visited a place as often as I have visited the Great Smoky Mountains National Park, it is the constantly changing weather that most clearly defines the place.

I’ve seen the Smokies in sun and rain, in haze, drizzle, and shimmering in summer heat. I’ve seen them crystallized in ice, with each tree jacketed in a solid sheath of glass. I’ve seen the foggy snow clouds rise up from the coves to sugar the ridge crests, and I’ve seen the dark diagonals of rain drop from the sun-brightened clouds to the north, while I sit dry and warm at Newfound Gap.

In short, the Smokies are weather.

And this most recent visit gave me something new once more.

I entered the park from the Tennessee side, driving up Little River Road from Townsend. The road follows an old logging railroad right-of-way along the Little River, which is perhaps even too little to be called a river.

It is only a creek, but it has gouged out an impressive gorge through the hills on the northwest side of the park. The road winds erratically along the stream path, around knolls and into coves, always with a slope of greenery above you.

At 6 in the morning, there were no other cars on the road, although I would have had a hard time seeing them if there had been. The route was whited out in fog, but a peculiar kind of fog, of such uniform density that no matter in what direction I looked, I could see about 60 feet.

So I drove upstream in what I called a “sphere of clarity” with a diameter of 120 feet. It was a ball of visibility and my eye was at its center. It moved with the car, always unfolding new scenes before me and closing up those behind.

And what is more, it was a green fog: Everything was colored by the light reflected from the leafy forest above and around, making the scene soft-edged, mossy and wet with dripping dew.

Beside the road, the creek cascaded downhill, with boulders breaking up the water into white rapids overarched with willows and witch hazel. When I pulled into a turnout and got out of the car, I could hear the squawk of crows and blackbirds over the roar of the stream.

Little River Road eventually turns away from the stream and up and over Sugarland Mountain and into the main part of the park, where it joins with U.S. 441, or Newfound Gap Road, which crosses the crest of the Smokies into North Carolina.

The road climbs steeply up the north side of the mountains. At one point, it even circles above itself in a great corkscrew called The Loop. The highest point on the road is Newfound Gap, at 5,048 feet.

Here, there is a view both north into Tennessee and south into North Carolina. Or, there is sometimes a view.

On this trip, the sphere of clarity let me see only a portion of the parking lot and a red spruce tree growing just on the other side of the stone wall that bounds the road.

That and my own frosty breath.

In such weather, it is easy to notice how limited your vision is. Sometimes the air is so clear, your sphere expands to the horizon and you don’t notice it.

 

We hear the phrase “dead white guys” a lot these days. It disparages a good deal of what has been taken for art and culture for the past 2,000 years or so.

Now, instead, we hear of women’s art, Hispanic art, Native American art. Identity, whether by gender, race, religion, nationality or even political outlook, has splintered the culture, and what was once “universal” is now merely men’s art.

So, if we take there to be such a thing as men’s art, I wondered what that might be.

Surely that is what the rising tide of feminist art criticism tells us: That the so-called “canonic” art of the art history books and the institutional museums is biased in favor of colonialist, patriarchal and, well, male art.

It is seen as aggressive, competitive and with an undue emphasis on what has been labeled “quality.”

In this instance, “quality” is used as a shibboleth to exclude artists on the basis of color and gender.

And I would have a hard time arguing the reverse.

Through history, most artists recognized in the West — and by that I mean European artists — have been men.

And the art history texts have conspired to exclude the Sofonisbas and Artemisias, to say nothing of the Angelica Kauffmans and Judith Leysters.

So, if the art world has been an “old boys’ club,” it stands to reason that its art must speak for old boys.

Well, let’s look first at what identity art — and identity politics — is all about.

It is assumed, first off, that each person is somehow defined by his race, culture and gender, and that an art, to speak for him, must share that race, culture and gender.

I don’t want to argue that at the moment. Let’s assume it to be true.

So, let’s see how that plays out in the art of an artist generally acknowledged to be a woman: Judy Chicago.

Her first notable work was an installation called “The Dinner Party,” and it consisted of dozens of table settings, each built around a large dinner plate of her own creation. And each dinner plate was decorated with an ornamental vagina. Some looked more like flowers, some like, well, snails, I guess. But each was the female organ.

When they first were shown, at least, the art critics were largely hostile. “This isn’t art,” they all said. “This is a joke. Where is the beauty? Where is the craftsmanship?”

Well, they are fairly widely accepted as art now. I don’t know what else they would be. They aren’t Haviland china, that’s for sure.

Chicago’s feminist supporters declared that since the critics were, for the most part, men, they were prevented from understanding what made “Dinner Party” art.

And they also explained that what made them “women’s art” was the joining of the generative organ with the symbol of nurturance, the food that the woman prepared for her brood.

I’m making the argument particularly blunt, because, after all, I’m a man and wouldn’t understand otherwise.

On something of the same level, there must be something that counts as men’s art, that features symbols of male virtues. And here, I’m speaking of white male virtues, because black male art has its own niche, filled not only by such historical luminaries as Romare Bearden and Jacob Lawrence, but also by the more modern Jean-Michel Basquiat.

Or, in literature, think LeRoi Jones.

This whole problem is easier to understand, I think, if we use music instead of painting.

If you are Jewish, you have klezmer music;

If you are black, you have the blues;

If you are white, you have the famous “dead white guys” — people like Brahms, Beethoven, Mozart.

What, after all, does Mozart have to say to a home-boy in the streets of South Central L.A.? What does he care about proportion, harmonic rhythm and sonata form?

He has his rap music that speaks to his condition much more directly.

To his life, Mozart is irrelevant.

Or, let’s think of Beethoven and his Fifth Symphony. It always used to be said that Great Art, with a capital “A,” was universal, that it spoke for all mankind.

But what does a symphony orchestra mean to that same homeboy? Tuxedos? Money? Privilege?

And to a woman? Beethoven is badgering, hectoring, aggressive. It’s a constant wham-wham-wham of tonic-dominant forte chords.

Beethoven is men’s art, if ever there were such.

It is an art about hammering out a place in the universe, hammering out meaning.

Where is the grace? Where the gentleness? Can Beethoven be said to be, in any sense, “nurturing”?

The recent spate of pop-psychology books telling us that men are from one planet and women from another would have us believe — and I’m not so sure it isn’t true — that women have a whole different way of processing information.

Women cooperate, support each other and generally attempt to create community and consensus.

Men play a life-and-death version of “king of the mountain.” They are inordinately concerned with who’s on top and where they might rank in the society. They are concerned with competition, with besting the other guy and proving they are the biggest damn gorilla on the block.

Sounds like Beethoven to me.

Who else might count as a male artist?

Michelangelo, with his “divine spark” and ranks of angels.

David, with his morality tableaux and ranks of nobles.

Picasso, surely, for his biography tells us he hated women.

And actually, almost everything in your standard Janson art history text.

The men are interested in manipulating things, altering the environment, creating a rigid social order and making “quality distinctions” that rank the artists’ products.

It is men who tell us Van Gogh is great and that Fantin-Latour is less so.

It is men who tell us Judy Chicago shouldn’t be taken seriously.

It is men who tell us that Beethoven is universal.

After all these years of so believing, I have come to question these assumptions. Perhaps Beethoven is really very provincial and speaks to white German males. Maybe that is why Debussy hated him. Maybe Wagner really does speak for the anti-Semitic; maybe Sibelius speaks only to the Finns.

As evidence for this I search myself.

I come from a Norwegian background. And sure enough, I feel something special, something very personal, when I watch a Bergman film. I love a lot of cinema. Fellini is a dear; Kurosawa moves me to tears. But Scandinavian Bergman speaks to me without a middleman; his images and words pierce directly to my heart and make an effect even before they reach my brain.

There is something perhaps even genetic to this. I recognize those iron-grey skies in his films, those tight-lipped volcanoes raging inside with never a ripple on their surface. That icy intensity is what rages through my own veins.

What doesn’t rage through my veins is the extroverted menagerie of Fellini. His Adriatic sun is not the midnight sun. Even his most sarcastic satire is optimistic. Bergman is just the opposite; he sings with Brahms, “The grave is my joy.” There is something quintessentially gloomy about Scandinavians.

Lest you forget, the old Norse mythology is the only one, at least as far as I’ve ever found, wherein good battles evil and evil is predestined to win. The good gods will die.

I feel something of the same bleakness in Sibelius — less so in the Norwegian, Grieg, but that has more to do with his cosmopolitan leanings.

Think of the painter Edvard Munch — now there’s a cheerful guy.

The deal is that if your name is Abromowitz or Kelly or Riportella, you may not have that same blood-bond with Bergman. You may feel it for Fellini, or Rossini; or for Chaim Potok or klezmer music.

If you are black, you may feel that blood-tie to B.B. King or even L.L. Cool J.

And I have no doubt the women who tell us they feel a tie with Georgia O’Keeffe or Frida Kahlo and don’t feel the same with Cezanne or Braque are not merely making political points, but are telling us how they honestly feel.

Which brings us to the black briny problem of isolation. Does this mean that we have to throw out our Rembrandt and Renoir? Do we have to give up on 600 years or a thousand years of art history?

Does identity art render all other art null and void?

Well, the truth is that although I feel that kinship with Bergman, I truly do love Fellini as well. He speaks to me perhaps on a different level, and perhaps even a better level.

Fellini speaks to me not on the blood level, but on the level of esthetics. Even if my relatives are those in “Wild Strawberries” and not those in “Amarcord,” that doesn’t mean I don’t recognize the central core of humanity of the people in it. Fellini makes me believe in his people by force of will and imagination. He makes me believe in them as blood and flesh. He forces me to transcend the tribal.

And whether Georgia O’Keeffe speaks directly to women in a way she doesn’t speak to me I can’t know, but I can know that she speaks to me as well.

There is something in the best art that transcends race, culture and gender.

Sure, there is something on the surface that may speak more directly to others, but there is something in the core that speaks to us all.

It is, of course, that universality that I maligned earlier.

All art, of course, comes out of a culture, all art is made by a man or a woman and those roots must be the starting point. It would be as silly for a Chicano artist to mimic quattrocento Florentine art as it would be for me to write a rap song, but the soil, as they say, is where it starts.

And that soil gives birth to a great amount of art that never transcends its origin. That art can still speak to its nation or gender or color.

Most rap songs will never mean anything to me, although I can recognize in the best of it something genuine and true: Public Enemy is making real art even if others are only making headlines.

And Ditters von Dittersdorf writes concertos that really do appeal only to dead white Germans.

But the best of any art, from anywhere in the world and from any gender can communicate something genuine.

It is that nugget that Beethoven attempted to reach in his Ninth Symphony, for instance. “Alle Menschen werden Brüder” — “All men become brothers” or, in more gender-neutral terms, as spoken by Willie Stargell: “We are family.”

So, what does all this mean?

First, that we must keep our ears and eyes open to that universal meaning in all art, whether it is Judy Chicago or Romare Bearden.

Second, we should always recognize that all art comes from a tribe, that white males are only one more tribe — certainly a historically privileged tribe — but that the tribe is the starting line, the checkered flag lies elsewhere.

We should never deny our origin, or attempt to suppress our origin as a seed for the art — the terroir we grow in — but we should always attempt to transcend our tribe and recognize not the differences between the tribes of humankind, but their similarities.

Sometimes, when you’re stuck on the “A” Train between 168th Street and 175th Street — that curve in the subway track that always makes the train squeal like a banshee from hell and you wish to god only dogs could hear it — and there are 43 high school kids riding home from class and making a party on the train at the top of their 86 lungs, and two or three winos are sleeping on the seats, so you have to stand, and you can’t really tell if that twitchy man who got on your car at 145th Street is carrying a knife or a letter opener — sometimes you wonder whether the 20th Century has really been worth it.

If only there were a way to go back in time to the 17th Century, or the 15th, or what the heck, the 12th Century. Ah, but there is. Just tough out the train ride to 190th Street and take the elevator up from the bowels of the station, the elevator pasted with 50 or 60 cute photographs of kitty cats and puppy dogs — and the tag-team New York Transit Authority elevator operators who taped them there and operate the elevator from behind the walls of a corrugated cardboard box “office” they imported into the elevator — and step out into the fresh air of Fort Tryon Park at the northern tip of Manhattan.

It is a short walk through the greenery and over the black basalt outcroppings to the Cloisters.

The Cloisters is a place of cold stone, vaulted ceilings and stained glass, only not stained the way the windows are stained on the “A” train.

Opened in 1938, the Cloisters is a branch of the Metropolitan Museum of Art and contains part of its Medieval collection. It is a castle on the Hudson River on a stony prominence overlooking Dyckman Street. It is a rent in time.

It is a sort of monastery built from sections of 12th and 13th Century French and Spanish cloisters, reassembled in New York City. Inside, you can sit in stony silence in a Gothic chapel and watch the play of light on the stained glass windows and contemplate the tomb effigies of noblemen and their wives.

The best part of the Cloisters is that it is so far from the city’s “museum row” along Fifth Avenue that few tourists trouble to make the trip and it is thus one of the least crowded major museums in America.

But the trip will be worth your while. Inside the stone edifice are the famous Unicorn Tapestries, the gargoyled column capitals of the Cuxa Cloister and a boxwood rosary bead no larger than an inch and a half in circumference, but which splits in half and opens into a triptych of carved biblical scenes with more than 40 tiny figures sculpted into it.

I try to visit the Cloisters each time I find myself in New York, usually on the final day of my trip, when I am exhausted by the stress and energy of the great crowded noisy squealing smelly city. The Cloisters is a refuge, a place to regain the center of your being, the unmoving axis of the earth.

You first see the place — or see its tower — as you wind the paths of Fort Tryon Park past the beech trees and the retirees feeding squirrels. They are very fat squirrels.

But there, over the treetops, it seems farther away than it is. As you get closer the path takes you up to a tiny door in the bottom of the castle and you go in, up some stone steps and up to the admissions booth. Pay the fee, get your museum map and step into the 12th Century.

The main hall at the entrance is a modern piece of architecture, but it evokes the style of the Middle Ages. It is a tall hexagonal vaulted dome. Off to the sides are the bookshop in one direction and the Romanesque Hall in the other. From there, you can take a side trip to the Fuentiduena Chapel, the St. Guilhem Cloister and the Pontaut Chapter House.

Each of them was collected in Europe, taken down stone by stone, with each piece carefully numbered, shipped to New York and rebuilt as part of the sprawling museum. In some cases, the originals were in ruins or only partially surviving and the museum has fleshed out the missing parts in the proper style.

On a cold October day, the sandstone is icy to the touch and the low-hanging sun outside throws the shadows of the surrounding trees up against the stained glass, making a second web of leading swaying against the motionless first.

There are four things I never miss on a visit.

The first is the Gothic Chapel, which is a modern recreation of an 11th Century chapel, filled out with statues and stained glass. It is quiet as a tomb, and I always find a seat on the stone and sit quietly for 20 minutes or a half hour, waiting for the occasional visitor to pass through and bring me silence once again.

It is hard to believe a place this still can exist in a city this impatient.

In the center of the chapel is the tomb of the chevalier Jean d’Alluye, who died in 1248. On top of the sarcophagus lies the effigy of the knight, with his palms pressed together in prayer and his chain-mailed feet resting on a small stone lion. Jean had been to the Holy Land during a crusade in 1240 and had brought back what he believed was a piece of the true cross. He was originally entombed at the abbey of La Clarte-Dieu, near Le Mans, which he had built in 1239.

The second station of my ritual is the room containing the Unicorn Tapestries. These seven giant weavings depict the hunt and capture of a unicorn and are also allegorical of the suffering and crucifixion of Christ.

The last of these tapestries, the Unicorn in Captivity, is the most popular. A poster of it is sold at the gift shop. The unicorn rests in a circular corral set on a field of hundreds of flowers, a particular style called “millefleur,” or “thousand flowers,” in French. The millefleur is more stylized than naturalistic, but it does demonstrate a quality that is particularly Medieval, and a quality I especially respond to.

The Medieval mind didn’t care much for artistic unity. Medieval artists never generalized in their artwork. The later Renaissance loved to make a landscape of generalized trees, though you can never quite tell what kind of tree is meant. In a Medieval piece, like this millefleur, you can name every single plant by genus and frequently by species.

They may sit on a flat black field, but there is the strawberry, the columbine, the daisy, the iris, no two alike.

That same impulse can be found in the next stop, the intimate Trie Cloister, which is open to the weather. A cloister is a garden surrounded by a stone walkway bordered with columns. What marks it as Medieval, specifically Gothic, is that no two column heads are the same. Every one of the 20-plus double columns has a different capital, and they run from tragic to comical.

The ancient Greeks would have been horrified by this lack of unity: They built their temples so that all the columns and capitals matched. The Renaissance that came later was shocked: They, too, liked uniformity of effect. They were so put off by the helter-skelter design of the age that preceded them that they named it Gothic, which is to say, barbarian.

Yet, the profusion of styles all yoked together gives the impression of profound fecundity. The Medievals lived in a world made vivid by its variety: the wealth of animals, of plants, of social classes, of biblical stories. There are kings and saints on these capitals; there are dancing bears and demons; acanthus leaves and oak leaves. There are trade union labels and a man in a funny hat.

It is a sense of rich profusion, and one I find myself deeply sympathetic towards, which is why the cloister is a mandatory stop, again for 20 minutes or so, to soak it all in, like a deep breath of air.

The final required stop is the herb garden. I am a sucker for herb gardens, especially the highly regimented kind that the Middle Ages were so fond of. The herb garden at the Cloisters is in the form of a cross, with beds of herbs surrounding the four central quince trees.

In October, the quince are ripening to a mottled yellow, looking like something halfway between apples and pears. Their subtle fruity smell is exquisite.

Whenever I find myself in an herb garden, I always nip off tiny bits of the leaves and crush them between my fingers under my nose. The smell of the lavender, sage, thyme, borage or camphor wakes up that olfactory sense that you do your best to put to sleep in the grimy downtown.

It is for me, as it was for the Medievals, as perfect a model of Paradise as can be found on earth. Paradise is a Persian word for garden, and it is only proper that our culture has taken it over to name the single plot of earth that remains unmolested by the clutter, noise and ambition of the everyday world we inhabit.

There are dozens of other attractions at the Cloisters, every one of them worthy of your whole attention. I mention only the tapestry of the stag hunted by old age, which is the equal of the Unicorn tapestries, or the 15th Century wooden pieta, which makes the dead Christ seem more like a lifeless piece of meat than any other pieta I’ve seen, only heightening the pity we feel looking on at Mary’s sorrow.

So, if you are lucky, you can carry back with you into the city some of the stillness of the Cloisters, and until it wears off, hold onto the unmoving center of the universe.

 

There is little so depressing in the world as its conventionality. We are swamped by it, as if by a great sea wave.

Now, I don’t mean, when I say the world is conventional, that it is suburban, middle class or bourgeois. I am not merely talking about trim square lawns and grey-flannel suits. Those are conventional targets: Such things, in fact, are the conventional images of conventionality, and that gets me down just as much.

We need an unconventional view of what is conventional, or we may not notice the phenomenon at all.

And I’m not talking about conformity. That is another issue — one largely left over from the 1950s. You can see it discussed in rather conventional terms by many of the TV dramas from that “golden age” of television.

In the 1950s, there were so-called “non-conformists” who lived “unconventional lives” but they all dressed the same and they were just as conformist in their berets and turtlenecks as their elders in suits and ties. The same for hippies; the same for our goths, punks and homeboys.

There is nothing more boringly conventional than low-hanging shorts, a slogan T-shirt and a ballcap worn back-front. It is just as much a uniform as the grey-flannel suit.

I remember radio-storyteller Jean Shepherd complaining about this in the 1960s.

“If you really want to be unconventional,” he said, “wear a coal scuttle on your head.”

There is a tie between conventionality and conformity, but they are not the same thing. Conformity is acting the same as everyone else, so you don’t stand out.

Conventionalism is thinking the same as everyone else, and when you are conventional, you probably don’t even know it.

Conventions are not the province of any single social class, nation or nationality. They are a lazy habit of human thought. Conventions are things we accept without question as an accurate description of the way things are.

Songs are three minutes long. Men wear trousers; women wear skirts. Photographs are rectangular. Automobiles have four wheels. We eat with knives and forks.

Books open from the right. Stories have beginnings, middles and ends. Poetry rhymes. Brides wear white. Weeks have seven days.

All of these things are conventional; there is no obligation for them to be this way.

Some conventions serve useful purposes, such as having everyone drive on the same side of the road, but most are mere habits.

Most any widely held belief is conventional rather than actively considered. You could take any one and turn it on its head and make a convincing argument.

We believe modern medicine is good, yet it has helped cause the overpopulation of the planet. Death is part of a healthy life, after all. Perhaps we were better off in the long run without penicillin. It has not stopped suffering but only postponed it.

We talk about species being higher or lower on the “evolutionary ladder.” Yet, there is really no higher or lower; there is only difference.

That sense that human beings are the culmination of an evolutionary teleology is quite absurd. We need to evict the squatting convention that everything is ordered hierarchically.

The reason we rely on convention so much is that it makes our decisions for us and solves problems that otherwise would vex us continually.

Convention is therefore a labor-saving device.

But are labor-saving devices all that good, in the long run? That is another convention that bears inspection. Families were certainly more tightly bonded before the proliferation of labor-saving devices freed us from having to cooperate on chores.

Convention is habit and the problem with habits is that they dull us down and dim our awareness.

And that is why we should worry about it.

For you might ask, if we are happy with the conventional, why should we be forced into an unknown we are uncomfortable with? Why should not a painting be something pretty we hang over a sofa? Why shouldn’t we wear matching socks?

But if you begin to recognize the conventionality around you, you won’t think convention all that pleasant. You will see it as the enemy. You will see it as a form of death. It makes inert a portion of life that should be perpetually active.

Convention is a substitute for being alive. It is a false path that will lead you to the point that you wake up one day and realize you have not lived.

To be most alive is to be most aware. Convention is a sleeping pill.

My wife has a simple theology. As far as she is concerned, Ray Charles is God.

This isn’t an organized religion and she doesn’t attend services. But I know what she means: When you hear Ray Charles sing, you can easily be convinced that there is a kind of divinity making itself heard through his throat.

It isn’t, strictly speaking, his music that causes this reaction. Some of the songs he sings are as trivial as any other pop music, the arrangements just as kitschy, and his backup musicians are often the same ones that show up elsewhere when no divinity is present.

No, it is a quality in his voice that transcends the pop music he sings. It is as if all of humanity is trying to crowd through the narrow pipe of his trachea.

What you hear is pain, joy, weariness, enthusiasm, strength, vulnerability, death and birth, all at once.

Well-trained voices of opera singers are meant to sound effortless; they know how to ease the notes across their vocal cords and project them to the back of the house. With Charles, the rasp of his voice underlines how hard the music is working to get so much meaning through so small a tube.

I’m going on about one singer-deity, but I am myself a polytheist. There are a tiny handful of others working in pop culture who bring so much to their medium that they transcend it and reach the rank of high art.

You hear something of the same going on in the tenor of Willie Nelson. Now, I am not a country-Western fan. Mostly such music gives me the worms. But Nelson is something beyond the category and every note he utters seems rife with human life.

Others on my list include Billie Holiday, John Lee Hooker, and — yes, I’m serious about this — Jerry Lee Lewis.

In each, there is an authenticity in their voice that only gets more profound as they age.

You should hear the Killer bend “Somewhere Over the Rainbow” into a melancholy confession of regret. I doubt he could have managed that when he was a young Turk.

And Billie Holiday, in the year before she died, worn threadbare by heroin and alcohol, sang her “Don’t Explain” at New York’s Plaza Hotel with Duke Ellington and you can barely stand listening to her pain:

“Cry to hear folks chatter/ And I know you cheat./ But right and wrong don’t matter/ When you’re with me, sweet.”

The same authenticity — the sense that the joy of life depends on the pain and loss caused by death — shows in the guitar solos of B.B. King and the comedy of Richard Pryor.

In all these cases, the quality that transcends pop is soaked thoroughly into the sound. Like a hologram, which you can scissor into many pieces and each contains the whole image, you can slice up a Ray Charles song or a B.B. King guitar lick and the whole of humanity is in every sliver, complete and undiminished.

When you hear something so human, you recognize instantly its status as art. Most pop is merely commercial and no matter how catchy the tune, it comes and goes quickly, with no more lasting influence than yesterday’s newspaper.

In an age that likes to downplay the difference between high and low art, it is important to recognize the difference. Too often, anyone who makes that suggestion is accused of being a snob and an elitist.

But there is a difference between those things which entertain us and those which make us recognize and feel our own humanity, that open us up to the wider world of thoughts and emotions. And despite the reigning egalitarianism, the one has more value than the other.

A snob is someone who believes one thing is better than another for the wrong reasons. But what do you call it if you recognize the right reasons?

A snob believes that money or birth or style makes one form of art better than another. But it is not money or style that makes something better.

It is quality, authenticity, genuineness: Mozart at his best had it and Ray Charles at his best has it. Style has nothing to do with it.

Which is why Ray Charles could sing country and Western, and why Willie Nelson could sing “Stardust.”

For style is merely the vessel the humanity pours into. And it is the humanity that makes it art, not the style.

TV is an odd medium, that is, if it is a medium at all.

I have been trying to understand my dissatisfaction with the box, and come to terms with why, despite watching regularly, I feel so empty afterwards.

First of all, it isn’t a question of the quality of the programming. While it is unarguably true that most of the stuff on TV is dreck, there are examples of quality. The Simpsons, for instance — which I think will eventually be compared quite favorably with Shakespeare — or Booknotes on C-SPAN.

Yet, when I turn off the box, I feel like I have just eaten a boxcar of Cheez Doodles, and am more than a little queasy.

And I need a spiritual Bromo Seltzer.

One of the issues is, I believe, that TV is different from other media. If TV is a medium, it is one that is already twice-removed. That is, television isn’t, in itself, a medium for ideas or images — a conduit for content — but a medium of media.

That is, a play can be seen as a medium for words and ideas, but television is a medium for plays: What we talk about when we discuss television isn’t TV, but the plays, or discussions, or documentaries, or stand-up routines that are carried by TV.

In that sense, it is theater we are critiquing when we complain about bad sitcoms, not TV per se.

Almost anything that appears on TV is really some other medium, carried at the second remove, over the airwaves and into our houses.

So, complaining about the quality of TV shows isn’t really complaining about TV at all.

This second-handedness of television is one of the things that bothers me about it most.

I see the consequences of a TV-based culture constantly, when young people quote old sitcoms for wisdom — as if the Brady Bunch were relevant to anything.

And now, with the pervasiveness of the box in our culture, we look to the second-hand image in preference to the actuality: Nothing is accepted as real until it is validated by being shown on TV.

Think of a bunch of people at a bar in New York when a car crash occurs right outside: The customers watch the wreck on the TV over the bar, covered by local TV helicopter cameras, rather than look out the window. An image on TV now seems more real than the actuality.

Animals are what we see on TV nature programs. Police are what we see on Cops. And movies are what we see on HBO.

This second-handedness of television means that audiences often make no distinction between seeing a movie in a theater and seeing it on a 26-inch screen.

What is missing is the actuality of the event: For animals smell; their fur has a feel under the hand that TV cannot give us.

Cops spend a lot of time filling in paperwork and waiting for court appearances; TV requires the chase.

And seeing a movie includes the visual density of a projected image, which is gutted by the low-resolution of a TV screen.

We build our lives from our experience. If our experience is already second-hand, our lives cannot be fully realized. There is a “nowness” — an actuality and presence — to real experience that cannot be duplicated on the box. I worry that educators show videos of cows in class rather than taking their students on a class trip to a dairy. What you learn from the video is second hand. What you learn from an actual dairy enters that deep well of experience we can draw from for the rest of our lives. What is lost on TV is that 360-degreeness and three-dimensionality — the smell and grit — of reality.

It also, by its rapid image editing, cuts us off from the important possibility of learning through slow accumulation of detail. In TV, nothing is slow, and there is no detail, only a quick wash of effect.

So, if television cannot give us a meaningful experience, and is only a second-hand medium of media, giving us images of theater and music, what, essentially, is television?

What can we say about television that isn’t mistakenly critiquing the drama, or the discussion, or the music that is conducted through the box into our living rooms?

This, in addition to the medium’s second-handedness, is what really bothers me:

The actual TV-ness of the box can only accomplish two things.

TV can only make something look attractive, or make it look repellent.

The images that TV gives us cannot make political arguments, cannot discuss issues, cannot weigh difficult moral questions. That is the lesson we first widely learned during the Nixon-Kennedy television debates, when Kennedy smiled and Nixon glowered. Many listening to the debates on radio assumed Nixon had won. But on TV, Kennedy glowed and the box gleamed in response.

As a result, political campaigns now rely not on actual debate — no one who really understands the term believes the so-called TV debates are anything of the sort — but the presentation of images making “our guy” look heroic and “their guy” look like an oaf.

And this leads to the natural result of TV’s “attraction-repulsion” duality.

The only thing on television that naturally belongs there — as drama naturally belongs in the theater, or as political debate naturally belongs in the town hall — is advertising.

Television is ultimately the final and natural home of the commercial, that glossy dreamlike presentation of images of the libido, really well lit and set into a mythology of personal gratification.

In one way of defining the term, television is at its heart pornographic.

I don’t mean that it is filled with sexual imagery, but that the pictures on television short circuit complex reactions and substitute simple desire. Television is the ultimate “me see, me want” medium. It makes us want to possess those images it glamorizes. We are not meant to think about, feel deeply or discuss the ideas, but only to want the images. That is what I call pornographic.

TV suppresses the best in our natures and substitutes covetousness. We want the splash of the sports beverage, we want the wind rushing through our hair as we see the actor drive the SUV through the Tetons. We want the shiny hair, or the woman possessing the shiny hair.

In this, the programming is not substantially different from the commercials: We want the smiles and lifestyles of our TV Friends, and the clothes they wear on Sex and the City.

It is a world of pure fantasy, devoid of consequences, complexity or depth. It creates a world in which everything is an “it,” in the terms of Martin Buber.

That objectification of the world is the bottom line on pornography. And when we combine the problem of TV’s pornographic essence with its displacing actual experience, we wind up with a deep metaphysical tummy ache.

Which is why I really need a Bromo.

As I have become older, I have begun to think that the problem of color is primarily a linguistic one.

Color a problem? It seems like one of the clearest, most obvious of phenomena. We all see it: At least, we all stop at stop signs. We know red.

Or think we do.

Artists know about color, certainly. They know the primary colors of red, blue and yellow, and the secondary and tertiary colors they can mix.

Physicists know about color, too. They know about wavelengths of the electromagnetic spectrum and how one tiny segment of this huge megaband of waves can be perceived visually, from the longer wavelengths of red to the shorter, buzzier ones of blue.

But these two knowledges don’t agree. They are relativity and quantum mechanics.

Further, a printer will tell you that the primary colors are not red, blue and yellow, but cyan, magenta and yellow. And a television technician will tell you that the primary colors are blue, green and red.

What gives?

Perceptual psychologists and neuroscientists are still working on the problem of color. The first and most significant problem is that, realistically speaking, color does not exist. That is, what we see when we look at a tomato or an apple, that sensuous red we ascribe to the object, does not have an objective reality. It is a subjective additive that our brains give it so we may make sense of what we see. In evolutionary terms, we use color to know when fruit is ripe.

(Interestingly, it seems as if evolution continues to work in the human species and, as most people now buy food from markets rather than foraging for it, we may be losing our ability to distinguish reds and greens. The incidence of red-green color blindness is growing, and eventually, we may all share it.)

A simple view of how color vision works would seem to make sense of it all. In our eyes, on our retinas, are light-sensitive cells — we call them cones — that are respectively triggered by red, green and blue wavelengths of light, and those signals are transmitted to our brains, where they are synthesized into little color pictures of what we are looking at.

Unfortunately, this isn’t an accurate version of what happens.

First problem is that the cones are not discretely sensitive to red, green and blue. There is considerable overlap, as seen in this graph.

Second, the color we perceive is not always related to wavelength. Consider yellow. There is, of course, a yellow wavelength of light, and we see that wavelength because yellow light tickles both the green and red sensors in our eyes, and we blend them together in our brain to make yellow. The problem is that if something has no yellow wavelengths at all, but manages to tickle both the red and green sensors, we still see yellow. No yellow light at all, but still, we see yellow.

And consider magenta. It is a color that does not exist in the natural spectrum. There is no magenta wavelength of light. But if an object reflects both red and blue wavelengths of light, we are rewarded by the mental sensation of the hue we call magenta. No wavelength at all, but still, there is color.

So, color cannot simply be a mental recording of wavelength. Most of the colors we perceive are mixtures of other wavelengths, affording us the pleasant and often useful sensation of color, but without any strict accordance to the laws of physics.

What is more, current research on vision tells us that what we synthesize in our brains is not a twining of the three signals from the three types of cone, but rather a group of oppositions worked out by the brain.

Along with cones in the retina are the rods, which are tasked with the registration of light and dark — commonly called black and white. The electrical signals that are sent to our brain to be analyzed are, first, the opposition of lights and darks; second, the opposition of blues and yellows — which are the colors most other mammals work with — and third, the addition we got from our primate ancestors, the ability to analyze the opposition of red and green.

So, as we now think it to be, the signals sent to our brains give us black-white, blue-yellow, and red-green.
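The opponent-signal idea can be put in rough arithmetic terms. What follows is only a toy sketch, not the actual neural computation; the function name and the channel formulas are my own invention for illustration, assuming normalized long-, medium- and short-wavelength cone responses.

```python
# Toy sketch of opponent-process encoding.
# Illustrative only: the real retinal wiring is far more complex.

def opponent_channels(l, m, s):
    """Map rough long/medium/short cone responses (each 0-1)
    to the three opposition signals described above."""
    light_dark = (l + m + s) / 3       # black-white channel
    red_green = l - m                  # red vs. green opposition
    blue_yellow = s - (l + m) / 2      # blue vs. yellow opposition
    return light_dark, red_green, blue_yellow

# A stimulus exciting both the "red" and "green" cones, with no
# yellow wavelength present at all, still pushes the blue-yellow
# channel toward yellow (negative here) -- we see yellow anyway.
ld, rg, by = opponent_channels(0.9, 0.9, 0.1)
```

The same toy arithmetic shows why magenta needs no wavelength of its own: strong long- and short-cone responses with little medium-cone response drive the red-green and blue-yellow channels simultaneously.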

Perhaps, then, what we call primary colors should be black, white, blue, yellow, red and green. That would make sense.

They are all simple names for colors that have clear identities. Everyone knows what green is, or blue.

Or do they? Here’s where the linguistic part comes in.

In English, for instance, there are other color names that have a similar direct and clear determination of hue. Orange and pink, for instance. Brown and purple. Simple names for hues we recognize as having distinct identities. And just because we speak English, we take our name-markers for a simple one-to-one description of reality.

But hold on: Other languages organize colors differently.

Consider Russian, where what we call light blue has one name and what we call simply a darker shade of blue has another: goluboy and siniy. The distinction is the same as we make between red and pink. We hold them to be distinct colors, not merely shades of the same color. In Russian, that distinction is accorded to the blues.

Or take Japanese, where blues and greens are all covered by the single word, ao. There is the ao of the sky and the ao of the trees below it. It is all ao.

There are languages in which the surface reflectivity of an object changes its color name. We have that in English, where a metallic version of grey is called silver, and a version of yellow that maintains specular reflections is called gold.

“In certain languages there are names for colors that are descriptive in terms of surface, as a wet black or a dry black,” says painter Henry Leo Schoebel, whose paintings are all about the sheen of their surface.

“There is a big difference between a box merely painted black with glossy house paint, and a Japanese lacquer box. The lacquer is a blacker black.”

There are academic arguments going on all the time over whether the names of colors are universal or culturally distinct. I’m not getting into that, except to say that both seem to be true. Colors are universal in the sense that everyone knows red is red and would not confuse it with, say, blue. But when we say red, we don’t always mean the same thing.

“If one says ‘red’ and there are 50 people listening, it can be expected that there will be 50 different reds in their minds,” painter and color theorist Josef Albers once wrote.

The Zuni language classes yellow and orange together, which means that once a speaker has coded a color in language — say, to tell a friend what he has seen — the friend decodes the word into his own trove of experience and comes up with something quite different. It may be orange; it may be yellow. That is a distinction our language makes, but his does not.

Just as Russian separates siniy and goluboy where English does not.

The tomato is a whole lot closer to the orange end of the red spectrum, and the stop sign is closer to the magenta end. Yet we call both red, and if we tell a friend about something we have seen and say it is red, the friend will decode the term the same inexact way the Zuni friend decodes orange-yellow.

And outside the limits of language, color is something we know from its embodiment, not from its abstraction. That is why so many secondary names for tints and hues are actually the names of the items that bear those colors: lavender, fuchsia, turquoise, teal, olive, coral, puce, salmon.

And it’s why painters cannot use tubes of paint called “red,” “green,” or “blue,” but instead rely on vermilion, phthalo green or ultramarine. Pigments are not abstractions, but physical substances, and they differ in effect, hue and their properties of admixture. Mix blue and yellow to get green? Which blue? Which yellow?

This physicality of pigment also means that an artist’s colors don’t behave according to a neat color theory. Each pigment has its own idiosyncrasies and personality.

“For reds, I use acra violet, cadmium-red light, red oxide,” painter Anne Coe says. “Phthalo blue, ultramarine and cobalt for blues, cadmium-yellow light and medium and then yellow ocher.

“You do have to have a violet. It’s hard to mix a violet.

“And you can’t put black into a cadmium-yellow light: It turns green.”

So much for color theory.

In the end, colors are as individual as people, and any color theory is a compromise, fudging this or that for coherence. There is no theoretical certainty in color, and in the end, we have to admit that each of dozens — even thousands — of colors has its distinct identity and each pigment its distinct properties.

And so, I have given up on color theory.

What is the single greatest enemy of art?

What one thing more than any other manages to sabotage the efforts of the artist?

It’s not lack of money; it’s not the bourgeois tastes of the masses; it’s not cultural victimization.

The one great enemy of art is talent.

Well, maybe not talent, exactly, but the satisfaction of having talent and the willingness to settle for what talent gives you.

Being self-satisfied always makes the artist willing to settle for less.

I remember artist Frederick Sommer stating his case quite clearly: “Why would you ever do anything less well than you can?”

If you are going to attempt something, he says, you had better give it everything you have.

And talent simply isn’t everything you have.

Talent is like beauty; it comes with the genes. To rely on beauty to get you through the world is a shallow and unworthy existence. To rely on talent is equally unworthy.

Not that great artists aren’t talented. Talent is a gift, surely. But no artist ever produced anything great BECAUSE he had talent. In fact, many artists have had to work extremely hard to rise above their talent.

Talent, as I am using the word, is facility. It is the ease with which an artist — or poet or playwright or composer — can create something that looks like what society approves of as good art.

In the visual arts, this is often seen first in the ability to draw.

We think of Degas or Ingres or Picasso as great draftsmen, able to capture reality with the quick flick of a pencil, clean and unfussy.

But there are two things I want to say about that:

First, much of what passes for realistic drawing is in fact not realistic but conventional. We, the inheritors of the European traditions of art, have come to expect our art to LOOK a certain way, and when someone can produce that look, we mistake it for verisimilitude.

I don’t want to go into this too deeply here, because it will get me off the track.

Suffice it to say, good draftsmanship of this variety says more about the acculturation of the artist than about the artist’s engagement with the world.

And second, and more important, there have been great artists with little talent, at least, little of what we conventionally call talent.

I think of two in particular:

Paul Cezanne and Vincent van Gogh.

Both are among the greatest achievers Western art has ever known, and both did it despite having only middling talent. Consider Vincent’s drawing of a carpenter (above), from 1880. Almost childish.

One looks through van Gogh’s notebooks and looks in vain for facility. Nothing came easy for the Dutchman. The books are full of false starts and erasures. The pencil lines pile up on themselves in corrections and rethinkings.

One looks at Cezanne’s drawings and sees the work any moderately talented high school student could match, even exceed.

But the genius of both — indeed the genius of all great artists, even those like Degas who possessed talent out the wazoo — is their sense of commitment. They are committed unto death each time they essay a drawing.

It is always the depth of commitment that makes the artist. Talent helps, but talent is only a tool.

What do I mean by commitment?

I am talking about the ability to concentrate as if your life depended on it: To look at the world and steer your pencil as if you were defusing a bomb.

If the world falls away and only your task is real, you have made it to the first level.

But even that is only the first level.

Hey, we’re still only talking about drawing here. Drawings are wonderful — in many ways, I enjoy drawings more than I do paintings, just as I often enjoy symphony rehearsals more than concerts or dance rehearsals more than recitals.

What, then, takes us beyond the “preparatory drawing” and into the bigger, more important form?

AMBITION.

I don’t mean the worldly ambition of making money or reputation. Those are altogether unworthy ambitions, and rather small ambitions at that. Anyone with TALENT can achieve those ends.

No, by ambition, I mean the grand biting off more than you can chew. I mean always working at the outer edges of your talent, attempting to take off into the stratosphere.

When one looks at the artists who were truly great, and let’s name a few:

Besides van Gogh and Cezanne, there is Manet, Goya, Michelangelo, Raphael, Poussin, Picasso, Matisse, Rembrandt, Durer, Turner, Botticelli, Titian.

Each one of these had ambition to paint more than pretty pictures. Each attempted to wrestle with some aspect of reality and bring it into submission so we could see it, test it and comprehend it.

Art that does less, I have said before, is wallpaper.

So where does that leave talent?

Well, the same place as that other great bugaboo of art, “creativity.”

You have no idea how ill I get when I hear someone talking about creativity as if it were a good thing.

Creativity is, like talent, an excuse for laziness: an excuse to accept easy, slovenly or simple-minded art as “good enough.” It is not.

Creativity is almost always used as such an excuse when we hear it. It is one of those words that should immediately make you suspicious. People who really understand what is going on in art don’t rely on such a word. It is only for poseurs and dilettantes.

Creativity is the merest baby steps of art. It is surely nothing to be proud of. Anyone is capable of creativity. It is just looking for a new way to join two sticks.

If it isn’t joined to a critical mind that can then judge whether the new way these sticks are joined is or is not a BETTER way, it is worthless.

Sure, it can be fun. So can a crossword puzzle. But that don’t make it art.

Art is hard. If it isn’t, it isn’t worth doing.

If you are comfortable with what you are doing, it isn’t worth doing.

If you know HOW to do what you are doing, it isn’t worth doing.

That reminds me of another thing Frederick Sommer says: “I never read a book I understand. If I already understand it, why am I wasting my time chewing this stuff twice?”

We need to dive into those things we don’t understand and think and feel as hard as we can, making sense of them. Then we have accomplished something.

Creativity: I leave that to new-age wannabes, for whom nothing of real worth is possible.

Shall we find yet another popular bugaboo? How about spontaneity?

Did Milton create “Paradise Lost” spontaneously?

Real art comes as the result of great labor.

It is the highly polished and refined gem that is worked and reworked, thought through and re-thought through.

No great art comes spontaneously.

Think of the great Sumi paintings of Japan, that are made with a few deft strokes of brush and ink, with no erasings possible, no “redos.”

A great Zen painter can only produce such work after years of great labor. It doesn’t come without effort.

But go downtown to any poetry slam: You will find piles of really wretched poetry written by young people who think that every word they utter is sacred. That spontaneity is somehow Holy.

Jack Kerouac espoused this view. But his best books, especially “On the Road,” were rewritten heavily. Later in his career he started writing genuinely “spontaneous prose,” as he called it. And those books are awful.

His friend, Allen Ginsberg, likewise liked to say, “First thought, best thought.” But all his best writing, from “Howl” to “Kaddish” exists in variorum editions that show how much they were reworked and rewritten.

“First thought, best thought,” my ass.

The secret of great writing is rewriting, someone once said. And that is certainly correct. The really proper word doesn’t always come the first time round, and then the greater structure of a piece must often be carpentered and finessed.

Ask James Joyce, who spent seven years writing “Ulysses.”

I see it all around me in art galleries. Artists want a pat on the back, as they got when they were children and their mothers patted them for drawing such a nice doggie and horsie.

That is good for children. It is insufficient for working artists.

It is a struggle, and should be a struggle.

Art isn’t easy and it wasn’t meant to be. No human endeavor worth pursuing is easy.


Finding directions is a trial for some. My wife — and no, this isn’t a wife joke — has trouble understanding the compass points. We lived just a few blocks south of Northern Avenue.

“How can it be south?” she asked. “If it’s Northern Avenue, it must be north.”

Another time I asked her, “Which is further west, California or Hawaii?”

“From here?” she asked.

Such answers dazzle me, because I have a preternatural sense of direction. I don’t take credit for this; I was born this way, the same way some people are born with a talent for music or with a photogenic face. When such things were handed out, what I got was a sense of direction.

I have surprised even myself at times. When I was in third grade, the class took a bus trip to visit a nature preserve in northern New Jersey. Some 30 years later, revisiting my old haunts, I decided to find the nature preserve and drove right to it, no false turns or missed clues.

A few years ago, driving through Ontario, I saw a side street that looked as if it might lead to the motel my family had stayed at during a vacation we had made when I was in the 10th grade. I turned and found the motel, very distinct because in addition to the usual motel units, it had a two-story stucco house attached to it.

I have wondered many a time about this sense of direction and tried to figure out its mechanism. For many, when they take or give directions, they use a kind of linear description: Go three blocks, turn left at the church, go another two blocks and turn right at the gas station, continue for four miles and look for a house with a red SUV in the driveway.

Those people are always traveling in a straight line forward. They may take a turn at a landmark, but they think of themselves as continuing to face forward and move in that conceptual straight line.

For me, and those like me, however, there is a starting point and an ending point and they remain aligned, as with the stars, or on a map, and I can negotiate any number of turns or diversions and never lose track of that map pin stuck into that place. It is as if I can always “see” them there, no matter how many buildings or miles intervene.

The mechanism for this I have not previously much thought about, but now I have come across at least one aspect of what makes a sense of direction. It begins with proprioception, the nervous system’s inner sense of where the body is.

With eyes closed, I can touch my fingertips together. This is no great act; most anyone can do it. But doing it requires that I rely on my inner sense of where my body is. I know, spatially, where I extend to — i.e., the limits of my palpable being. Even without seeing, I can sense where my skin is and where I fill that sack of skin. We all, to a greater or lesser extent, have that sense.

It is true that our “sense of ourselves” isn’t always accurate; in fact, it is grossly distorted — hands, tongue and head feel much bigger than they actually are — at least by the measuring tape.

It is not really the body that is distorted, but the “space” our body fills. We move more precisely with those areas that seem disproportionately large. We can distinguish two very close points on the tongue yet cannot tell apart a much greater distance on our backs. It is as though we occupy, psychologically, a relativistic space — an Einsteinian universe with warps and curves in its substance. It is not just that my head feels larger than my back; it is that the space occupied by my head is larger — the increments of that space feel evenly distributed across my body, and since there are more of those increments in my head or hands, they feel correspondingly bigger. The squares in the graph paper I might use to chart my body are drawn with warped lines.

A similar sense of position is felt in a room. The space of the room exists almost as a solid or “anti-solid” in which I determine my latitude and longitude. I feel closer to one wall than another. This is not merely a measurable phenomenon — I feel it.

In fact, my “felt being” includes not only my proprioceptive sense of myself, but also my sense of the walls, almost as a projection of my skin. I can feel everything that goes on inside the room — accurately place proportions (is it 2/3 of the way across the room? 4/5? 5/6?). But what is outside the room is normally beyond thought, and unless actively thought about, does not exist.

Of course, I can go outside and look — but then I only trade one room for a larger one: the outdoors.

Sitting at my desk, I can throw a wadded-up first draft over my shoulder and have it land at the base of the far wall without looking — my proprioceptive sense of my position in the room tells me just how far to throw it.

The room, in a real sense, is just a part of me.

If I close my eyes and turn my head, I still know my orientation in the room; I still know the directions of the four walls.

This same sense of orientation exists when driving or parking a car. I know by pure feel how far behind me the car extends. Even though I cannot see the back bumper, I nevertheless don’t crunch into the car behind me when backing into a parking spot.

In my car, I have become a centaur, and my automotive rump is just as real as my carnal one.

I believe that this same proprioceptive sense, projected onto a vaster landscape, is the root of a good sense of direction.

A person with a good sense of direction translates such step-by-step instructions into spatial understanding. If he forgets the written instructions at home, he may still find the house with the red SUV.

The person without this sense of spatial orientation is lost without the step-by-step.

My brother has told me that when he is a passenger in a car, it is as though he entered an elevator and when the car finally stops, the door opens and he gets out. For me, with the spatial sense, I always feel, not only when the car turns right or left, not only how much off “straight” my internal gyroscope has been turned, but also how the space — the very large space — I am driving through has been altered, much the same way as I know — feel — my changed orientation in a room when I walk from one side to the other.

Just as your projected body limits change as you move from a small room to a larger one, so the land I feel oriented to changes as I enter different kinds of terrain, and as I leave territory I am familiar with and enter that which is new. In a new city, my territory — my personal space — can be very small indeed: a few square blocks. But at home, my personal orientation easily covers 30 miles or more. And accurately.

At times, traveling through Montana or Nebraska, I feel secure in a “felt space” of hundreds of miles.

This space that I feel can be just as distorted as my simple body sense. Often the nearest 200 yards seem the biggest, like my tongue. The direction I am going in seems larger and more clearly defined than the space at 10 o’clock or 2 o’clock — space that is useless for my travels.

At any given moment, I can point to New York or Lake Superior. I don’t usually have to think first about which direction is north and then imagine in my head a map of the U.S. and figure from that where the Big Apple is. I am always aware of where north is, and east or south or northwest. I point to Los Angeles — thousands of miles away — as automatically as if asked to point to the front yard of my home from its living room.

In a sense, the map of the U.S. exists constantly inside my head and I know, without actively concerning myself with it, where on that map I am, where that little arrow is with the tag: “You are here.”

That map is, as in Steven Wright’s joke, “life size.”

I can visualize it spreading out and covering the actual land. In a sense, I drive on that life-size map, and never have to fold it up and stuff it in the glove compartment.

Envoi: When European mapmakers first began orienting their maps with north at the top, it was a new convention. And they devised a neat little glyph, the compass rose, to designate the directions. For them, there was a four-way divide: North, East, South and West. It could be subdivided into northeast or southwest, and further refined into such arcane weather-forecast terms as east-northeast or south-southwest. A good glyph can include all these.

But I am reminded that not all cultures thought in terms of the four cardinal compass points. Many American Indian cultures had six directions, not four. They included North, East, South and West, but also Up and Down. Surely up and down deserve as much respect as north and south. They are as real, and work the same way as a directional framework, with ourselves always at the crossing of the moveable axes.

I am now in the habit of considering that there are really seven cardinal points, because while the original six compass points extend outward from the axis mundi of ourselves, there is the seventh, which is Inward. If, as I believe, all directions must needs reference our individual positions on the grid of the planet, the central point inward is as meaningful as those star rays outward from us. It is a two-way street.

So: North; South; East; West; Up; Down; and In — where it all happens.