Not to be outdone by the ‘hard sciences,’ since the late 19th century psychology has been dominated by positivism–the belief that the mind can be understood by science, using experiment and measurement to prove theories, which are then accepted as laws.
Unlike medically trained psychiatrists who are expected to relieve mental illnesses, psychologists grew their discipline in academia, where ‘scientific’ was the essential requirement for respected and authoritative conference papers and journal publications–the premium fuel of successful academic careers.
But doing science in academia requires money for psychologists no less than for physicists and chemists and geologists. World War II brought an unprecedented infusion of money into university science departments and research laboratories, while the industrial and technological expansion that followed ensured their growth as producers of graduate students and post-docs in mathematics, physics, and other natural sciences. Psychology was able to enjoy a special niche, thanks to the military services, which scrambled to populate their ranks with mentally as well as physically fit inductees.
While historians, political scientists, and philosophers had to content themselves with survival in liberal arts colleges and more modest resources in university graduate schools, psychologists were able to take advantage of a post-War flood of shareholder and taxpayer dollars hunting for sure ways to increase productivity in both private and public sector organizations struggling with the managerial demands of a rapidly expanded economy and federal sector. Increasing productivity required, among other things, skill in identifying and bending compliant subordinates to the various tasks required of them.
Enter the human resources (HR) management enterprise, joined by an army of consultants feeding on human resource managers’ desire for ‘expertise’ to guarantee their own success. Happily for HR managers, psychologists–scientists of human behavior–were ready. Thus among the many memorable cultural phenomena of the 1960s was the successful introduction into organizational management of schemes for the classification, measurement, and training of personalities. If you can’t define and measure it, you can’t manage it, went (and still goes) the mantra.
Undoubtedly the most familiar of these is the Myers-Briggs personality typology and test developed by Katharine Cook Briggs and Isabel Briggs Myers in the 1940s and 1950s. Loosely derived from Carl Gustav Jung’s Psychological Types (1923), Myers-Briggs posits four sets of opposite personality ‘preferences’ (Extraversion [sic]/Introversion, Sensing/Intuition, Thinking/Feeling, and Judgment/Perception), which in various combinations can produce 16 different personality types. The Educational Testing Service began to market the Myers-Briggs Type Indicator manual in 1962, after which it was widely adopted in the human resources and management training industries. (Today Internet “match-making” sites such as eHarmony.com have enabled many more millions to sample the new-old ‘science’ of characterizing personalities.)
One has to experience the Myers-Briggs test to appreciate its fallacious appeal. Relying as it does on individuals’ self-characterizations at a given moment in time, it is certain to produce distortions, beginning with the test subject’s own probably incomplete self-image. Nor can one verify the test results under identical (constant) and controlled conditions. If the test were only a waste of tax- and shareholder dollars, that might be bad enough. But it can also harm both the individuals and organizations who believe in and use it.
Individuals might accept as accurate the test’s categorization of themselves, and thus feel obliged to ‘correct’ their own otherwise spontaneous, natural, and healthy attitudes and behavior. Meanwhile, since the test isolates intellectual activity as a ‘preference’ of introverts who prefer to spend time alone, critically thoughtful persons so desperately needed at corporate conference tables are less likely to be welcome in organizations that venerate ‘team players’ and fear gadflies.
Another product of the positivist ‘measure and manage’ school of human behavior is the test of ‘creativity.’ Creativity testing also emerged on the educational scene during the 1960s; its most notable proponent was Ellis Paul Torrance. ‘Creativity’ shares with ‘critical thinking skills’ the status of that elusive thing that everybody reveres, especially in educational and organizational cultures uneasy with the challenge of discriminating between good ideas and dumb ideas, and that confuse information with knowledge.
If we can decompose creativity into a standardized and measurable set of tasks, then scientific experts can tell us who and what is ‘creative,’ and develop proper techniques for promoting creativity. Enthusiasm for personality and creativity tests is (and remains) part of a larger cultural current that prefers technique and process driven solutions–which can be learned from a manual–to pesky examinations of content, which might require us to tangle with more knowledgeable or better educated persons, the enemies of ordinary people like ourselves.
As recently reported in Newsweek magazine, “the accepted definition of creativity is production of something original and useful, and that’s what’s reflected in the tests.” Creativity, in this particular version of positivism, is necessarily defined to exclude the arts, which seem to resist scientific capture. Instead, creativity consists of (1) divergent thinking, or “generating many unique ideas,” and (2) convergent thinking, “or combining those ideas into the best result.” Never mind the breathtaking hubris and ignorance of human history necessary to suppose that you or I on a given day, and at a given place, can recognize the truly original, “many unique ideas,” and appreciate the combination of ideas that produce “the best result.”
Redefining creativity to exclude the arts recalls Richard Hofstadter’s classic account of the hostility to the arts, “high brow” literature, and music that has ebbed and flowed in this country for nearly two hundred years. In the education establishment in particular, advocates of democratic and useful learning argued that the study of fine art, classical music, and literature was time wasted on the impractical diversions of elites ill suited to adapt, not to mention contribute, to America’s commercial culture.
This, however, should come as no surprise today in a society whose prevailing preoccupations preclude an appreciation of the necessity of varied languages to the adequate furnishing of an educated mind–not to mention negotiating a global society. No less than “foreign” languages, the visual arts, music, and the finest writing are historically developed languages that enable us to share in the totality of human experience and possibility over time. The mentality that hustles creativity tests has little or nothing to teach us about the mystery, miracle and fortune–the concatenations of events, forces, and sheer timing–that shape the todays and tomorrows of all of us.
To learn the content and power of human creativity, absorb the beauty of right proportions from the architecture of William Byrd’s “Westover” (built ca. 1730); hear the depth of grief and dawning of hope in Gregorio Allegri’s (1582-1652) Latin setting of Psalm 51, “Miserere Mei Deus;” or begin to glimpse the infinite complexity of human history as it teeters ever on the cusp of chaos in Leo Tolstoy’s War and Peace (1863-69). Thanks to one of the greatest realizations of Francis Bacon’s promise–the democratizing Internet–all of us can hear Allegri’s celestial music performed by many varied voices, or see the perfect repose of the design of Westover. For War and Peace, however, I recommend picking up the book.
For all of this we need no handbooks, no standardized tests. For all of this there are no meaningful measurements, other than the numbers of lives enriched by true human creativity, and thus more capable of enriching the lives of others.
Text Sources: Available on request.