A good rule of thumb is that people who complain about everyone else usually illustrate their own guilt more than that of others. A recent case in point is a tweet by Wharton professor Nina Strohminger that generated a lot of news coverage.
I asked Wharton students what they thought the average American worker makes per year and 25% of them thought it was over six figures. One of them thought it was $800k. Really not sure what to make of this (The real number is $45k)
It’s disappointing that a business school professor would ask such an ambiguous question. “Average” is usually taken as “mean,” but asking about “the average American” suggests a median. Does “worker” mean we should include only wage income, or is it asking about total income? And does “worker” include part-time and unpaid work or only full-time employed people in the regular labor force?
None of those interpretations give numbers very close to what Strohminger claims is the “real number” of $45,000. If I had to pick the most natural interpretation of the question, I would use the median earnings of full-time workers in 2021 (the most recent data available from the Bureau of Labor Statistics), $51,882, and then add 35% for a typical benefits package to get about $70,000.
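In code, that back-of-envelope calculation looks like this (the 35% benefits load is my rough allowance, not an official statistic):

```python
# Back-of-envelope: median full-time earnings plus a typical benefits load.
median_full_time_earnings = 51_882  # BLS median for full-time workers, 2021
benefits_load = 0.35                # rough allowance for benefits; my guess

total_compensation = median_full_time_earnings * (1 + benefits_load)
print(f"${total_compensation:,.0f}")  # -> $70,041
```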
Now, of course it’s possible that students who answered $100,000 were deluded, as Strohminger and most commentators seem to assume. It’s easy to explain surprising observations with “everyone but me is stupid,” but it rarely leads to learning. $100,000 is a reasonable guess at the mean household income in the US (the most recent available figure is $114,962, down from 2019 due to the pandemic and probably up quite a bit in 2021 due to stimulus and inflation). Asking students who gave high answers how they interpreted the question would seem more productive than trashing Wharton students on Twitter for not reading the professor’s mind.
I grant that $800,000 is hard to understand, but not for the reason I suspect most people think. It’s common knowledge that there are people with incomes in the billions, and a mean gets dragged up by such extreme values in a way a median does not. Even if you know the median worker’s salary is around $70,000, it’s not statistically implausible that the mean worker earns far more, even $800,000. You can’t estimate extreme values by looking at typical observations.
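A toy simulation makes the point; the numbers below are invented for illustration, not real income data:

```python
import numpy as np

# Invented, skewed population: a million earners clustered near a $70,000
# median, with 100 of them handed $5 billion incomes. The tiny tail drags
# the mean far above the median without moving the median at all.
rng = np.random.default_rng(0)
incomes = rng.lognormal(mean=np.log(70_000), sigma=0.5, size=1_000_000)
incomes[:100] = 5e9

print(f"median: ${np.median(incomes):,.0f}")  # still about $70,000
print(f"mean:   ${np.mean(incomes):,.0f}")    # roughly $580,000
```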
The problem with $800,000 is that anyone who knows US national income is in the neighborhood of $20 trillion should figure it could support only around 25 million workers earning $800,000, and there have to be far more than 25 million workers in a population of well over 300 million. My guess is that the person answering $800,000 wasn’t taking the question seriously, or didn’t have even ballpark ideas about US national income and population, or accidentally added a zero, or didn’t bother to do the math.
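That math, for the record, is one division and one comparison:

```python
# If mean pay really were $800,000, how many workers could US national
# income support? Round numbers from the paragraph above.
national_income = 20e12     # ~$20 trillion
claimed_mean_pay = 800_000  # the student's answer

implied_max_workers = national_income / claimed_mean_pay
print(f"{implied_max_workers:,.0f} workers at most")  # -> 25,000,000

# A population well over 300 million clearly has far more than 25 million
# workers, so $800,000 can't be the mean even if workers got every dollar.
```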
While I don’t think this experiment tells us anything about Wharton students’ knowledge of economic conditions, it does relate to two points I stress in my own classes. The first is that most students who do not support themselves, in my experience, have no idea how much their own family earns, nor how the money is allocated among taxes, rent, food and other expenses. Those who do know are embarrassed to discuss it, whether the numbers are high, middling or low. Students are far more open about their sex lives, drug use and health problems than about their family finances, and far more knowledgeable as well.
This presents problems when I’m advising students on career choices. How can a student choose between an offer of $40,000/year in San Francisco and $45,000 in Minneapolis (quadruple those numbers for Wharton MBAs) without some idea of what kind of living standard those pre-tax salaries will support? How can a student know if she needs a top professional salary to live in comfortable style, or if she can be happy on what public-interest jobs pay? How can college-educated citizens opine on economic policy without knowing how to relate money to living standards?
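Here is a sketch of the comparison I’m describing; the cost-of-living indices are made up purely for illustration, and the real exercise starts with actual rent, tax and price data:

```python
# Hypothetical offer comparison. The cost-of-living indices are invented
# for illustration; replace them with researched figures before deciding.
offers = {"San Francisco": 40_000, "Minneapolis": 45_000}
cost_of_living = {"San Francisco": 1.8, "Minneapolis": 1.0}  # made up

for city, salary in offers.items():
    adjusted = salary / cost_of_living[city]
    print(f"{city}: ${adjusted:,.0f} in Minneapolis purchasing power")
```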
The other issue is that our education system puts too much emphasis on looking things up rather than thinking. Tests generally reward students either for parroting the approved “real” answers, as Strohminger prefers, or for expressing some internal subjective truth. You can’t make profitable trades based on things anyone can look up, and your internal subjective truth is useless. You’re not likely to change the world or make breakthroughs that way in non-financial fields either. Things you look up are often wrong, and nearly always rely on assumptions and definitions you don’t know. Thinking for yourself, and then gathering reliable data you do understand, produces much more robust and useful knowledge.
The great physicist Enrico Fermi was famous for posing his students “Fermi problems.” A classic example is, “How many piano tuners are there in Chicago?” A first reaction is often, “I have no idea,” but a little reflection shows that you do in fact have some ability to hazard a plausible range. You can reason from the population of Chicago, the fraction of homes with pianos, how often pianos are tuned, and the number of pianos one tuner can service. You can think about whether you know or have ever met a piano tuner. You can recall whether you’ve ever seen piano-tuning businesses, or whether Home Depot has an aisle for piano-tuning supplies.
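That chain of reasoning fits in a few lines; every input below is a deliberate guess, meant to be right within a factor of a few, and the structure matters more than the particular numbers:

```python
# Fermi estimate: piano tuners in the Chicago area. All inputs are guesses,
# not looked-up statistics.
metro_population = 9_000_000      # Chicago metro area, roughly
people_per_household = 2.5
piano_ownership_rate = 1 / 20     # guess: 1 in 20 households has a piano
tunings_per_piano_per_year = 1    # guess: tuned about once a year

pianos = metro_population / people_per_household * piano_ownership_rate
tunings_needed = pianos * tunings_per_piano_per_year

tunings_per_tuner_per_year = 4 * 250  # ~4 pianos a day, ~250 working days
tuners = tunings_needed / tunings_per_tuner_per_year
print(f"~{tuners:.0f} piano tuners")  # -> ~180, on the order of 100
```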
Once you start thinking, it’s addictive. You realize you need some definitions: only full-time professional tuners, or should you include part-timers and people who tune as part of other duties? Tuners who live in Chicago, or those who tune there? Chicago city limits or the metropolitan area? How about harpsichords and clavichords? Do electric keyboards need to be tuned? These kinds of questions can lead to specific research that is much more reliable than simply typing “number piano tuners Chicago” into Google. (The proprietary trading firm Jane Street invented a game built on Fermi-type questions, the Estimathon, which I like to use in the classroom.)
There aren’t many trade theses that depend on the number of piano tuners in Chicago, nor is it a key metric in any other field I can think of. But practice in putting plausible estimation ranges on things, using what you actually know or can determine accurately, and in reasoning rather than parroting or introspecting, should be a big part of education if we want graduates who think.
My guess is that most of the answers Strohminger got were reasonable interpretations of her ambiguous question. It’s not that Wharton students need better information; they need more thoughtful professors, ones who learn from their students rather than making fun of them on Twitter. But I am not surprised she also got some answers, like $800,000, that are wildly incorrect under any interpretation. I suspect she would get equally or more absurd answers if she asked her students to estimate mean global temperature, the amount of water used in a typical shower, or average grocery store profit margins. Even elite schools like Wharton typically don’t teach their students to reason from what they observe rather than looking things up or expressing what they feel.