The Promise and Futility of Lit Journal Sabermetrics


I need to face facts: some literary journals are just out of my league. While it’s a bummer to admit, I’m not getting anything published in The Paris Review or The New Yorker any time soon. I encourage my undergraduates to submit their fiction to undergrad journals so that they compete with writers of similar experience. After all, undergrad journals exist because it isn’t fair for someone who has been writing seriously for perhaps one year to compete for a spot in a magazine against writers who have been publishing for a decade or more. So here’s the dilemma: I may not be an undergraduate, but I haven’t been publishing for a decade either. Where, then, should I submit my work? Every aspiring writer faces this conundrum, and our instincts tell us the solution lies with ranking journals. If only we could graph every journal along a continuum from the hardest to get published in to the easiest. Then we could start at the bottom and work our way up, right?


It seems like common sense. However, before you send an email explaining why I am naïve or misguided or stupid, I admit that any actual attempt to rank journals is inherently problematic. The US News & World Report rankings of colleges and universities are bullshit. College football rankings are bullshit. Rankings, in general, are bullshit. Nevertheless, for an aspiring writer, journal rankings could still be useful if only to save us the time and money of submitting to journals where the probability of acceptance is extremely low.


Duotrope provides many statistics about acceptance rates and other factors, but the information used to create those statistics is self-reported by Duotrope users, a self-selected sample that any first-year statistics student would recognize as unscientific. Now that Duotrope is charging members for its services, fewer writers will bother to report, and the data will be even more flawed. (In the interest of full disclosure, I got a C in stats as an undergrad. I appreciate it conceptually, but I’m terrible at math. The class was also online—the first and only time I’ve made that mistake.)


On his blog Perpetual Folly, Cliff Garstang ranks journals according to Pushcart Prize wins over the last decade. Borrowing this idea, John Fox on the blog Bookfox ranks journals according to how many of their stories have appeared in The Best American Short Stories anthologies. Counting prize wins and anthology appearances is probably the most empirical way to rank journals. (It’s way more objective than college football team rankings!) However, if we compare these two lists side by side, we see only a slight overlap at the very top—Ploughshares and Tin House. Otherwise the lists diverge in interesting ways that make averaging them together or drawing confident conclusions difficult. We need more data.
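
For what it’s worth, measuring the overlap between two such lists is the easy part; interpreting it is not. Here’s a rough sketch of the comparison, using invented stand-in names and positions rather than the actual Perpetual Folly or Bookfox rankings:

```python
# Rough sketch: comparing two ranked lists of journals.
# The journal names and positions are invented for illustration; they are
# not the actual Perpetual Folly or Bookfox rankings.

pushcart_top = ["Ploughshares", "Tin House", "Journal A", "Journal B", "Journal C"]
bass_top = ["Tin House", "Ploughshares", "Journal D", "Journal C", "Journal E"]

shared = set(pushcart_top) & set(bass_top)
overlap = len(shared) / len(set(pushcart_top) | set(bass_top))
print(f"{overlap:.0%} of the journals appear on both lists")

# For journals on both lists, how far apart do the two rankings place them?
for journal in sorted(shared):
    gap = abs(pushcart_top.index(journal) - bass_top.index(journal))
    print(f"{journal}: positions differ by {gap}")
```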


Marc Watkins has dedicated an entire blog to ranking journals based upon how many of their stories win prizes or appear in the Best American series, New Stories from the South, and other anthologies. Appropriately, the blog is called The Rankings. It is an interesting project, but Marc does not aggregate his findings—he does not take the results from each prize and anthology and average them together.
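
Mechanically, the aggregation wouldn’t be hard. Here’s a minimal sketch of what averaging several rank lists might look like, with invented placeholder data rather than Watkins’s actual results:

```python
from collections import defaultdict

# Invented data: each prize or anthology yields an ordered list of journals,
# best first. These are placeholders, not Watkins's actual findings.
rankings = {
    "Pushcart": ["Ploughshares", "Tin House", "Journal A"],
    "Best American": ["Tin House", "Ploughshares", "Journal B"],
    "New Stories from the South": ["Journal A", "Ploughshares", "Journal B"],
}

positions = defaultdict(list)
for source, ranked in rankings.items():
    for place, journal in enumerate(ranked, start=1):
        positions[journal].append(place)

# Average a journal's rank across the lists it actually appears on.
# (Skipping the lists a journal misses flatters small journals; that
# choice is itself debatable.)
aggregate = sorted(positions.items(), key=lambda kv: sum(kv[1]) / len(kv[1]))
for journal, places in aggregate:
    print(journal, round(sum(places) / len(places), 2))
```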


During the 2012 presidential election, we learned that a single public opinion poll, no matter how rigorously scientific it’s supposed to be, can be inaccurate. When so many polls predicted the election wrong, it was Nate Silver, aggregating scores of poll results, who got it right. So if we look at any single ranking produced by Garstang, Fox, or Watkins, the results will be misleading.


Poet Jeffery Bahr has attempted a Nate Silver-esque ranking of journals by assigning point values to categories such as the size of a publication’s press run, the ratio of submissions each journal accepts, the number of appearances in Best American Poetry, Pushcarts, and other anthologies and awards, and the poets regularly represented in recent issues. I would be interested to hear a statistician’s analysis of Bahr’s methods. And Bahr is only looking at poetry. Still, it is an interesting list (and an undoubtedly time-consuming endeavor), and one that is likely useful for fiction and CNF writers too. After all, if a journal’s barrier to entry is high for a poet, it’s probably not easy for a fiction writer either.
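
I don’t know Bahr’s actual categories, weights, or formula, but the general shape of a point-value system is easy to imagine. A sketch, with made-up weights standing in for whatever Bahr really uses:

```python
# Invented categories and weights; I don't know Bahr's actual point values.
WEIGHTS = {
    "press_run": 0.001,          # points per copy printed
    "acceptance_ratio": -50.0,   # a lower acceptance rate earns a higher score
    "anthology_appearances": 5.0,
    "award_wins": 10.0,
}

def score(journal_stats: dict) -> float:
    """Weighted sum of a journal's statistics."""
    return sum(weight * journal_stats.get(category, 0)
               for category, weight in WEIGHTS.items())

# Hypothetical journal: 3,000-copy press run, 1% acceptance rate,
# four anthology appearances, two award wins.
print(score({"press_run": 3000, "acceptance_ratio": 0.01,
             "anthology_appearances": 4, "award_wins": 2}))
```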

Here’s the rub: let’s imagine that we find a sabermetrics wizard and apply his talents to lit journals. How do we test the resulting model? Nate Silver’s model was tested on election night. The Oakland Athletics’ model was tested every game the A’s played. (What, you didn’t see Moneyball?) But there is no way to test the results of a lit journal ranking. No matter what we do, it’s not science.


Ultimately, I think what writers want to know is which journals have a high barrier to entry, which have a moderate barrier to entry, and which have a relatively low barrier to entry. Publication is validation of our work. Moving up the ladder is evidence we are improving. That’s all we want. So far, no one has ranked journals based upon the number of pieces a journal publishes by first-time authors, relative to the total number of pieces it publishes and the total number of submissions it receives. That would be the simplest way to calculate a journal’s barrier to entry. Everything else that constitutes the conventional wisdom of writers, editors, publishers, and agents—prize wins, anthology appearances, et cetera—is subjective and flawed.
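
If journals published those three numbers, the arithmetic would be trivial. Here is one plausible reading of that ratio, sketched with made-up figures, since no journal reports all three:

```python
def barrier_to_entry(first_time_pieces: int, total_pieces: int,
                     total_submissions: int) -> float:
    """Chance that a given submission becomes a first-time author's publication.

    Lower means a higher barrier to entry. This is one plausible reading of
    the ratio described above, not an established metric.
    """
    acceptance_rate = total_pieces / total_submissions
    first_timer_share = first_time_pieces / total_pieces
    return acceptance_rate * first_timer_share

# Made-up numbers: 40 pieces published from 8,000 submissions, 3 by first-timers.
print(f"{barrier_to_entry(3, 40, 8000):.4%}")
```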

— 3 March 2013