Our friend Julie in the comments emailed me to ask my opinion of a particular editing software package. Because my opinion is less than glowing, I won't name the software. It operates much like all the editing software I've seen, and so my comments could apply to several platforms.
First, I must disclose that I'm biased against the notion that content editing can be performed by mechanical analysis. How can any software analyze for strength and consistency of character? For relative emotional impact of premise and climax? For a reader's potential ability to bond with a character? Can a computer identify theme, motif, and symbol? If it works by analyzing text, how can it analyze subtext?
I would never rely upon software for content editing.
But just for kicks, and just to be sure I wasn't prejudiced against something that might work, I downloaded the software and ran one of my manuscripts through the machine. I used a manuscript under contract with my company, one I know inside out and upside down. I chose this because I wanted a clear understanding of the story and narrative so that the sample analysis would be meaningful. (And no, I won't tell you which manuscript it was. The only relevant point is that I am thoroughly acquainted with it.)
Here's what I learned:
1. How many words per sentence.
I did a spot check of its counting and found one error, which came where a numeral was used in the text. Not a big deal, I think, but worth mentioning.
Because it presented the counts in the same order that the sentences appear in the manuscript, this function might be useful for showing where the text might be rhythmically monotonous. Where there was a string of four sentences all with six words each, I checked the manuscript again. The sentences were fine -- two standard SVO constructions, one fragment, and one with an introductory prepositional phrase. Rhythm depends upon more than mere word counts, but still I can see where this tool might be useful.
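For the curious, this part of the report amounts to a per-sentence word count in manuscript order, which is simple enough to sketch yourself. Here's a rough illustration in Python (my own sketch, not the vendor's method; the naive sentence split will stumble on abbreviations and dialogue):

```python
import re

def words_per_sentence(text):
    # Naive split on ., !, or ? followed by whitespace; a real manuscript
    # (with abbreviations, ellipses, quoted dialogue) needs smarter handling.
    sentences = [s.strip() for s in re.split(r'(?<=[.!?])\s+', text) if s.strip()]
    return [(s, len(s.split())) for s in sentences]

sample = "He ran. She followed him quickly. Why? Because the door was open."
for sentence, count in words_per_sentence(sample):
    print(count, sentence)
```

Scanning the resulting counts in order is exactly how you'd spot a rhythmically monotonous stretch -- a run of identical numbers is your cue to reread that passage aloud.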
2. Flagging single word repetitions.
Again, in theory this could be a useful tool. In practice, it has its limits. It flagged the character names as overused, and even reported the exact number of usages to eliminate in order to overcome this objection. Strangely, it did not object to pronouns, so I ran a search in the original manuscript and found that pronouns outnumbered character names by a factor of more than ten. I don't know what to make of that. Perhaps the machine doesn't like proper nouns? (Worth noting: I had to use Word to count the pronouns. The fancy editing software doesn't let you select which terms are count-worthy.)
The software didn't let me choose its parameters, so it generated some useless data, such as the number of contractions in the manuscript. It was unable to distinguish legitimate uses of the past progressive, lumping all the "was walking" and "were kissing" moments in with the simple past conjugations of to be. Likewise, it counted words ending in -ing without distinguishing between progressive-tense participles and present participial phrases. None of that is of much value.
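Under the hood, this kind of repetition report is just a frequency count, and you can do it yourself with the one feature the software withheld: choosing which terms count. A hedged sketch (mine, not the software's):

```python
import re
from collections import Counter

def term_counts(text, terms):
    # Count only the terms the user cares about (case-insensitive),
    # rather than dumping every contraction and -ing word in the manuscript.
    words = re.findall(r"[A-Za-z']+", text.lower())
    counts = Counter(words)
    return {t: counts[t.lower()] for t in terms}

sample = "She said he said. He was walking while she was kissing him."
print(term_counts(sample, ["said", "was", "he", "she"]))
```

Because you supply the term list, pronouns, character names, or anything else can be counted on equal footing -- no mysterious preference for proper nouns.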
3. Identifying overused phrases.
The software claimed to be able to identify overused phrases. This part of the report, however, did little more than flag certain adverbs like when and where and then and after. The author earned a hearty "Good!" in this part of the report, but I'm not sure why. I suppose it doesn't like adverb phrases.
The other part of this section of the report listed several nouns that it claimed were overused. It seemed odd to include these in the part that was supposed to identify repetitive phrases -- phrases have more than one word in them, after all. In any case, the nouns it flagged as "repetitive phrases" were appropriately used. This part of the report seems to have little value.
4. Dialogue tags.
The machine had no difficulty scanning the document for the word said. It picked up on some synonyms such as muttered, asked, blurted, and shouted, but missed hissed and snarled. (Alicia, make of that what you will!) It did not identify beats. I routinely strip tags during line edits, sometimes converting them to beats and sometimes eliminating them. This function might be useful if you were doing a search-and-destroy on tags, but there's a built-in option in Word that can do that already.
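If a search-and-destroy on tags is all you're after, a short script does it too. This sketch (my own; the verb list is deliberately short and easy to extend) reports the line number of every tag verb it finds, including the hissed and snarled the software missed:

```python
import re

# Tag verbs to hunt for; extend this list to taste.
TAG_PATTERN = re.compile(
    r'\b(said|asked|muttered|blurted|shouted|hissed|snarled)\b',
    re.IGNORECASE,
)

def find_tag_verbs(lines):
    """Return (line_number, verb) pairs for every tag verb found."""
    hits = []
    for num, line in enumerate(lines, start=1):
        for match in TAG_PATTERN.finditer(line):
            hits.append((num, match.group(1).lower()))
    return hits

manuscript = [
    '"Come in," she said.',
    '"Why?" he hissed.',
]
print(find_tag_verbs(manuscript))
```

Note that, like the software, this only finds tags; converting them to beats (or deciding which to cut) is still the human editor's call.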
5. Misused words.
I expected this section to identify misused words. Silly me! What it did was flag every word that has a homophone -- to/too/two, pearl/purl, and so on. It left it to me to determine whether the correct one had been chosen.
6. Spelling errors.
This baffled me. We had no spelling errors according to this report. The manuscript is set in an alternate world, complete with made-up nouns, and none of these triggered a spelling error. Word's built-in spellchecker went cuckoo over this same manuscript, but the special editing software blew right past everything. Makes me wonder if I somehow accidentally turned this function off.
And that was it. No pretense at literary analysis, just a simple word-counting program that counts what it thinks is important. You could potentially use this software to locate certain words you want to change, but I don't see why you would spend money for it. Word (and, I suspect, WordPerfect and other word processors) already lets you do this easily.
Here's how. The details vary a little depending on which version of Word you're running, but the basic process is the same.

1. Open the dialog box for the "replace" function.
2. In the "find" field, type something you want to flag (said, ing, ly, etc.).
3. In the "replace" field, type the exact same word.
4. While the cursor is still in the "replace" field, click the "More" button.
5. Click "Format" and select "Font" from the menu.
6. Select a nice bright font color, like red.
7. Click "OK," and then "Replace All."
Boom. You just flagged your word. Repeat as needed for every word you want to flag. And then, when you're going through your manuscript for that pesky manual content editing, you won't be able to avoid all your present participles or saids or -ly adverbs.
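And if you'd rather not click through the dialog box for each term, the same flagging can be scripted. Here's a rough sketch in Python (my own, working on plain text; it marks hits with >>...<< rather than red, since font color lives in the .docx file, not in a plain-text script):

```python
import re

def flag_terms(text, terms):
    # Wrap every whole-word hit in >>...<< so it can't be missed on a read-through.
    # Note: this matches whole words; to catch suffixes like -ly,
    # use a pattern such as r'\w+ly\b' instead.
    pattern = re.compile(
        r'\b(%s)\b' % '|'.join(map(re.escape, terms)),
        re.IGNORECASE,
    )
    return pattern.sub(r'>>\1<<', text)

sample = "He said it softly, walking slowly toward the door."
print(flag_terms(sample, ["said", "softly", "slowly"]))
```

Either way -- red font or >>markers<< -- the point is the same: the machine can find the words, but deciding what to do with them is still content editing, and that part stays manual.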