It turns out that the ETS has a whole research division, including researchers in natural language processing who come up with stuff like e-rater and other machine essay graders, and that they publish about these systems.
According to the latest system description paper:
The feature set used with e-rater V.2 includes measures of grammar, usage, mechanics, style, organization, development, lexical complexity, and prompt-specific vocabulary usage.
E-rater is part of Criterion, a web-based service that provides students with instant scoring and feedback on their submitted essays. Criterion has a number of writing analysis tools whose outputs form the feature vector used by e-rater. The score is a simple weighted average of the feature values.
One noteworthy detail is that in determining the parameters for this model, e-rater eschews purely statistical machine learning (optimization) approaches in favor of allowing judgmental control, both to avoid unintentional skew and other undesirable statistical effects and to keep the system transparent, i.e. easier to understand and explain.
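The weighted-average scoring described above is simple enough to sketch directly. Here's a minimal illustration in Python: the feature names come from the paper's list, but the weights and feature values are entirely made up, standing in for the judgmentally set parameters that the real system uses.

```python
# Hypothetical sketch of e-rater V.2-style scoring: the final score is a
# weighted average over the feature set named in the system description.
# All weights and values below are invented for illustration.

FEATURES = [
    "grammar", "usage", "mechanics", "style", "organization",
    "development", "lexical_complexity", "prompt_specific_vocabulary",
]

# Judgmentally chosen weights (hypothetical), not statistically fitted.
WEIGHTS = {
    "grammar": 0.20, "usage": 0.10, "mechanics": 0.10, "style": 0.10,
    "organization": 0.20, "development": 0.10,
    "lexical_complexity": 0.10, "prompt_specific_vocabulary": 0.10,
}

def score(feature_values: dict[str, float]) -> float:
    """Weighted average of the feature values for one essay."""
    total_weight = sum(WEIGHTS[f] for f in FEATURES)
    weighted_sum = sum(WEIGHTS[f] * feature_values[f] for f in FEATURES)
    return weighted_sum / total_weight
```

One consequence of this model being a plain weighted average: the score moves linearly with each feature, so the features with the largest weights are the most profitable ones to inflate.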
It would be interesting to see how straightforward it is to game e-rater, given the above information and access to the implementation in Criterion.