While the technology has been developed primarily by companies in proprietary settings, there has been a new focus on improving it through open-source platforms. New players in the market, such as the startup venture LightSide and edX, the nonprofit enterprise started by Harvard University and the Massachusetts Institute of Technology, are openly sharing their research. Last year, the William and Flora Hewlett Foundation sponsored an open-source competition to spur innovation in automated writing assessments that attracted commercial vendors and teams of scientists from around the world. (The Hewlett Foundation supports coverage of "deeper learning" issues in Education Week.)
Consider first a summative purpose for a writing assignment, that is, an assessment of student proficiency. Here AES can help overcome some of the weaknesses associated with human scoring. When humans score only for a summative purpose, such as essays written for a final exam, they tend to score quickly and often focus on superficial features that may be mere proxies for quality. AES can generate a second score for each essay when it is not practical to ask a second teacher to rate them, and two data points are generally better than one. This can help avoid many of the known issues with teacher scoring. One well-documented issue is drift: as a teacher works through a set of essays, the scoring tends to shift over time, so a paper at the end of the set may get a different score than it would have received at the beginning. Good teachers often go back and compare the first few essays they scored with the last few to make sure their ratings have remained consistent.
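The drift check described above, comparing scores given early in a session with those given late, can be sketched in a few lines of code. This is only an illustration with hypothetical scores, not any vendor's actual method; the function name and the five-essay comparison window are assumptions made for the example.

```python
# Illustrative sketch of rater-drift detection (hypothetical data and
# window size). Positive result: later scores ran higher; negative:
# the rater drifted toward harsher scoring.

def score_drift(scores, window=5):
    """Mean of the last `window` scores minus mean of the first `window`."""
    first = scores[:window]
    last = scores[-window:]
    return sum(last) / len(last) - sum(first) / len(first)

# Hypothetical scores a teacher assigned, in the order the essays were read.
session = [4, 4, 3, 4, 4, 3, 3, 3, 3, 2, 3, 2, 3, 2, 2]
print(score_drift(session))  # negative: scoring grew harsher over the set
```

In practice a second score from an AES engine serves a similar consistency check: large disagreements between the human and machine scores can flag essays for re-reading.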