This week, our readings focused on assessing multimodal texts. It’s interesting to move from discussing the influence and creation of multimodal texts to discussing how we should assess them. Before these readings, I would have included program- or software-based categories on multimodal assignment rubrics. For example, a rubric for a Prezi presentation would have included a category such as “use of Prezi design strategies” with subcategories like “zoom, color, text, and multimedia.” Now, I know better.
Borton and Huot (2007) include example criteria for assessing multimodal compositions, including purpose, audience, tone, idea, transition, relevancy, and context (101). What I notice about all of their proposed criteria is that they are global concerns rather than local ones. By this logic, I should not include criteria tied to just one piece of software or platform. This benefits both students and teachers: students do not have to worry about mastering Prezi to get a good grade on their presentations, and the teacher isn’t nitpicking students’ design choices. When assessed more globally, students gain a larger sense of the rhetorical context of their multimodal projects. They are less focused on mastering Prezi for a grade and more focused on the goals of the assignment. Borton and Huot call this “instructive assessment” (102).
What will I take away from these readings? I’m revising my multimodal assessment practices to reflect both formative and instructive assessment, and tying that revision to broader multimodal learning opportunities.