1.23.13 – Multimodality

How do we conceptualize multimodality in a class about computers and writing? I think back to those learning style tests that I took in elementary school, which helpfully labeled students as auditory learners! tactile learners! visual learners! and the like. These tests are interesting in that they set up modes of learning as dichotomous: you are a visual OR an auditory learner. Not both. I think that I (in my unconscious heart-of-hearts) haven't broken from this bubble-test frame: I see modalities as distinct, fragmented ways of receiving, producing, and experiencing information. Additionally, I think that I see written, printed text as somehow mode-less. As if such pages are blank, and, because they provide no audio, tactile, or visual (by way of pictures) stimulation, they just don't count all that much when we consider the importance of providing multiple points of access to information. I realize that this oversimplifies what texts can and do do, but I'm just relaying the non-analytical reactions my brain has to this term.

But what do other people think?

This is how to teach using multimodal strategies;

This is what your kid’s bedroom looks like if it’s multimodal;

There are multimodal medical imaging technologies;

Merriam-Webster (oh so helpfully) adds: “having or involving several modes, modalities, or maxima”;

Here’s an interesting use of the term recorded by the OED: 1993, Fort Collins (Colorado) Triangle Rev. 15 July 24/3: “We have been moving..from a multimodal community where people walk, bicycle, bus and carpool..to a one person/one auto Los Angeles.”

A Google image search also produced a pineapple, a picture of a panda riding a rocking horse, and pictures of children looking at computer screens . . .

Am I any closer? Hmmm.

The panda, looking to be helpful after her ride, wanted me to add these questions: 1) How many “modes” of representing meaning need to be present for a representation to be considered multimodal? 2) Can anything exist that is singularly modal-ed? 3) Is a screen-based text actually less multimodal than a print-based text that you can touch and hear (if you crinkle the paper)? 4) Does multimodality only matter/count if there are multiple modes conveying the same message (since the paper on which “Hypertext and the Remediation of Print” is printed might crinkle, but still not add anything to the article’s conversation)? 5) Is the ‘tactile’ modality of the screen authentic? When someone touches their iPhone to “move” something (a symbol, not an object), is this movement as rich and physical as interactions in the “real” world? 6) Is it even useful to label things as multimodal? Doesn’t that act of labeling suggest that some things are not multimodal (which the panda and I agree is looking like a more and more dubious assertion)? 7) It seems that the wide use of the term multimodality, as applied to technology, learning modalities, and education, began in the age of computers. Maybe that’s why there is such a strong association between multimodality and technology: we grew accustomed to using it as a way to describe the new tools that were at our disposal. Do we need to think about how the term applies retroactively to other things, to everything?
