In this episode, we are revisiting the Universal Design for Learning (UDL) framework, and beginning to explore the pillar of representation. We will talk about perception, and how to offer different modalities for students to help support their learning.
If you like what you hear, we would love it if you could share this episode with a colleague or friend. And make sure you subscribe so that you don’t miss out on any new content! And consider supporting the show by buying us a coffee or two!
We would love to hear from you – leave a comment on our website OR check out our FLIP!
This week, we are returning to UDL, and chatting about the second pillar in cast.org’s UDL framework, which is all about representation, specifically options for perception.
When we say perception, we are talking about multimodality, looking at how students can access information. This is the part of UDL that excites us the most, because it’s the part that is easiest for teachers to see and begin to implement. It also tends to be effective for all learners, including English language learners.
Representation, and perception in particular, lend themselves well to the idea of differentiation, which is another reason why this tends to be a more familiar landing spot or approach for teachers. The difference is that with UDL, we are thinking about all of the potential barriers ahead of time and planning for them, instead of differentiating in the moment or trying to accommodate our learners after the fact.
So UDL and this idea of multimodality is much more proactive; it contemplates the wide range of learners we have in our classrooms and has the teacher plan to reach all of them ahead of time.
We are going to start with some definitions or interpretations of concepts:
- how students access information (i.e., multimodalities)
- different ways that students learn; not one single way that is optimal for all learners
- acknowledges that students may perceive or comprehend information differently
In exploring this pillar and chatting about it, we have made some neat connections to the book study that we are doing with “Why Don’t Students Like School?” by Daniel Willingham. In the book, we have been reading about transfer of learning and representation, and how using multiple representations allows students to make better connections with and between concepts. We are also reading about how we need to consider students’ background knowledge, and draw on their strengths to help them learn in our classrooms.
What is perception itself?
In the case of UDL, it is just one factor or section within the pillar of representation. In this section, the framework talks about the importance of ensuring that key information is equally perceptible to all learners, and of providing different ways, or modalities, of perceiving information.
When we use the term “multimodalities” we are referring to our different senses and ways of processing information: what we see, hear, and touch. It also means providing information in a way that can be adjusted by the user.
This idea of being able to adjust the information or content makes us think about all of the fantastic educational technology tools that allow us to do this. That doesn’t mean educators should just offer up a digital copy and let students figure it out. We need to make sure that students can manipulate the information to help make sense of it, and that key information can be made more salient, if required, to help students learn.
As we make these changes as educators, it may be challenging at first, but it becomes more proceduralized over time, becoming a part of your routine.
It’s also good to learn new tools, and to add them to your teaching toolbox. That way you are building knowledge of what works for students, and what options are available, so that you can use these tools as needed.
Checkpoints within Perception
The perception factor has three different checkpoints: offering ways of customizing the display of information, offering alternatives for auditory information, and offering alternatives for visual information.
These checkpoints are great ways to get us thinking about how students (and teachers) can manipulate lessons, handouts, or any learning materials in the classroom to help them learn more effectively.
When thinking about using videos, it’s worth looking at a few different aspects or options. Speed of the video, for example, is a feature to consider. Some students may need to slow down a video to better process what they are hearing so that they can understand it better. Some may also prefer to use closed captions in English, or in a different language, to help them better understand.
In terms of tools that we like for videos, the favourite that stands out pretty clearly is YouTube. If you are creating your own videos, it can be scary to post them on YouTube; however, you don’t have to set them to public. You can keep a video unlisted so that it won’t show up in search results, and only people with the link can view it.
YouTube is also great for being able to speed up or slow down a video. AND, to top it off, YouTube can auto-generate captions. They aren’t always the most accurate, but they do a pretty good job overall. Captions are also available in different languages, which is great for ELLs.
Another tool that allows you to add closed captions is Screencastify. You do need a transcript, but it isn’t too difficult to create one. Screencastify also offers translation of closed captions, so it’s worth considering if you’re already using this tool to record your videos. It all depends on what you are using and what you are recording! If you are recording an iPad or some other screen where you are annotating, it may not make sense to use Screencastify; it all comes down to what you are doing, and what fits your purposes best.
In terms of visuals, it’s important to think about the size of text, images, tables, etc. as well as the contrast between the text and background. If you have students with low vision or colour blindness, contrast is particularly important.
Another important aspect to consider is the font that you are using. Rachel loves her fonts, but she gives good reminders about which fonts can be good for students, and which ones are difficult. For example, some fonts have an ‘a’ with a loopy top, whereas others use a circle with a straight edge. Depending on what students have learned and been exposed to, the loopy-top ‘a’ may be difficult to read.
It’s also not a good idea to use cursive fonts, because they can be quite inaccessible and difficult for students to read. Not everyone learns cursive writing, and with such a diverse student population, many students will not have learned English cursive in other countries. There is a wide variety of reasons why cursive writing may not be understood by students, so it’s worth moving away from this type of font.
Rachel’s current font fave is “Poppins” for anyone that wants to check it out! She also enjoys the sans serif fonts, as they are cleaner and a little bit easier to read.
When we think about alternatives to auditory information, we need to remember that we can’t just stand in front of the class and lecture and assume that will be enough. We need to ask ourselves how we can make learning more accessible when we are presenting orally.
If you are using Google Slides, you can turn on captions while presenting, and it will put those on the screen in real time. This will help students make text to sound connections, which is helpful when trying to learn and remember content.
It’s also important to use visuals to help connect what students are hearing. Images, charts, diagrams, etc. are great ways to bring content to life and to help students make meaningful connections without getting overwhelmed.
It can be particularly difficult for students if they get stuck on a word or expression, trying to figure out what it means, and then fall behind as the teacher continues because they are still focusing on that one word or phrase. This could lead to missing out on a big chunk of a lesson. Having some sort of visual or textual adaptation for students to follow along with will help them learn what is being taught, and will minimize these challenges.
If you are using videos or audio clips, it’s also worth considering providing written transcripts to your students. This will help students follow along and better understand what they are hearing.
Read&Write and Mote are two great tools that can be used to make audio more accessible. Mote can create transcripts for audio comments or clips. Read&Write can do voice to text, and text to voice.
When creating presentations or lessons, it’s also important to provide alternatives to visual information such as images, diagrams, and charts. If there’s no context or explanation for these visual features, it can be difficult for students to understand what they are supposed to mean, or how they can help them understand the lesson. Consider using visuals, but also use transitions or annotations to help explain them.
All of this can be challenging to do at once, and nobody is saying that it should all be changed at once. Approach it like we approach student work: must do, should do, and aspire to do. Set small goals that you can reach and make those small changes. The next time around, work towards further improving the accessibility of your lessons.
Here are the tools we use the most across the three checkpoints:
- Google Slides
- Online Voice Recorder
The biggest takeaway for perception in UDL is the need to offer lessons and class materials in multiple modalities. Offer options, make sure you’re explaining visuals, and provide textual cues for audio.