You may already have overheard someone having a conversation with their smart device. It can seem strange, but chances are high that this trend is here to stay.
Touch screens once transformed the way we interact with technology. Now Voice User Interfaces (VUIs), which allow us to communicate with computers using our own voice, might mark the next major turning point in human-computer interaction.
Already accessible through smartphones and smart home devices, VUIs allow us to give commands to computers, hands- and eyes-free. Natural language processing, machine learning and artificial intelligence enable computers to understand the intent of a person's request by analysing the spoken words. Done right, this should feel as easy as a natural conversation, despite the technical challenge for computer scientists of creating the technology in the first place.
Conversation between Human and Google Assistant with one follow-up question.
When evaluating a new technology, ‘accessibility’ and ‘usability’ are two major criteria to take into consideration. ‘Accessibility’ refers to the design of a product for people who experience disabilities, and ‘usability’ refers to the effectiveness, efficiency and satisfaction a user experiences while achieving an objective.
Benefits of VUI
VUIs can make information accessible to visually or manually impaired users who are unable to interact with screens and keyboards. In the past, such users needed expensive screen readers or braille displays to access information on the internet. VUIs can also help in situations where users are temporarily unable to use their hands or eyes, for example while driving or cooking.
Interacting with VUIs requires little or no training because voice is a very natural form of communication for people and more intuitive than using a keyboard and mouse. Literacy, typing skills and tech knowledge are no longer prerequisites for fast and effective access to information.
How can VUIs be used in learning and teaching?
Most of us speak faster than we can type, so being able to verbalise a Google search, instead of typing each character individually, makes information easily accessible and ubiquitous. It almost seems effortless to ask a device for the current UK Prime Minister’s name, the capital of Honduras or how to spell Woolloomooloo.
This might help students research facts, check spellings, and look up words or synonyms without too much distraction from the main task. As the technology matures, more and more powerful language learning applications will enter the market to help students practise listening and dialogue and rehearse vocabulary.
A VUI device can help a teacher with administrative tasks such as setting timers or picking a random number, or act as an assistant teacher to verify facts or definitions.
This English teacher on YouTube shows how to use an Amazon Echo in his lesson. He first describes how to form Wh-questions and then asks Alexa, the assistant behind the Amazon Echo device, for the answer.
In a classroom, the teacher could explain the theory as normal, set a 5-minute timer to give students time to come up with Wh-questions, and then students can ask the VUI device for an answer. Students will have to speak clearly and apply the right word order to make Alexa understand. This reinforces content and adds fun to the lesson.
Other imaginable scenarios in the classroom might include using your voice to advance to the next slide so you can be mobile and still control the screen, calling for AV help without the need to fill out a service desk form, registering students' attendance, or checking the availability of a book in the library – the possibilities are (almost) endless.
VUIs and cognitive load
The information that VUIs provide is accessible; it is fast, hands- and eyes-free, and audio-only. According to Cognitive Load Theory, there is a limit to the amount of information we can take in through the auditory and visual channels, process in working memory and finally transform into long-term knowledge (Sweller, 1988). VUIs reach their limit when it comes to complex answers, long lists or mathematical formulas, because these leave learners overwhelmed by high cognitive load.
VUI devices, like smartphones, can also help manage cognitive load by providing audio alongside what is visually displayed on a screen, helping listeners make the most of their working memory. When you start using your Google Assistant for searches, you will soon see what kind of information can be presented well via audio and where it is still easier to use a visual display.
Getting started with a VUI
You can activate the Google Assistant on your Android phone, iPhone or iPad. Then you are ready for your first conversation: start with 'OK Google' followed by your question. Here are some examples of what you can ask, or have a look at the full list of what you can do.
Try something like:
- How do you spell [word]?
- What’s the definition of [word or term]?
- How do you say [word or phrase] in [language]?
- Pick a random number between one and fifteen.
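Under the hood, an assistant has to map a transcribed phrase like those above to an intent and an action. The following is a minimal illustrative sketch in Python, not how Google Assistant actually works: real systems use far more sophisticated natural language processing than simple pattern matching, and the `handle_command` function here is a hypothetical name.

```python
import random
import re

def handle_command(text):
    """Toy intent matcher: maps a transcribed phrase to a spoken-style response.
    Real assistants use machine-learned NLP rather than regular expressions."""
    text = text.lower().strip().rstrip("?.")

    # Intent: "How do you spell [word]?" -> spell the word out letter by letter
    match = re.match(r"how do you spell (\w+)", text)
    if match:
        return " ".join(match.group(1).upper())

    # Intent: "Pick a random number between [a] and [b]" (digits only, for brevity)
    match = re.match(r"pick a random number between (\d+) and (\d+)", text)
    if match:
        low, high = int(match.group(1)), int(match.group(2))
        return str(random.randint(low, high))

    # Fallback when no intent matches the phrase
    return "Sorry, I can't help with that yet."

print(handle_command("How do you spell Woolloomooloo?"))
# prints "W O O L L O O M O O L O O"
```

Even this toy version shows why clear phrasing and word order matter: a request that does not fit a known pattern falls through to the fallback response, much as Alexa does in the classroom example above.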
Future of VUIs
At present, a conversation with a VUI is still often a one-sided affair, limited to what the system is programmed to do. You say something and you get a response, or vice versa – which is already quite impressive.
Nonetheless, VUIs extend what we have now and open up new opportunities to make information and technology accessible to people in various situations who didn't have access to it before. It is still early days for VUIs in learning and teaching, but as the technology improves, more opportunities and applications will become available.
Have you used VUIs in your learning or teaching practices, or do you know any great learning and teaching VUI applications? Contact Learning and Teaching Services or share your thoughts and ideas about the different ways students can engage in learning via Yammer, Twitter or LinkedIn.
- Sweller, J. (1988). Cognitive load during problem solving: Effects on learning. Cognitive Science, 12, 257–285.