
Teaching Languages to Children: Man vs. Robot

At the recent EdTechXAsia 2016 event, an eminent speaker confirmed what we have all been witnessing: contrary to initial fears, technology has not replaced teachers. But, he warned, “teachers proficient with technology will very soon replace those who are not.” The speaker knew what he was talking about: he was none other than Dr. Janil Puthucheary, Minister of State at the Ministry of Education of Singapore, the country that topped all global PISA rankings in 2016.

The digital leap and the rise of the (good) teacher are two of three current mega trends that we previously explored while reflecting on the future of language learning.  These two phenomena are intertwined. With the coexistence of Man and Robot, there will be dramatic adjustments and power shifts. There will be winners and losers. At this stage you may be wondering what to do to remain off the endangered species list.

We very much agree with Dr Puthucheary’s view that teachers’ inherent value is increased by their ability to leverage technology. As a facilitator in an enhanced learning environment, the tech-enabled teacher offers more and better learning choices to her students. But this is only the beginning of the story. The rest of the story is that many teaching tasks are now performed better by machines than by humans. “Better” can be understood as more consistently, more accurately, more effortlessly, more effectively from a teaching standpoint, or more cost-effectively. Is there any need left for humans when it comes to enunciating a grammar rule, teaching vocabulary, drilling, correcting pronunciation, or consolidating knowledge? There isn’t. As a matter of fact, when a teaching task can be fully and unambiguously described as “specialized, routine, predictable” (as Martin Ford, the author of Rise of the Robots, put it in 2015), chances are machines have already taken over.
The saving grace for teachers is that several of the language learning drivers (as introduced in VivaLing’s ViLLA©) are still much better activated by Man than by Robot. Let us go over these drivers, from the least to the most favourable to Man over Robot.

 


  1. Consolidation. Beyond the consolidation that occurs naturally during sleep, knowledge is consolidated when memory is retrieved at the right time and in the right manner. Robots are already more effective at implementing well-known spaced repetition algorithms (a minimal sketch of one such algorithm follows this list). They are also improving at memory retrieval techniques that diversify the ways a given piece of knowledge is tested, activated or reinforced.
  2. Language quantity. Computers are already tireless when it comes to offering unlimited language input to learners. Their ability to elicit learner output, i.e. language production, is however harder to control. As for providing feedback, today it can only happen in very structured environments such as multiple-choice or true/false questions, not in natural language.
  3. Attention. Is the learner’s attention wandering? A teacher can help them focus on the right elements. Machines can too, by highlighting specific elements to focus on. But the risk remains that the learner’s attention will simply drift away in the absence of a “big brother” watching, with the computer environment sometimes even adding to the distraction.
  4. Motivation. Machines have already made significant progress in satisfying extrinsic motivation by providing badges and rewards. But humans still have a significant edge thanks to the timely, well-adapted encouragement they can deliver with the right choice of words and body language. They can also outperform machines in personalization (of content and pace), which greatly enhances learner motivation. However, truly adaptive learning is high on robot makers’ roadmaps and catching up fast.
  5. Social interaction. This is where the ultimate human advantage lies. Social interaction is an absolute requirement for younger children, and strongly recommended for true communicative language learning at all ages. As long as robots cannot fool children, human teachers will remain more effective at teaching. A few weeks ago, a famous US language app at the leading edge of technological disruption launched its chatbots. But after trying them out, we were surprised to note that these bots chat only in writing and in a rigidly structured context, make unexpected grammar mistakes and even use … suspiciously flirtatious vocabulary. They are still very far from matching authentic human interaction.
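For readers curious what “spaced repetition” looks like under the hood, here is a minimal, illustrative sketch of an SM-2-style scheduler of the kind such tools rely on. The intervals, easiness factor and update rule below are simplified assumptions chosen for illustration, not VivaLing’s or any particular app’s actual implementation.

```python
from dataclasses import dataclass

@dataclass
class Card:
    """One vocabulary item tracked by the scheduler."""
    interval_days: float = 1.0   # days until the next review
    easiness: float = 2.5        # how easy this item is for this learner
    repetitions: int = 0         # consecutive successful reviews

def review(card: Card, quality: int) -> Card:
    """Update a card after a review graded from 0 (blackout) to 5 (perfect).

    Simplified SM-2-style rule (an assumption for illustration): failed items
    restart with a short interval; successful items get exponentially longer
    intervals, modulated by an easiness factor that drifts with performance.
    """
    if quality < 3:                      # failed recall: review again soon
        card.repetitions = 0
        card.interval_days = 1.0
    else:                                # successful recall: space it out
        card.repetitions += 1
        if card.repetitions == 1:
            card.interval_days = 1.0
        elif card.repetitions == 2:
            card.interval_days = 6.0
        else:
            card.interval_days *= card.easiness
        # nudge easiness up or down depending on how hard the recall felt
        card.easiness = max(1.3, card.easiness + 0.1 - (5 - quality) * 0.08)
    return card

# Example: a learner who recalls a word well three reviews in a row
card = Card()
for grade in (4, 5, 5):
    card = review(card, grade)
    print(f"next review in {card.interval_days:.0f} day(s)")
```

The point is not the exact numbers but the principle: the machine never forgets to schedule the next retrieval at the moment when it is most beneficial, something no human teacher can do consistently for every item and every learner.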

When adding a historical perspective to all the language-learning drivers, it becomes apparent that Robots are increasingly encroaching on what used to be Man’s exclusive teaching territory. For some drivers, such as consolidation or language quantity, the Robot has already made huge inroads and will soon undeniably and irreversibly overtake Man. Regarding other drivers, such as social interaction, Robots are further or even much further off. But let us keep in mind that Google’s AlphaGo beat the world’s best Go player decades before it was anticipated. Artificial intelligence is making steady progress, and it will most likely take no more than a generation or two for a bot to fool a child language learner.

 


It is more important than ever for teachers to master the technology available, and to elevate their teaching skills towards domains and levels still protected from the rise of the Robot. If a teacher is simply asked to deliver a pre-scripted lesson without being able to deviate from it, let there be no mistake: that teacher will be replaced by a Robot before they know it. But if they nurture the pedagogical expertise and social skills to offer a truly superior language learning experience to the learner, they will thrive.

Teachers are not naturally equipped with these skills, and traditional teacher training programs do not sufficiently prepare them to embrace their human advantages. This is why programs such as VivaLing’s VOLT-YL for teaching languages to children online are progressively preparing them to adjust to fast-changing teaching paradigms.

 

The Future of Education: What Will Education Look Like in 2025?

According to the professionals who participated in the new 2025 Education Innovation Survey Report*, by 2025 the key methods of engaging with material and content will have evolved to real-time video collaboration and mobile devices. What are the 5 key trends for the future of education? VivaLing would like to share the main takeaways of this report with you.

 


 

  • The ability to learn anywhere and at any time

Accessibility for all those who want to learn is considered the most important factor in the success of future education. School professionals from around the globe (25%) ranked accessibility above all other factors; this view was most pronounced among respondents from the UK (31%). In the context of education, accessibility refers to the geographical aspect: distance is overcome in order to deliver education where it is needed. Convenient access to education is also factored in: students and professionals have the ability to learn anywhere and at any time.

  • Real-time video collaboration with real teachers

67% of school professionals consider the focal point of education delivery to be the teachers and lecturers themselves.  However, the use of remote learning technologies in teaching is expected to rise significantly: 53% of professionals believe real-time video collaboration and mobile devices will be the primary way students engage with content by 2025. Despite this shift, many professionals still believe that the teachers and lecturers will continue to play an important mentoring role in 2025.

“By allowing an engaging, accessible, and cost-effective approach to education, technology opens up the prospect of higher education, personalized courses, and teacher training to a much broader population.”

  • Improving the quality of teacher learning, and personalized and contextual learning, should be the main focus

A majority of teaching professionals across the globe are convinced that the main focus, after deregulation and revised compliance standards, should be on improving the quality of teacher learning. Those in North America (18%) and in India (21%) feel that creating more personalized and contextual learning would also be worth focusing on.

 


 

  • More online access to education materials

According to 47% of the people interviewed (the majority being from North America and the UK), online access to content and lectures is what students and parents are demanding more of from institutions.

  • More resource sharing online and self-learning for teachers

In 2025, resource sharing via online channels will better facilitate teachers’ professional development. School professionals see teachers sharing resources within online environments and becoming more independent in identifying their own professional learning needs.

NB: This survey covers mainly North America, the United Kingdom, Australia, New Zealand and India; the rest of Asia is not covered. However, the trend towards online education in Asia is much stronger, especially in China.

* 2025 Education Innovation Survey Report by Polycom. More than 1,800 people from a range of professions within the education industry participated in the survey, with more than 80% above the age of 30. The majority of responses came from North America, the United Kingdom, Australia, New Zealand and India. Participants comprised management and C-suite executives (26%), educators (47%) and those in administrative roles (27%).

http://www.polycom.com.au/forms/education-2025-thankyou.html

 

Perceiving Sound Contrasts: Before 1 Year of Age, or Never

Babies are born universal listeners. In the first months of their lives, they can discriminate all the sounds produced by human beings. It is no small feat: you and I cannot. A typical Japanese adult is unable to hear the difference between an English /l/ and /r/. A typical English adult cannot detect the nuance between Mandarin /ɕ/ and /t͡ɕ/. The various /k/ and /q/ sounds of Native American languages are not discriminated by adults who do not speak them. And Catalan mid-vowel contrasts (/e/, /ɛ/) are difficult to perceive even for adult Castilian-speaking Spaniards.

Babies keep listening. And while listening, their brains take statistics on the languages spoken in their environment. What happens towards the age of 1 is an incredible linguistic transformation. Babies get better at discriminating the sounds of their own language (native contrasts); but they completely and irremediably lose the ability to detect the sound differences present in other languages but irrelevant to theirs (non-native contrasts). The age of this transformation is known as the critical phonetic period. It is the clearest of all language critical periods.
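What “taking statistics” means can be illustrated with a toy distributional-learning simulation. This is a sketch assuming made-up voice-onset-time values and an off-the-shelf Gaussian mixture model from scikit-learn; it is of course not a model of what infant brains actually do, only of the statistical idea that a bimodal distribution of a sound cue suggests two categories, while a unimodal one suggests a single category.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)

# Hypothetical voice-onset-time (VOT, in ms) tokens a baby might hear.
# A language with a /b/-/p/ contrast yields two clusters of VOT values;
# a language without that contrast yields a single broad cluster.
contrast_input    = np.concatenate([rng.normal(10, 5, 500), rng.normal(60, 10, 500)])
no_contrast_input = rng.normal(35, 12, 1000)

def inferred_categories(tokens, max_k=2):
    """Choose 1 or 2 sound categories by comparing model fit (lower BIC wins)."""
    X = tokens.reshape(-1, 1)
    bics = [GaussianMixture(n_components=k, random_state=0).fit(X).bic(X)
            for k in range(1, max_k + 1)]
    return int(np.argmin(bics)) + 1

print("Bimodal input  ->", inferred_categories(contrast_input), "categories kept")
print("Unimodal input ->", inferred_categories(no_contrast_input), "category kept")
```

The outcome mirrors the transformation described above: categories that the input statistics support are kept and sharpened, while distinctions absent from the input fade away.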

Incredibly, before they reach that period, babies can be trained in other languages. In her memorable 2010 TED Talk, Patricia Kuhl describes how twelve sessions delivered by a Mandarin speaker to American babies had the same effect as the ten and a half months of native Mandarin heard by Taiwanese babies: at the end of the experiment the two groups were equally good at perceiving Mandarin contrasts.

Two more miracles are to be highlighted. First, the baby brain is a social one. If the baby is exposed to other languages’ sound contrasts through a recording, be it a full video or just an audio track, their performance is as poor as if there had been no training at all. But if the baby is trained by a living person, then their capability to perceive contrasts becomes as good as if they had been exposed natively. Sarah Roseberry, a researcher in Patricia Kuhl’s lab, demonstrated in 2011 that the social impact is felt irrespective of whether the person is physically present: a Skype-like, synchronous online video interaction has the same effect.

The second miracle is how adults figured out whether babies perceive the sound contrasts or not. Babies, obviously, cannot speak, do not understand what the researchers are looking for, and could not consciously report the results anyway. In a 2009 video, Derek Houston summarizes three common methodologies historically used to investigate infant speech discrimination skills:

– High Amplitude Sucking (HAS): sensors measure the amplitude and speed at which a baby sucks a pacifier. When different sounds are played (and perceived as different), the sucking response changes.

– Conditioned Head Turn (CHT): the baby is taught to turn their head when they receive a specific signal, in this case when they hear a sound contrast. If they do not turn their head, it means they cannot hear the contrast.

– Visual Habituation Methods (VHM): when habituated to a given sound, babies’ looking time at a visual display tends to decrease. A sudden increase means they have detected a novel sound.

Today, scientists increasingly resort to electrophysiological and neuroimaging techniques, as mentioned in Kuhl’s talk. These give us a direct glimpse of what is happening inside the brain, and of whether or not sound contrasts are perceived.

 

For more information:

Houston, D. M., Horn, D. L., Qi, R., Ting, J. Y., & Gao, S. (2007). Assessing speech discrimination in individual infants. Infancy, 12, 119–145.

Kuhl, P. K., Tsao, F.-M., & Liu, H.-M. (2003). Foreign-language experience in infancy: Effects of short-term exposure and social interaction on phonetic learning. Proceedings of the National Academy of Sciences, 100, 9096–9101.