Western Australia Digitizes Indigenous Languages

The Wangka Maya Pilbara Aboriginal Language Centre has launched a series of four immersive apps produced in the Digital Innovation Hub the center established with Commonwealth grant funding. Designed to preserve the Pilbara region’s Indigenous languages, culture, and history, the apps communicate traditional stories and knowledge from a unique Indigenous perspective.

The app technology brings language and culture to life through authentic storytelling, original audio, supportive linguistic tools, evocative imagery, interactive features that include the ability to record oneself, and animation that makes the emu run and the dingo howl. The launch of the new apps demonstrates the center’s success in establishing the Innovation Hub, approved as a Commonwealth Grant in 2018. In the past 16 months, the center has set up infrastructure, appointed a senior linguist, developed protocols and plans, gained an app developer license, and learned how to use technology to communicate the Indigenous worldview.

The establishment has been supported by New Zealand Indigenous technology company Kiwa Digital. Speaking at the launch of the new apps at the 2019 International Year of Indigenous Languages Expo in Roebourne, center manager Julie Walker said: “Our digital project is making a vital contribution to the revitalization of Pilbara languages.
The goal is to bring the expertise, knowledge, and sensitivity of the elders of the Pilbara into the digital age.” Kiwa Digital CEO Steven Renata praised the center on its rapid take-up of new technology: “Pilbara languages are on a rapid path to digitization, with the Centre leading the way globally in adopting new techniques to convey unique Indigenous perspectives.”

The apps are available on the App Store and Google Play by searching for WANGKA MAYA.

New App Releases

Gurri Watharrigu Magaragu The Girl Is Looking for Her Little Brother

This app by author June Injie tells the story of a young girl who goes looking all over for her little brother. This story is told in the Yinhawangka language with English translation. Yinhawangka is a severely endangered language from the Pilbara region of Western Australia. The Yinhawangka people traditionally lived in the area containing the Angelo, Ashburton, and Hardey Rivers, Kunderong Range, Mount Vernon Station, Rocklea, and Turee Creek. There are currently a limited number of people who speak Yinhawangka. The app is a valuable contribution to the revitalization of Yinhawangka language and culture.

Thanamarra Ngananha Malgu What Are They Doing?

The app contains an engaging story about the habits of some animals, both native and introduced, that can be found in the Pilbara region. The story demonstrates the keen observation of animal behavior by the Yinhawangka storyteller, as expressed in her rich and concise language. This story is told in the Yinhawangka language with English translation. Yinhawangka is a severely endangered language from the Pilbara region. The Yinhawangka people traditionally lived in the area containing the Angelo, Ashburton, and Hardey Rivers, Kunderong Range, Mount Vernon Station, Rocklea, and Turee Creek.

Pilurnpaya Ngurrinu They Found a Bird

This children’s story with audio was told to Martu teacher Janelle Booth by children from Jigalong. It features Martu Wangka and English readings of the story, plus coloring-in images and extensive notes on the linguistic details of the Martu Wangka language. Martu Wangka, or Wangkatjunga (Wangkajunga), is a variety of the Western Desert language that emerged during the 20th century in Western Australia as several Indigenous communities shifted from their respective territories to form a single community. It is spoken in the vicinity of Christmas Creek and Fitzroy Crossing.

Kathleentharndu Wanggangarli Kathleen’s Stories

The stories in this app were first recorded in 2009 by Banyjima elder Kathleen Hubert with the help of linguist Eleanora Deak. Kathleen created the stories for her children and grandchildren, to help keep their language skills strong. The stories were originally published as five separate booklets and made available to Kathleen’s family. In 2014 Kathleen generously gave the Wangka Maya Pilbara Aboriginal Language Centre permission to publish the stories and make them available to the wider community.

This second edition was compiled by Annie Edwards-Cameron as part of the 2014 IBN Language Project.

Many thanks are due to Kathleen’s daughters May Byrne and Karen Hubert and granddaughter Dolly for their permission to reproduce their photos in these stories.

Kiwa Digital works with Indigenous groups around the world, using technology to preserve ancestral knowledge in formats that are relevant and accessible. For more, see www.kiwadigital.com.

The Wangka Maya Language Centre manages programs aimed at the recording, analysis, and preservation of the Pilbara region’s Indigenous languages, culture, and history. There are more than 31 Aboriginal cultural groups in the Pilbara and over 3,000 speakers. For more, see www.wangkamaya.org.au.

ACTFL ‘Shark Tank’ Chooses Woodpecker

During last month’s ACTFL World Languages Expo in Washington, DC, the Language Flagship Technology Innovation Center (Tech Center) hosted its own version of the popular TV show Shark Tank, in which five language apps were showcased to the audience and an expert panel.

Woodpecker Learning, a Taiwan-based start-up, was selected by the panel as the winner of the 2019 LaunchPad language education technology competition and won the People’s Choice award by a show of hands. Woodpecker is a state-of-the-art video player packed full of features designed to help learners improve world language skills. Learners practice commonly used vocabulary, tones, and accents while watching a huge library of popular shows from all over the world.

LaunchPad offers technology start-ups a unique opportunity to attend, showcase, and receive formal recognition at one of the world’s most comprehensive language education expositions.

Participation in the event affords LaunchPad finalists the opportunity to receive feedback from various sources, including experienced entrepreneurs and a highly specialized audience of world language educators. At the competition, the five finalists pitch their language technology innovations. Honorary plaques from ACTFL and the Tech Center are conferred on the winner, selected by five competition judges, and on the People’s Choice winner, determined by an audience vote. All finalists are given free space at the Tech Center booth to showcase and demonstrate their products at the Convention Expo. The following companies were represented at this year’s competition:

Peter Sutton—Woodpecker Learning (2019 jury and People’s Choice award winner)
https://www.woodpeckerlearning.com
Daniel Turcotte—Scholarcade (Spywatch Lex) 
https://spywatchlex.com
Brooke Stephens—StoryLabs
https://www.storylabs.online
Weerada Sucharitkul—FilmDoo
https://www.filmdoo.com
Moulay A. Essakalli—Zid Zid
https://zidzidkids.com

Frank Dolce, Banter CEO and 2018 winner, noted, “The LaunchPad competition at ACTFL gave Banter a platform to engage with decision makers from educational organizations all over the world.”

According to Julio C. Rodriguez, Center for Language and Technology director, “Although a single winner is named after the competition, the Tech Center’s goal is to create positive impact for all five competitors through exposure to thousands of language education professionals at the conference; access to other academic, government, and private networks; and inclusion in publications, promotional materials, and press releases.”

Up next is LaunchPad 2020, led by Richard Medina, faculty specialist in human–computer interaction at the Center for Language and Technology, which will be held at ACTFL’s Annual Convention and World Languages Expo in San Antonio, Texas, on Nov. 21, 2020. 

Applications from technology start-ups are currently being accepted. Application deadline is March 1, 2020.

Details about LaunchPad and application process: https://thelanguageflagship.tech/launchpad

The Tech Center, sponsor of the LaunchPad competition, is an initiative by the Defense Language and National Security Education Office (DLNSEO). The mission of the Tech Center is to enhance the Language Flagship experience through the effective use of technology. Learn more at https://thelanguageflagship.tech.

School Climate Changes EL Success Rates

Ohio’s Cleveland Partnership for English Learner Success—an alliance of the Cleveland Metropolitan School District’s (CMSD’s) Multilingual Multicultural Education Office, the district’s research office, and researchers from Regional Educational Laboratory Midwest—has prioritized identifying English learner student and school characteristics associated with student achievement and language proficiency. The partnership completed a study examining means and percentages of student and school characteristics and English learner student achievement in grades 3–8 from school years 2011/12 through 2016/17.

The study team examined these characteristics for English learner students in grades 3–8 each year separately, enabling the team to identify stable patterns while helping to uncover changes over time. To explore associations with achievement, the study developed a series of regression models that correlated student and school characteristics with student performance on statewide assessments while controlling for other key characteristics.
The study focused on the most recent year of English learner outcomes available—2016/17—to provide information that was most relevant to the current English learner student population and educational setting. Between 2011/12 and 2016/17, English learner students in the district increasingly spoke languages other than Spanish. The percentage of English learner students enrolled in the district’s Newcomers Academy increased, while the percentage of English learner students enrolled in bilingual schools decreased. The study also found that English learner students increasingly were enrolled in schools with school climate scores higher than the district average over the study period, and that the Newcomers Academy consistently had school climate scores more than a standard deviation above the district average. Student special education status and lower prior-year assessment performance were consistently associated with lower current student performance.
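Models of the kind the study describes, which estimate the association between one characteristic and achievement while holding other characteristics constant, can be sketched with ordinary least squares. The data and variable names below are synthetic, invented purely to illustrate the approach, and are not drawn from the study:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500

# Synthetic student data (invented for illustration only).
prior_score = rng.normal(0, 1, n)      # prior-year assessment (standardized)
school_climate = rng.normal(0, 1, n)   # school climate score (standardized)
special_ed = rng.integers(0, 2, n)     # special education status (0/1)

# Outcomes generated from a known relationship plus noise.
current_score = (0.7 * prior_score + 0.2 * school_climate
                 - 0.4 * special_ed + rng.normal(0, 0.3, n))

# OLS with an intercept: each coefficient is the association between one
# characteristic and current scores while controlling for the others.
X = np.column_stack([np.ones(n), prior_score, school_climate, special_ed])
beta, *_ = np.linalg.lstsq(X, current_score, rcond=None)
print(np.round(beta, 2))  # roughly [0.0, 0.7, 0.2, -0.4]
```

With enough students, the estimated coefficients recover the generating relationship, which is why controlling for prior-year performance matters: it separates a school characteristic's association from differences students arrived with.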

English learner students speaking Arabic tended to have lower levels of English language proficiency, while gifted and female students tended to have higher English language proficiency. Students had lower mathematics achievement when they attended a school with larger numbers of English learner students per bilingual paraprofessional, and they had lower speaking-proficiency levels when attending schools with larger numbers of students per certified ESL teacher, but these school staffing characteristics were not clearly associated with the other student outcomes studied. School climate domains were positively associated with student speaking-proficiency levels but not with most other student outcomes.

The study findings suggest further work to gain a deeper understanding of how school climate may support English learner student language proficiency and achievement, to examine how specialized schools like the district’s Newcomers Academy may support a positive school climate, and to consider the role of staff specialized to work with English learner students.

#EnglishLearner #ELL #ESL #Ohio

Books Best Practice for Reading Comprehension

Books have broad vocabulary and diverse language structures that are important for developing the ability to understand content. “Long, continuous texts with diverse and colorful vocabulary improve the skill to understand the content of the text,” says associate professor Minna Torppa. With other researchers of education and psychology from the universities of Jyväskylä, Turku, and Eastern Finland, she participated in a large project that studied children’s free-time reading habits and their effects. The results were published in the international journal Child Development. The researchers monitored how the reading skills of 2,525 schoolchildren developed from the first grade to the ninth grade. Reading skills were assessed in school classrooms; leisure reading reports for the youngest children were collected from parents, while pupils in the higher grades also reported on their own reading. The research subjects came from four large municipalities in different parts of Finland. Basic reading skills are practiced extensively in schools, and fluency in the Finnish language is usually gained during the first two grades.

We know that children who read books are better readers on average. But does the difference arise because they read more, or is it that better readers tend to read more? “It is natural to think that you become a better reader by reading more. On the other hand, we know that dyslexia is hereditary and for some the development of reading skills is very slow,” Torppa says. In recent years, international studies have demonstrated that reading skills predict how much a person will read—but no reverse connection has been found. This is understandable especially in the early grades: when reading is still slow and laborious, it is difficult or even impossible to read a book. Earlier studies have been limited to the early phases of developing reading skills. This study, however, examined the connection between reading skills and the amount of reading with a more multifaceted method and over a longer timespan than earlier studies. “In addition to books, we also wanted to study other text types such as newspapers, magazines, and digital texts,” Torppa says. The results showed that both reading fluency and reading comprehension affected the amount of reading during the first four grades. Nevertheless, it was found that book reading predicts better reading comprehension. Whether one’s skills are strong or weak, the level improves with reading.

First graders who read literature in their free time also understood texts better in later years. Moreover, if a child becomes enthusiastic about reading at any age, even as late as secondary school, it shows in the following year as better reading comprehension.
Newspapers and magazines had no significant effect, but more frequent reading of short digital texts, such as messages or social media, predicted poorer reading comprehension in grades 4–7, and vice versa.

Books were shown to be especially good practice for reading comprehension, because they are linguistically rich and have a diverse vocabulary. For example, one of the Harry Potter books contains more than 250,000 words. “Skills development leads to an increase in the amount of reading, but eventually book reading improves reading comprehension,” Minna Torppa concludes.

#readingcomprehension #books #childdevelopment

Jan. 4 is #WorldBrailleDay

Today, January 4, was proclaimed World Braille Day by the United Nations General Assembly in November 2018 to raise awareness of the importance of braille as a means of communication for blind and partially sighted people.

The World Health Organization (WHO) reports that people who are visually impaired are more likely than those with full sight to experience higher rates of poverty and disadvantage, which can amount to a lifetime of inequality.

There are an estimated 39 million blind people worldwide, while another 253 million have some sort of vision impairment. For many of them, braille provides a tactile representation of alphabetic and numerical symbols, allowing them to read the same books and periodicals as are available in standard printed text.

The UN Convention on the Rights of Persons with Disabilities (CRPD) cites braille as a means of communication and regards it as essential in education, freedom of expression and opinion, access to information, and social inclusion for those who use it.

To foster more accessible and disability-inclusive societies, the UN launched its first-ever flagship report on disability and development in 2018, coinciding with the International Day for Persons with Disabilities on which Secretary General António Guterres urged the international community to take part in filling inclusion gaps.

“Let us reaffirm our commitment to work together for an inclusive and equitable world, where the rights of people with disabilities are fully realized,” he said.

What is Braille?

Braille is a tactile representation of alphabetic and numerical symbols that uses six dots to represent each letter and number, and even musical, mathematical, and scientific symbols. Braille (named after its inventor in 19th-century France, Louis Braille) is used by blind and partially sighted people to read the same books and periodicals as those printed in a visual font. Use of braille allows the communication of important information to and from individuals who are blind or partially sighted, ensuring competency, independence, and equality.
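The six-dot cell maps neatly onto Unicode, whose braille block starts at U+2800 and assigns one bit to each dot position. A minimal sketch in Python (the letter table here covers only a few letters for illustration; a full implementation would also handle numbers, punctuation, and contractions):

```python
# Each braille cell has six dot positions, numbered by convention:
#   1 4
#   2 5
#   3 6
# In the Unicode braille block (starting at U+2800), dot n sets bit (n - 1).
LETTER_DOTS = {  # standard braille dot patterns for the first five letters
    "a": (1,), "b": (1, 2), "c": (1, 4), "d": (1, 4, 5), "e": (1, 5),
}

def to_braille(letter: str) -> str:
    """Return the Unicode braille character for a single letter."""
    bits = sum(1 << (dot - 1) for dot in LETTER_DOTS[letter])
    return chr(0x2800 + bits)

print("".join(to_braille(c) for c in "bead"))  # ⠃⠑⠁⠙
```

The same bit-per-dot scheme underlies digital braille displays, which raise and lower physical pins to render each cell.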

Night writing, the precursor to braille, was invented by French army officer Charles Barbier. It was intended for use by soldiers as a means of communicating at night without the use of sound and light. Ultimately, the French military rejected night writing, claiming it was too difficult for soldiers to use. While attending France’s Royal Institute for Blind Youth, Louis Braille learned of Barbier’s invention and attempted to improve upon it. Braille published his work in 1829 and it has since been adapted to many of the world’s languages.

The Valentin Haüy Association, where Louis Braille worked over a hundred years ago, continues to promote the use of braille in France, while translating documents and books into French braille.

It is also trying to find ways to ensure that braille, and its readers, move into the digital age – including helping people learn to use digital braille keyboards, print braille papers, and make websites accessible.

Currently, less than 10% of French internet sites are accessible to people with a visual, hearing, or motor disability.

Trump Signs Measure To Extend Grant Programs to Preserve Indigenous Languages

U.S. President Donald Trump signed a measure that extends federal grant programs aimed at preserving Indigenous languages and expands eligibility so more tribes can participate. The measure cleared the House with bipartisan support, while the Senate approved it in June of this year.

The legislation, named the Esther Martinez Native American Languages Programs Reauthorization Act, was sponsored by Senator Tom Udall of New Mexico. It was named after Esther Martinez, a traditional storyteller and Tewa language advocate from New Mexico’s Ohkay Owingeh Pueblo, who died in 2006. Currently, there are over 40 active grants, totaling more than US$11 million, being used for language preservation and immersion efforts. According to the CBC, Martinez’s Pueblo was awarded a grant earlier this year after seeing a decline in fluent Tewa speakers and an increase of English as the primary language in the homes of tribal members.

U.S. Rep. Deb Haaland, a New Mexico Democrat and Laguna Pueblo member who co-chairs the Congressional Native American Caucus, said programs that support language preservation are often underfunded.

“Now that our bill honoring the legacy of Pueblo storyteller and self-taught linguist Esther Martinez is signed into law, we will move forward on important work to revitalize our languages and traditions,” Haaland said.

Love & Other Emotions Prove Not to Be Universal Across Languages

Humans have a breadth of emotions and are on a constant search to express them through language, though sometimes we find that words in one language don’t have a translatable counterpart in another. Norwegians say forelsket, which describes the feelings and experiences at the very beginning of falling in love, while the Indigenous Baining people of Papua New Guinea say awumbuk to describe a social hangover that leaves people unmotivated and lacking energy for days after the departure of overnight guests. Author Joshua Conrad Jackson states, “Translation dictionaries, for example, suggest that the English word love can be equated with the Turkish word sevgi and the Hungarian word szerelem. But does this mean that the concept of ‘love’ is the same in English, Turkish, and Hungarian?” While there are different words for specific emotions in various languages, one may ask—do people experience emotions differently depending on the languages they speak? A new study suggests so.

The study, “Emotion semantics show both cultural variation and universal structure,” was published in Science and examined emotion semantics across a sample of 2,474 spoken languages from 20 different language families using “colexifications”—instances where a single word has multiple meanings.

There is a growing recognition that emotions can vary greatly in their meanings across languages and cultures, and that emotional concepts such as “anger” and “sadness” do not derive from actual brain structures, but from humans making socially learned inferences about the meaning of the word and the actual bodily feeling associated with the word.

The researchers found significant differences in how emotions were conceptualized across languages and cultures—three times more variation than in terms describing color. Emotion concepts had different patterns of association in different language families. For example, “anxiety” was closely related to “fear” among Tai-Kadai languages but was more related to “grief” and “regret” among Austroasiatic languages. By contrast, “anger” was related to “envy” among Nakh-Daghestanian languages but was more related to “hate,” “bad,” and “proud” among Austronesian languages. The researchers interpreted these findings to mean that emotion words vary in meaning across languages, even if they are often equated in translation dictionaries. Interestingly, some Austronesian languages paired the concept of love, a typically positive emotion, with pity, a typically negative one.

On the other hand, researchers also found underlying similarities. Language families tend to differentiate emotions based on how pleasant and exciting they are, so for instance words expressing fear were unlikely to be grouped together with those that express joy.

“This is an important study,” William Croft, a professor of linguistics at the University of New Mexico who wasn’t involved in the work, told Scientific American. “It’s probably the first time an analysis of the meanings of words has been done at this scale.” One of the novel things about this project is that the findings show both universal and culture-specific patterns, Croft adds. He points out, however, that because some of these families cover a large number of languages across a wide geographical area, it will be important to further examine the underlying cultural factors.

China’s Baidu Uses AI Understanding in Chinese to Learn English

China’s giant tech company Baidu has surpassed both Microsoft and Google when it comes to AI and language understanding. The company, which is sometimes referred to as China’s Google, achieved the highest score ever recorded on the General Language Understanding Evaluation (GLUE), which is widely considered to be the benchmark for AI language understanding. It consists of nine different tests for tasks like picking out the names of people in a sentence and figuring out what a pronoun like “it” refers to when there are multiple potential options. The average person scores about 87 points out of a hundred on the GLUE scale—Baidu is the first to score over 90. The company used its own AI language model, called ERNIE (which stands for “Enhanced Representation through kNowledge IntEgration”).

According to Karen Hao of MIT Technology Review, what’s notable about Baidu’s achievement is that it illustrates how AI research benefits from a diversity of contributors. Baidu’s researchers developed a technique for ERNIE that was designed specifically for Chinese. This, however, turned out to make the model better at understanding English as well.

Baidu’s ERNIE was modeled after Google’s BERT (Bidirectional Encoder Representations from Transformers), which was created in 2018. Before BERT, natural language models had much lower capabilities and could only predict words for applications like autocomplete, but they couldn’t sustain a train of thought. BERT instead considers the context before and after a word at once, making it bidirectional and able to put each word in its complete context.

The Baidu researchers took this idea further and trained ERNIE to predict sets of missing words, which is essential for understanding Chinese, in which individual characters rarely work alone. While BERT specialized in predicting words, ERNIE was able to predict phrases. This ability crossed over well into English, making the model able to predict entire sets of words. Just as in Chinese, English has words that have different meanings depending on their contexts.
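The difference between word-level and phrase-level masking can be sketched as follows. This is a simplified illustration of the two masking strategies, not the models’ actual training code, which operates on subword tokens and samples masks stochastically:

```python
MASK = "[MASK]"

def token_mask(tokens, idxs):
    """BERT-style: mask individual tokens independently."""
    return [MASK if i in idxs else t for i, t in enumerate(tokens)]

def phrase_mask(tokens, start, length):
    """ERNIE-style: mask a contiguous phrase, hiding the whole unit."""
    return [MASK if start <= i < start + length else t
            for i, t in enumerate(tokens)]

sentence = "Harry Potter is a series of fantasy novels".split()

# Token masking can leave half of a multiword unit visible,
# making the hidden word easy to guess from its neighbor:
print(token_mask(sentence, {1}))
# Phrase masking hides the full unit "Harry Potter", forcing the
# model to infer it from the rest of the sentence:
print(phrase_mask(sentence, 0, 2))
```

Predicting the whole hidden phrase, rather than one easily guessed token, is what pushes a model toward the deeper contextual understanding described above.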

“When we first started this work, we were thinking specifically about certain characteristics of the Chinese language,” says Hao Tian, the chief architect of Baidu Research. “But we quickly discovered that it was applicable beyond that.”

Sources:

https://arxiv.org/pdf/1907.12412.pdf

https://www.technologyreview.com/s/614996/ai-baidu-ernie-google-bert-natural-language-glue/

School-Safe Video Platform

Boclips for Teachers is a video platform that offers an extensive library of engaging content to bring academic topics to life for all ages while deepening students’ understanding of a variety of subjects. Videos that support language learning across English, Spanish, French, Arabic, Mandarin, and Sign Language are available.

The platform is a school-safe alternative to consumer-focused video platforms that do not filter for age-appropriateness, can be clogged with ads or toxic comments, and are blocked by many schools and districts. By simply cutting and pasting links, educators can supplement lessons in any school subject with short video clips that capture and keep students’ attention. The platform gathers video clips from more than 120 partners, including Language Tree, Mazz Media, Extra English Practice, and Rachel’s English. New material is added daily and supports Common Core State Standards, and the archived news content going back 100 years provides real-world examples of classroom topics.

To find relevant material, educators simply type a topic into the search bar and then create collections of videos, each of which comes with a list of specific, intentional learning outcomes. Collected by a team of former educators, the videos are free of copyright issues and commercial content to keep the focus on what really matters: engaging students to deepen and advance learning.


The name Boclips comes from the story of Buddha. Buddha slept under the Bodhi tree, and when he woke up, he understood the world. The company’s mission is to live up to its name and provide enlightening clips that engage students and help them better understand the world they inhabit.

Speech May Be Ten Times Older Than Previously Thought

New research suggests that speech may not be dependent on the “descended larynx,” which could mean that our ancestors were talking 20 million years ago.

Baboons raised in semi-free-ranging conditions produce about ten vocalizations, associated with different ethological situations, that may be considered proto-vowels at the dawn of the emergence of speech. © Laboratoire de Psychologie Cognitive (CNRS/Aix-Marseille Université)

For 50 years, the theory of the “descended larynx” has held that before speech could emerge, the larynx had to be in a low position to produce differentiated vowels. Monkeys, whose vocal tract anatomy resembles that of humans in the essential articulators (tongue, jaw, lips) but whose larynx sits higher, were therefore assumed unable to produce differentiated vocalizations. Researchers at the CNRS and the Université Grenoble Alpes, in collaboration with French, Canadian, and U.S. teams, argue in a new article published in Science Advances that monkeys do produce well-differentiated proto-vowels. The production of differentiated vocalizations is therefore not a question of anatomical variants but of control of the articulators. This work suggests that speech could have emerged earlier than the 200,000 years ago that linguists currently assert.

Comparison of the anatomy of the vocal tract of baboons (left) and modern humans (right): the same articulators, with their muscles, bones, and cartilage, but in humans the larynx is lower, increasing the size of the pharynx relative to the mouth. The acoustic analysis of monkey vocalizations shows that, despite this anatomical difference, they can produce differentiated “proto-vowels” that can be compared with the vowels of world languages. © Laboratoire de Psychologie Cognitive (CNRS/Aix-Marseille Université) and GIPSA-lab (CNRS/Université Grenoble Alpes)

Since speech can be considered the cornerstone of the human species, it is not surprising that two pairs of researchers (in the 1930s–1950s) tested the possibility of teaching a home-raised chimpanzee to speak, at the same time and under the same conditions as their own babies. Their experiments ended in failure. To explain this result, the U.S. researcher Philip Lieberman proposed the theory of the descended larynx (TDL) in 1969. Comparing the human vocal tract to that of monkeys, Lieberman argued that chimpanzees have a small pharynx, related to the high position of their larynx, whereas in humans the larynx is lower. This anatomical difference was claimed to prevent differentiated vowel production, which is present in all the world’s languages and necessary for spoken language. Despite some criticisms and many acoustic observations that contradict the TDL, it came to be accepted by most primatologists.

More recently, articles on monkeys’ articulatory capacities have shown that they may have used a system of proto-vowels. Considering the acoustic cavities formed by the tongue, jaw, and lips (identical in primates and humans), researchers showed that the production of differentiated vocalizations is not a question of anatomy but relates to control of the articulators. The data used to establish the TDL came in fact from cadavers, so they could not reveal control of this nature.

This analysis, conducted by multidisciplinary specialists in the GIPSA-Lab (CNRS/Université Grenoble Alpes/Grenoble INP), in collaboration with the Laboratoire de Psychologie Cognitive (CNRS/Aix-Marseille Université), the University of Alabama (USA), the Laboratoire d’Anatomie de l’Université de Montpellier, the Laboratoire de Phonétique de l’Université du Québec (Canada), the CRBLM in Montréal (Canada), and the Laboratoire Histoire Naturelle de l’Homme Préhistorique (CNRS/Muséum National d’Histoire Naturelle/UPVD), opens new perspectives: if the emergence of articulated speech is no longer dependent on the descent of the larynx, which took place about 200,000 years ago, scientists can now envisage a much earlier emergence of speech, as far back as at least 20 million years, when the common ancestor of humans and monkeys, which presumably already had the capacity to produce contrasted vocalizations, lived.


Reference: “Which way to the dawn of speech? Reanalyzing half a century of debates and data in light of speech science,” Louis-Jean Boë, Thomas R. Sawallis, Joël Fagot, Pierre Badin, Guillaume Barbier, Guillaume Captier, Lucie Ménard, Jean-Louis Heim, and Jean-Luc Schwartz. Science Advances, December 11, 2019. DOI: 10.1126/sciadv.aaw3916

Language Magazine