English pronunciation (2023)
Summary
Korean students often struggle with English pronunciation, in part because they were never taught it systematically. For the main study, I created a one-month pronunciation training program for Korean college students, which combined phonetic explanations with interactive activities. Although one month was too short for notable improvements in pronunciation, participant surveys showed increased awareness of English pronunciation and slightly improved motivation.
Methods: Pretest and posttest speaking tests, pretest and posttest surveys, a classroom experimental research design, and hierarchical linear modeling in R
Results: A published research article, two conference presentations in 2023, and improved teaching materials
Details
The main study investigated a short-term pronunciation program for Korean learners of English, examining whether a short program can improve pronunciation, possible effects of inductive learning, and changes in motivation. Pre- and post-program surveys probed participants' self-efficacy, motivation, and relevant personality factors, and pretest and posttest recordings were rated for pronunciation accuracy. In between came three classroom sessions of group exercises on particular vowels, consonants, and prosodic features such as lexical stress, rhythm, and linking. All sessions involved group learning, and some groups also received lessons in a more inductive style. Despite the limited duration of the program, the results showed slight improvement in pronunciation as well as in self-efficacy, and some motivational and personality factors were correlated with participants' improvement. The more inductive approach had no effect on outcomes, at least over the short term of this study. The study shows that phonetic explanations can be combined effectively with interactive activities, and that students' motivation toward English speaking can improve with instruction. The project also produced more effective teaching materials.
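The core pre/post analysis described above can be sketched in R. This is a minimal illustration on simulated data; the variable names (id, treat, pre, post) are hypothetical stand-ins, not the study's actual columns.

```r
# Minimal sketch of the pre/post analysis, on simulated data with
# hypothetical variable names (not the actual study data set).
set.seed(1)
n  <- 60
df <- data.frame(
  id    = factor(1:n),
  treat = factor(rep(c("control", "treatment"), each = n / 2)),
  pre   = rnorm(n, mean = 3.0, sd = 0.5)
)
# Simulate a small treatment effect on the posttest
df$post <- df$pre + 0.2 * (df$treat == "treatment") + rnorm(n, sd = 0.3)

# Paired t-test: did scores change overall from pretest to posttest?
t.test(df$pre, df$post, paired = TRUE)

# Mean-center the pretest, then model posttest with treatment as predictor
df$pre_mc <- scale(df$pre, center = TRUE, scale = FALSE)
model <- lm(post ~ treat + pre_mc, data = df)
summary(model)
```

With one observation per participant, an ordinary linear model with the centered pretest as a covariate is equivalent to the Gaussian GLMs used in the analysis code below.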
In a related study, Korean listeners heard recorded samples of nonsense syllables; the results indicated that they were not very aware of English stress and rhythm patterns. These results were presented at a conference.
Publications
- Lee, K. (2023). Improving English pronunciation accuracy among Korean adult learners. Modern English Education, 24(1), 272-283. https://doi.org/10.18095/meeso.2023.24.1.272
Conference presentations
- Lee, K. (2023). Pronunciation training in groups: Inductive learning and motivation. ELT conference, July 2023, Seoul.
- Lee, K. (2023). Koreans’ perceptions of English prosody. 2023 Aarhus International Conference on Voice Studies, August 2023, Aarhus, Denmark.
R code
# Import the data sets: read the CSV files into data frames
raters <- read.csv(file="/home/kent/Dropbox/phon/2023.pron.L2.proj/raters/raters.csv", header=TRUE, sep=",")
impen <- read.csv(file="/home/kent/Dropbox/phon/2023.pron.L2.proj/results/pron2023.csv", header=TRUE, sep=",")

# Descriptive statistics
library(psych)
describe(mydata)
describeBy(mydata, group)   # describe.by() is deprecated in newer versions of psych

# Create a new variable from others
data$C <- (data$A - data$B)

# Change the reference level of the factor "group"
data$group <- relevel(data$group, ref = "treatment")

# Chi-square tests
chisq.test(data_frame$treatment, data_frame$improvement, correct=FALSE)
chisq.test(impen$jNachPron, impen$jNachVowels)   # chisq.test() takes two vectors at a time

# Reshape the data into a contingency table, then test
contingency_table <- table(data$jol1, data$jol2, data$jol3, data$jol4)
result <- chisq.test(contingency_table)
print(result)

# Same approach for the jNach* ratings (column names as in the data set)
impentable <- table(impen$jNachPron, impen$jNachVowels, impen$jNachCons, impen$jNachRhyhm)
result <- chisq.test(impentable)
print(result)

#-----------------------------
# Cronbach's alpha (inter-rater consistency)
alpha(raters[,2:4])
alpha(raters[,3:5], check.keys=TRUE)

# Paired t-test on pretest and posttest columns
t.test(data$pretest, data$posttest, paired = TRUE)
# Independent-samples t-test by treatment group
t.test(posttest ~ TreatG, data = data)
# One-way ANOVA
anov <- aov(weight ~ group, data = mydata)

# HLM
library(lme4)
# Or an equivalent mixed model with pretest as a covariate
model <- lmer(Posttest ~ TreatG + Pretest + (1|ID), data = data)
# NOTE: the following model did not work here
model <- lmer(DaccGen1 ~ sms01Intrinsic + sms01ExReg + sms01Amot + tipiOpen + hrStudied + cond + accGen1pre + (1 | ncode1), data = impen)

# Paired t-tests on pre/post survey and rating variables
t.test(impen$EnBgS1, impen$EnBgS2, paired = TRUE)
t.test(impen$EnBgSpkS1, impen$EnBgSpkS2, paired = TRUE)
t.test(impen$EnBgListS1, impen$EnBgListS2, paired = TRUE)
t.test(impen$afs01Intrins, impen$afs02Intrins, paired = TRUE)
t.test(impen$afs01Auto, impen$afs02Auto, paired = TRUE)
t.test(impen$afs01Comp, impen$afs02Comp, paired = TRUE)
t.test(impen$afs01Rel, impen$afs02Rel, paired = TRUE)
t.test(impen$sms01Intrinsic, impen$sms02Intrinsic, paired = TRUE)
t.test(impen$sms01IdReg, impen$sms02IdReg, paired = TRUE)
t.test(impen$sms01ExReg, impen$sms02ExReg, paired = TRUE)
t.test(impen$sms01Amot, impen$sms02Amot, paired = TRUE)
t.test(impen$accGen1pre, impen$accGen1post, paired = TRUE)
t.test(impen$accVowel3pre, impen$accVowel3post, paired = TRUE)
t.test(impen$accCons4pre, impen$accCons4post, paired = TRUE)
t.test(impen$prosGen5pre, impen$prosGen5post, paired = TRUE)
t.test(impen$sylLeng6pre, impen$sylLeng6post, paired = TRUE)
t.test(impen$accStress7pre, impen$accStress7post, paired = TRUE)
t.test(impen$inton8pre, impen$inton8post, paired = TRUE)
t.test(impen$fluss9pre, impen$fluss9post, paired = TRUE)
t.test(impen$Comp2pre, impen$Comp2post, paired = TRUE)
t.test(impen$gramm10pre, impen$gramm10post, paired = TRUE)
t.test(impen$gramlexsof11pre, impen$gramlexsof11post, paired = TRUE)

# To mean-center a variable:
mydata$Extroversion <- scale(mydata$Extroversion, center=TRUE, scale=FALSE)

# HLM
model <- lmer(Diff ~ TreatG + Extroversion + Intrinsic + Extrinsic + AFS + (1|ID), data=mydata)

# Mean-center the pretest scores
data$pretest_mc <- scale(data$pretest, center = TRUE, scale = FALSE)
# Fit the hierarchical linear model with pretest as a covariate and TreatG as the main predictor
model <- lmer(posttest ~ TreatG + pretest_mc + Extroversion + Intrinsic + Extrinsic + AFS + (1 | ID), data = data)
# View the model summary
summary(model)

#--------------------------------------
# impen: variables in the 2023 English pronunciation data set
# nom ncode1 age gen grade enyears cond session major enCourses rs1file fs2file
# DEnBg DEnList DEnSpk EnBgS1 EnBgRdS1 EnBgWrS1 EnBgListS1 EnBgSpkS1 EnBgS2 EnBgListS2 EnBgSpkS2
# Dafs DafsAuto DafsComp DafsRel afs01Intrins afs01Auto afs01Comp afs01Rel afs02Intrins afs02Auto afs02Comp afs02Rel
# DsmsInt DsmsIdReg DsmsExReg DsmsAmot sms01Intrinsic sms01IdReg sms01ExReg sms01Amot sms02Intrinsic sms02IdReg sms02ExReg sms02Amot
# tipiExtro tipiAgr tipiConsc tipiStable tipiOpen
# LnAnxG LnAnxInput LnAnxProc LnAnxOut
# jolG jol01 jolImprove jolremem jolDiffMat
# DVorNach VorDiff Nach jNachPron jNachVowels jNachCons jNachRhyhm
# Eff2 hrStudied
# accGen1pre Comp2pre accVowel3pre accCons4pre prosGen5pre sylLeng6pre accStress7pre inton8pre fluss9pre gramm10pre gramlexsof11pre
# accGen1post Comp2post accVowel3post accCons4post prosGen5post sylLeng6post accStress7post inton8post fluss9post gramm10post gramlexsof11post
#--------------------------------------

describe(impen[, c("jolG", "jolremem")])   # describe() takes a data frame, not multiple vectors
impen$DaccGen1 <- (impen$accGen1pre - impen$accGen1post)
chisq.test(impen$cond, impen$DaccGen1, correct=FALSE)
t.test(DaccGen1 ~ cond, data = impen)
model <- lmer(Posttest ~ TreatG + Pretest + (1|ID), data = data)
anov <- aov(DaccGen1 ~ session, data = impen)   # aov(), not anova(), fits the model
impen$accGen1pre_mc <- scale(impen$accGen1pre, center = TRUE, scale = FALSE)

# GLMs (Gaussian family with identity link, i.e. ordinary linear models)
model <- glm(accGen1post ~ EnBgS1 + afs02Rel + tipiOpen + accGen1pre_mc + cond, data = impen, family = gaussian(link = "identity"))
model <- glm(accGen1post ~ EnBgS1 + afs01Intrins + tipiOpen + accGen1pre_mc, data = impen, family = gaussian(link = "identity"))
model <- glm(EnBgS2 ~ EnBgS1 + afs01Comp + accGen1pre_mc, data = impen, family = gaussian(link = "identity"))
model <- glm(EnBgSpkS2 ~ EnBgSpkS1 + afs02Comp, data = impen, family = gaussian(link = "identity"))
model <- glm(EnBgListS2 ~ EnBgListS1 + afs02Comp, data = impen, family = gaussian(link = "identity"))
model <- glm(sms02Amot ~ session + sms01Amot, data = impen, family = gaussian(link = "identity"))

# Convert cond to a factor and change its reference category before using it as a predictor
impen$cond <- factor(impen$cond)
impen$cond <- relevel(impen$cond, ref = "z")
model <- glm(jolImprove ~ EnBgS1 + afs02Comp + cond, data = impen, family = gaussian(link = "identity"))

#----------------------------
# Test for interaction effects
#----------------------------
model <- glm(sms02Intrinsic ~ sms01Intrinsic * cond, data = impen, family = gaussian(link = "identity"))
model <- glm(sms02Intrinsic ~ afs02Rel + hrStudied + sms01Intrinsic + sms01Intrinsic*cond, data = impen, family = gaussian(link = "identity"))
# Variable names corrected below ("condition" and "Amotivated" are not in impen);
# Gaussian rather than binomial family, since sms02Intrinsic is continuous
model <- glm(sms02Intrinsic ~ sms01Intrinsic * cond + sms01Amot * cond, data = impen, family = gaussian(link = "identity"))
model <- glm(sms02Amot ~ afs02Rel + sms01Amot + cond, data = impen, family = gaussian(link = "identity"))
model <- glm(sms02Amot ~ afs01Rel + sms01Amot + cond, data = impen, family = gaussian(link = "identity"))
model <- glm(sms02Amot ~ afs01Rel + sms01Amot + cond + sms01Amot*cond, data = impen, family = gaussian(link = "identity"))
model <- glm(sms02Amot ~ afs01Rel + sms01Amot + session + sms01Amot*session, data = impen, family = gaussian(link = "identity"))

# Ordinal logistic regression (labeled as such, but fit here as a Gaussian GLM;
# MASS::polr would give a true ordinal model for the jNach* ratings)
library(MASS)
model <- glm(jNachPron ~ jNachVowels + jNachCons + jNachRhyhm, data = impen, family = gaussian(link = "identity"))
model <- glm(sms02Amot ~ sms01Amot + hrStudied + jNachRhyhm, data = impen, family = gaussian(link = "identity"))
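The final models above are labeled "ordinal logistic regression" but are fit as Gaussian GLMs. A sketch of a true ordinal fit with MASS::polr, on simulated data with hypothetical variable names (assuming, as the jNach* columns suggest, a Likert-type outcome):

```r
# Ordinal logistic regression with MASS::polr -- illustrative data only;
# the response must be an ordered factor with at least three levels.
library(MASS)

set.seed(2)
n  <- 100
df <- data.frame(
  vowels = sample(1:5, n, replace = TRUE),
  cons   = sample(1:5, n, replace = TRUE)
)
# Simulate an ordinal outcome driven by the two predictors
score   <- df$vowels + df$cons + rnorm(n)
df$pron <- ordered(cut(score, breaks = 3, labels = c("low", "mid", "high")))

# Hess = TRUE stores the Hessian so summary() can report standard errors
fit <- polr(pron ~ vowels + cons, data = df, Hess = TRUE)
summary(fit)
```

Unlike a Gaussian GLM, polr models the cumulative log-odds of the ordered categories, which respects the ordinal (rather than interval) nature of rating-scale responses.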