Multidisciplinary Collaborative Journal | Vol. 04 | Núm. 02 | Abr–Jun 2026 | https://mcjournal.editorialdoso.com
ISSN: 3073-1356

Article
The use of AI tools (ChatGPT, Grammarly, DeepL) in self-directed English learning at Ecuadorian universities

El uso de herramientas de inteligencia artificial (ChatGPT, Grammarly, DeepL) en el aprendizaje autodirigido del inglés en universidades ecuatorianas
Margarita Elisa Montero Bastidas 1* and Lady Viviana Quintuña Barrera 2

1 Universidad Agraria del Ecuador, Milagro, Ecuador; https://orcid.org/0009-0007-6875-4967
2 Universidad Agraria del Ecuador, Milagro, Ecuador; https://orcid.org/0009-0005-6325-6630; lquintuna@uagraria.edu.ec
* Correspondence: mmontero@uagraria.edu.ec

https://doi.org/10.70881/mcj/v4/n2/150
Abstract: The integration of artificial intelligence (AI) tools into language teaching and learning has grown exponentially in recent years. This study examines the use of ChatGPT, Grammarly, and DeepL as resources to support self-directed English learning among university students in Ecuador. Using a quantitative descriptive design with a sample of 40 students from the Universidad Agraria del Ecuador (UAE), a validated questionnaire was applied to measure frequency of use, perceived usefulness, and impact on learning autonomy. Results indicate that 90% of participants regularly use ChatGPT, followed by Grammarly (80%) and DeepL (70%), with all tools receiving perceived usefulness scores above 4.0 on a 1–5 scale. Statistically significant improvements were also observed in key dimensions of self-regulated learning, including goal setting, resource management, and motivation. It is concluded that AI tools constitute an effective resource for enhancing autonomous English learning; however, their pedagogical integration requires teacher guidance to prevent cognitive dependency.

Keywords: gamification, active, dynamics, teaching, performance
Resumen: La integración de herramientas de inteligencia artificial (IA) en la enseñanza y aprendizaje de idiomas ha experimentado un crecimiento sin precedentes en los últimos años. El presente estudio examina el uso de ChatGPT, Grammarly y DeepL como recursos de apoyo al aprendizaje autónomo del inglés en estudiantes universitarios ecuatorianos. Mediante un diseño cuantitativo descriptivo con una muestra de 40 estudiantes de la Universidad Agraria del Ecuador (UAE), se aplicó un cuestionario validado para medir la frecuencia de uso, utilidad percibida y efecto sobre la autonomía en el aprendizaje. Los resultados muestran que el 90% de los participantes utiliza ChatGPT con regularidad, seguido de Grammarly (80%) y DeepL (70%), y que todas las herramientas presentan puntuaciones de utilidad percibida superiores a 4.0 en una escala de 1 a 5. Asimismo, se evidenció una mejora estadísticamente significativa en dimensiones clave
Citation: Montero Bastidas, M. E., & Quintuña Barrera, L. V. (2026). El uso de herramientas de inteligencia artificial (ChatGPT, Grammarly, DeepL) en el aprendizaje autodirigido del inglés en universidades ecuatorianas. Multidisciplinary Collaborative Journal, 4(2), 54–66. https://doi.org/10.70881/mcj/v4/n2/150
Received: 06/03/2026
Revised: 18/04/2026
Accepted: 21/04/2026
Published: 28/04/2026
Copyright: © 2026 by the authors. This article is an open access article distributed under the terms and conditions of the Creative Commons License, Attribution-NonCommercial 4.0 International (CC BY-NC) (https://creativecommons.org/licenses/by-nc/4.0/).
del aprendizaje autorregulado, tales como el establecimiento de metas, la gestión de recursos y la motivación. Se concluye que las herramientas de IA constituyen un recurso eficaz para potenciar el aprendizaje autónomo del inglés, aunque su integración pedagógica requiere orientación docente para prevenir dependencia cognitiva.

Palabras claves: gamificación, activo, dinámicas, enseñanza, rendimiento
1. Introduction
The rapid expansion of artificial intelligence (AI) technologies has transformed multiple domains of contemporary life, and education is no exception. In the field of English as a Foreign Language (EFL), AI-powered tools such as ChatGPT, Grammarly, and DeepL have emerged as widely accessible resources that learners can integrate into their independent study routines outside the formal classroom (Abdullah, 2025; Aldulaijan & Almalki, 2025). These tools offer real-time feedback, translation support, writing assistance, and interactive conversational practice features that align with the core principles of self-directed learning (SDL), namely learner autonomy, goal setting, self-monitoring, and metacognitive regulation (Moorhouse et al., 2024; Van Wyk, 2025).

In the Ecuadorian higher education context, English proficiency remains a critical academic and professional requirement, yet institutional resources and teaching hours are often insufficient to meet learner needs (Fan et al., 2025; Farrokhnia et al., 2024). As a result, students increasingly turn to AI tools as supplementary learning resources. However, little empirical evidence exists regarding how these tools are used, how useful students perceive them to be, and to what extent they promote or hinder autonomous learning behaviours (Habeb Al-Obaydi & Pikhart, 2025; Jadhav, 2026).

Research conducted in other geographic contexts has documented the benefits of AI tools for EFL writing (Abdullah, 2025; Kurt & Kurt, 2024), pronunciation (Hirschi et al., 2025; Mompean, 2024), vocabulary acquisition (Sekitani et al., 2025), and speaking practice (Sok & Shin, 2025). However, scholars have also raised concerns about the risk of metacognitive laziness and over-reliance on AI-generated outputs (Fan et al., 2025), as well as questions of academic integrity (Saarna, 2024) and the differential impact of generative AI on learner motivation and self-regulation (Huang & Mizumoto, 2025).

Despite the growing body of international literature, research examining AI tool use in Latin American, and specifically Ecuadorian, university EFL contexts remains scarce. Understanding how students in this region engage with AI tools is essential for designing evidence-based pedagogical interventions that harness their benefits while mitigating potential drawbacks (Farrokhnia et al., 2024; Yetkin, 2026).

The present study addresses this gap by investigating the frequency of use, perceived usefulness, and self-directed learning impact of ChatGPT, Grammarly, and DeepL among undergraduate students at the Universidad Agraria del Ecuador (UAE). The main objective is to describe and analyse the ways in which these AI tools are integrated into students' self-directed English learning practices.
2. Materials and Methods
2.1. Research Design
A quantitative descriptive research design was adopted. This approach was selected because it allows for the systematic measurement and description of AI tool usage patterns and their association with self-directed learning behaviours in a defined
population (Van Wyk, 2025). The study was conducted during the 2024–2025 academic year at the UAE Centro de Idiomas.
2.2. Participants
The sample comprised 40 undergraduate students (n = 40) enrolled in English language courses at the B1 level at UAE. Participants ranged in age from 18 to 27 years (M = 21.3, SD = 1.9). Purposive sampling was employed, selecting students who had prior exposure to at least one AI tool in their English learning. Participation was voluntary, and informed consent was obtained from all participants prior to data collection. Ethical approval was granted by the UAE academic research committee.
2.3. Instrument
A structured questionnaire was designed and validated for this study. The instrument consisted of three sections: (a) demographic and background information (5 items); (b) AI tool usage frequency and perceived usefulness (15 items on a 5-point Likert scale); and (c) self-directed learning behaviours adapted from established SDL frameworks (Wolf & Suhan, 2025; Zou & Huang, 2024). Content validity was established through expert review by three specialists in EFL methodology and educational technology. Internal consistency was assessed using Cronbach's alpha (α = .87), indicating high reliability.
2.4. Procedure
The questionnaire was administered online via Google Forms during regular class sessions. Participants were instructed to respond based on their personal AI tool usage habits over the previous three months. A pre- and post-assessment design was employed to measure changes in self-directed learning dimensions before and after a six-week AI-supported learning period in which participants received structured guidance on how to use the tools effectively.
2.5. Data Analysis

Descriptive statistics (frequencies, percentages, means, and standard deviations) were computed using SPSS v.27. Paired-samples t-tests were conducted to compare pre- and post-intervention SDL scores. Statistical significance was set at p < .05. Qualitative data from open-ended items were analysed using thematic analysis following established procedures (Arifin et al., 2025).
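For readers who wish to replicate the two statistical procedures described above outside SPSS, the following Python sketch shows how Cronbach's alpha (internal consistency) and a paired-samples t-test (pre/post comparison) can be computed. This is an illustrative sketch only, not the authors' workflow: all data below are synthetic and hypothetical, generated solely to make the snippet runnable.

```python
# Illustrative sketch with synthetic data (NOT the study's real dataset).
import numpy as np
from scipy import stats

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for a (respondents x items) matrix of Likert scores."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)          # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)      # variance of total scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

rng = np.random.default_rng(0)

# Hypothetical 40 respondents x 15 Likert items (1-5), loosely correlated
# via a shared latent trait, mimicking a coherent questionnaire section.
latent = rng.normal(3.5, 0.6, size=(40, 1))
items = np.clip(np.rint(latent + rng.normal(0, 0.5, size=(40, 15))), 1, 5)
alpha = cronbach_alpha(items)

# Hypothetical pre/post SDL scores for the same 40 students; the post
# scores include a simulated improvement, as in a pre/post design.
pre = np.clip(rng.normal(3.2, 0.5, 40), 1, 5)
post = np.clip(pre + rng.normal(0.4, 0.3, 40), 1, 5)
t_stat, p_value = stats.ttest_rel(post, pre)

print(f"alpha = {alpha:.2f}, t = {t_stat:.2f}, p = {p_value:.4f}")
```

Because the paired test compares each student against their own baseline, it controls for between-student variability, which is why it suits the pre/post design used here.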
3. Results
3.1. AI Tool Usage Frequency and Perceived Usefulness
All 40 participants (100%) reported using at least one AI tool for English learning purposes outside the formal classroom setting, confirming that AI-assisted self-study has become an established practice within this student population. Table 1 summarises the frequency of use and perceived usefulness scores for each tool across the full sample.
Table 1. Frequency of use and perceived usefulness of AI tools (n = 40)

AI Tool       n (%)       Frequency of Use (Mean ± SD)   Perceived Usefulness (Mean ± SD)
ChatGPT       36 (90%)    4.2 ± 0.8                      4.5 ± 0.6
Grammarly     32 (80%)    3.9 ± 0.9                      4.3 ± 0.7
DeepL         28 (70%)    3.7 ± 1.0                      4.1 ± 0.8
Any AI tool   40 (100%)   4.0 ± 0.9                      4.3 ± 0.7

Note. Frequency of Use measured on a 5-point Likert scale (1 = never, 5 = always). Perceived Usefulness measured on a 5-point Likert scale (1 = not at all useful, 5 = extremely useful).
ChatGPT emerged as the most frequently used tool, with 36 participants (90%) reporting regular use and a mean frequency score of M = 4.2 (SD = 0.8). Its perceived usefulness rating was the highest of the three tools (M = 4.5, SD = 0.6), indicating strong learner satisfaction. Students reported using ChatGPT primarily for grammar explanation and clarification (78%), essay drafting and revision (72%), vocabulary definition in context (65%), and interactive question-and-answer practice for exam preparation (54%). A smaller but notable proportion (38%) indicated that they used ChatGPT to generate model texts they subsequently analysed and adapted for their own writing assignments, a practice consistent with findings reported by Arifin et al. (2025) and Ozfidan et al. (2024).
Grammarly was the second most widely adopted tool (80%, n = 32; M = 3.9, SD = 0.9; perceived usefulness M = 4.3, SD = 0.7). Students described it as particularly valuable for proofreading and error identification before submission of academic writing tasks. Many noted that the real-time, colour-coded feedback enabled them to understand the type and source of each error rather than simply accepting corrections, an affordance also highlighted by Kurt and Kurt (2024) in their study on ChatGPT as an automated feedback tool. Several participants specifically appreciated Grammarly's suggestions for tone and clarity, which they perceived as complementary to grammatical correction alone.
DeepL was used by 70% of participants (n = 28; M = 3.7, SD = 1.0; perceived usefulness M = 4.1, SD = 0.8). It was predominantly employed for word-level translation and meaning disambiguation (80% of DeepL users), followed by sentence-level paraphrasing (55%) and reading comprehension support when encountering unfamiliar texts in English (48%). Some participants noted a preference for DeepL over other translation services due to the perceived naturalness of its output, particularly for nuanced academic vocabulary, which aligns with observations by Habeb Al-Obaydi and Pikhart (2025) on learner satisfaction with AI-powered language tools.
When asked about the order in which they typically used these tools during a self-study session, 62% of participants described a sequential multi-tool workflow: they began by using DeepL to understand unknown vocabulary or translate difficult passages, then produced their own written output, employed Grammarly to review it, and finally used ChatGPT to seek deeper explanations or to practise interactive tasks. This emergent workflow suggests that students are developing relatively sophisticated AI-assisted study routines, even in the absence of formal instruction on how to integrate these tools.
3.2. Impact on Self-Directed Learning Dimensions
Table 2. Pre- and post-intervention self-directed learning scores (n = 40)

SDL Dimension   Pre-intervention (Mean ± SD)   Post-intervention (Mean ± SD)   p-value

Note.
3.3. Patterns of Use by Learning Skill Area
To provide a more granular description of how AI tools were deployed across different language skill areas, participants were asked to indicate which skills they most frequently practised using each tool. Responses revealed a clear skill-tool alignment pattern. ChatGPT was predominantly associated with writing (88% of ChatGPT users), grammar (82%), and reading comprehension (60%), while speaking practice was reported by only 35% of users, a finding that may reflect the text-based interface of ChatGPT, which does not natively support synchronous oral interaction in the version accessed by most participants.

Grammarly was almost exclusively associated with writing, as expected given its core functionality (97% of Grammarly users). However, 43% of Grammarly users also indicated that reviewing error feedback contributed to their grammatical knowledge more broadly, suggesting a transfer effect from corrective feedback to declarative language knowledge, consistent with patterns documented by Shin and Lee (2024). DeepL was linked primarily to reading comprehension support and vocabulary building, with 60% of DeepL users reporting that encountering words in translated context helped them retain new vocabulary more effectively than using a traditional monolingual dictionary.

Listening and speaking skills were the least supported by the three tools studied. Only 22% of participants reported using any of the three AI tools specifically to support listening comprehension, and 18% for speaking practice. This asymmetry between written and oral skill support may partially explain persistent oral proficiency challenges documented in EFL contexts across Latin America (Fan et al., 2025; Farrokhnia et al., 2024). The limited use of AI for oral skills also indicates an area for targeted pedagogical intervention, particularly given the availability of AI-powered pronunciation and speaking tools documented in recent research (Hirschi et al., 2025; Mompean, 2024).
3.4. Perceived Barriers and Challenges

Despite the generally positive perceptions reported, participants also identified a range of barriers to effective AI tool use. The most frequently cited challenge was language proficiency itself (67%), with students noting that formulating effective prompts for ChatGPT required a level of English competence that many B1-level learners found demanding. This finding points to an inherent paradox in AI-assisted EFL learning: the tools designed to support language acquisition may themselves require a minimum threshold of proficiency to be used productively (Jadhav, 2026; Pham, 2026).

Connectivity and access issues were the second most commonly reported barrier (55%), particularly among students who commuted from peri-urban or rural areas surrounding Guayaquil where internet access was unreliable. This finding echoes broader concerns about digital equity in Ecuadorian higher education and suggests that the potential of AI tools to democratise language learning support may be unevenly distributed across socioeconomic strata (Farrokhnia et al., 2024).

A third barrier identified was uncertainty about the reliability of AI-generated content (48%). Participants expressed concern about receiving incorrect grammar explanations or culturally inappropriate translations. The fact that nearly half of participants reported this uncertainty also suggests a need for explicit instruction in AI literacy, specifically in how to verify and critically evaluate AI outputs in the context of language learning (Saarna, 2024; Yetkin, 2026).
3.5. Student Perceptions: Qualitative Findings
Thematic analysis of open-ended responses yielded five overarching themes that provide qualitative depth to the quantitative findings described above.

The first and most prevalent theme was accessibility and convenience. Participants consistently described the 24/7 availability of AI tools as a transformative feature that extended their learning beyond classroom hours and removed the social anxiety associated with asking a teacher or peer for help. Representative responses included descriptions of using ChatGPT late at night before an exam, or turning to DeepL during commutes to understand words encountered in English-language social media content.

The second theme, confidence and reduced anxiety, was particularly prominent in responses related to writing tasks. Multiple participants described a qualitative shift in their willingness to attempt longer, more complex written productions after having access to Grammarly, noting that the knowledge that errors would be flagged and explained reduced the inhibition typically associated with academic writing in English. This finding aligns with broader research on writing apprehension in EFL contexts and the role of feedback in reducing affective barriers to written production (Abdullah, 2025; Kurt & Kurt, 2024).

The third theme, active learning versus passive completion, captured a tension that several participants articulated explicitly. Students described two distinct modes of AI use: an active mode in which they engaged with AI feedback to understand patterns, generate questions, and revise their own understanding; and a passive mode in which they accepted AI-generated text or corrections without deeper engagement. Participants who described the active mode tended to report higher confidence and perceived learning gains, while those who acknowledged the passive mode expressed concern about whether they were genuinely learning (Fan et al., 2025).

The fourth theme, trust calibration, described participants' evolving understanding of when to trust and when to question AI outputs. Several students noted that after receiving instructor feedback that contradicted an AI explanation, they had begun to approach AI outputs with greater scepticism. This developmental trajectory from uncritical acceptance to calibrated trust represents a key dimension of AI literacy that structured pedagogical integration can accelerate (Farrokhnia et al., 2024; Saarna, 2024).

The fifth and final theme, social comparison and peer influence, emerged from responses describing how students became aware of AI tool use within their peer networks. Several participants noted that learning about a classmate's use of DeepL or ChatGPT had motivated them to try the tools themselves, suggesting that peer modelling plays a role in AI tool adoption that has received limited attention in the existing literature (Huang & Mizumoto, 2025).
4. Discussion
4.1. AI Tool Adoption and Perceived Usefulness in the Ecuadorian EFL Context
The near-universal adoption of AI tools among the study participants (100% using at least one tool; 90% using ChatGPT) situates the Ecuadorian university EFL context firmly within global trends of AI tool diffusion in higher education. These adoption rates parallel or exceed those reported in comparable studies. Abdullah (2025) found similarly high rates of ChatGPT adoption among EFL students in academic writing contexts, while Aldulaijan and Almalki (2025) documented widespread generative AI use among postgraduate students for a variety of learning tasks. The fact that comparably high rates are now observed at an undergraduate B1-level population in a public Ecuadorian
university suggests that AI tool adoption in EFL contexts is no longer confined to technologically privileged or advanced learner populations.

The high perceived usefulness scores (M > 4.0 for all three tools) are consistent with the Technology Acceptance Model (TAM) prediction that perceived usefulness is the strongest predictor of sustained technology adoption (Aldulaijan & Almalki, 2025; Van Wyk, 2025). Importantly, the usefulness ratings were not uniformly distributed across task types: students rated ChatGPT most useful for grammar explanation and interactive practice, Grammarly for corrective writing feedback, and DeepL for comprehension support. This skill-specific utility differentiation indicates that students are developing nuanced mental models of each tool's comparative affordances, a form of tool literacy that has direct implications for how instructors might guide AI integration in curriculum design.
4.2. AI Tools and Self-Directed Learning: Theoretical Implications

The statistically significant pre-to-post improvements across all four SDL dimensions provide empirical support for the theoretical argument that AI tools, when embedded within a structured pedagogical framework, can scaffold the development of self-regulated learning behaviours in EFL contexts. This finding extends the work of Huang and Mizumoto (2025), who demonstrated that generative AI use positively influenced the L2 motivational self-system, and complements research by Sok and Shin (2025) showing that ChatGPT interaction tasks improved learner autonomy perceptions and performance on summarisation tasks.
From a Self-Determination Theory (SDT) perspective, the gains in motivation and autonomy observed in this study may be partly explained by the way in which AI tools satisfy basic psychological needs for competence and autonomy. The immediate, non-judgmental feedback provided by tools like Grammarly addresses the need for competence by making skill development visible and incremental, while the on-demand availability of ChatGPT satisfies the need for autonomy by allowing learners to direct their own inquiry without dependence on teacher availability (Wolf & Suhan, 2025). The comparatively lower gains in self-monitoring relative to the other SDL dimensions may reflect that the need for relatedness, also central to SDT, was less directly addressed by the tools studied, suggesting an area for targeted instructional design.
The improvement in resource management is theoretically significant because it suggests that guided AI integration can promote higher-order information literacy skills, not merely surface-level tool use. When students begin cross-referencing AI outputs with other sources, they are engaging in the kind of critical source evaluation that underpins academic literacy more broadly (Farrokhnia et al., 2024; Zou & Huang, 2024). These findings challenge simplistic narratives that frame AI tools as inherently antithetical to critical thinking, and support instead the view that the pedagogical context in which tools are introduced is the decisive variable in determining their cognitive outcomes.
4.3. Writing Development and AI Feedback: Opportunities and Risks

The prominent role of writing in students' AI tool use, with ChatGPT and Grammarly both used predominantly for writing-related tasks, invites detailed consideration of the relationship between AI feedback and L2 writing development. Research by Kurt and Kurt (2024) demonstrated that ChatGPT as an automated feedback tool improved L2 writing quality across multiple dimensions, including syntactic complexity and lexical diversity, while Shin and Lee (2024) explored the potential of ChatGPT as a rater of second language writing, finding acceptable agreement with human rater judgements on analytic scoring dimensions.
In the present study, students' descriptions of using Grammarly for proofreading and ChatGPT for drafting assistance align with a scaffolded writing process model in which AI tools support distinct phases of composition: pre-writing ideation, drafting, revision, and editing. When students engage with this process actively, reviewing feedback, identifying recurring error patterns, and revising independently, the potential for genuine writing development is substantial. Arifin et al. (2025) found that Indonesian EFL students who adopted a reflective, process-oriented approach to ChatGPT use in L2 writing reported greater perceived learning gains than those who used the tool primarily for text generation.

However, the passive completion mode identified in the qualitative data of the present study introduces a countervailing risk. When students accept AI-generated text without engagement, they may produce improved written products while simultaneously undermining the conditions for authentic skill development (Fan et al., 2025). Saarna (2024) identified precisely this dynamic in the analysis of ChatGPT-generated student essays, noting that the absence of genuine linguistic struggle, the productive difficulty that consolidates new knowledge, represents a hidden cost of frictionless AI assistance. This tension between immediate performance improvement and long-term proficiency development constitutes one of the most pressing unresolved questions in AI-assisted language learning pedagogy.
4.4. Cognitive Dependency and Metacognitive Laziness

The emergence of cognitive dependency as a self-reported concern among participants is theoretically consistent with Fan et al.'s (2025) construct of metacognitive laziness, defined as the tendency to outsource effortful cognitive processing to AI tools rather than engaging in the generative retrieval and elaboration processes that consolidate long-term learning. Fan et al. (2025) documented empirical evidence that high-frequency generative AI use was associated with reduced metacognitive monitoring and lower retention of course content in controlled experimental conditions, a finding that directly parallels the dependency concerns voiced by participants in the present study.
Farrokhnia et al. (2024) similarly identified dependency and reduced critical thinking as significant weaknesses in their SWOT analysis of ChatGPT for educational purposes, noting that the very features that make ChatGPT attractive, namely its fluency, responsiveness, and apparent comprehensiveness, are the same features that can discourage learners from developing independent problem-solving and linguistic reasoning capabilities. In the EFL context, this risk is particularly salient because language learning requires not only the accumulation of declarative knowledge about grammar and vocabulary, but also the development of procedural fluency, the automatic application of linguistic knowledge in real-time communication, which AI tools cannot substitute for and may inadvertently impede if they consistently remove the need for effortful practice (Pham, 2026; Sekitani et al., 2025).
The five-theme qualitative structure emerging from this study, particularly the active versus passive use distinction and the trust calibration trajectory, suggests that students are not passive recipients of AI influence, but active agents who develop increasingly sophisticated relationships with AI tools over time. This developmental perspective supports pedagogical approaches that treat AI literacy as a progressive competency to be cultivated rather than a binary skill. Instructional interventions that make the active versus passive use distinction explicit, encourage metacognitive reflection on AI interaction patterns, and provide structured opportunities for trust calibration are likely to maximise the SDL benefits of AI tool integration while mitigating dependency risks (Yetkin, 2026).
4.5. The Digital Equity Dimension

The connectivity and access barriers reported by 55% of participants raise important questions about the equity implications of AI tool integration in Ecuadorian higher education. If AI tools function as significant enhancers of self-directed English learning, as the present findings suggest, then differential access to these tools based on socioeconomic status, geographic location, or institutional infrastructure may constitute a new axis of educational inequality that compounds existing disparities in English language proficiency outcomes (Farrokhnia et al., 2024; Habeb Al-Obaydi & Pikhart, 2025).

University language centres and EFL programme coordinators in Ecuador should consider how AI tool integration policies can be designed to avoid exacerbating existing inequalities. Potential responses include providing offline-capable AI tool access via institutional networks, offering structured in-class AI-assisted learning time that does not depend on home connectivity, and designing AI integration curricula that can be implemented at varying levels of tool access without disadvantaging less connected students. These equity considerations are not peripheral to the pedagogical question of AI tool integration; they are central to any responsible institutional policy on the matter (Jadhav, 2026; Yetkin, 2026).
4.6. Limitations and Future Research Directions
Several limitations of the present study merit acknowledgement. First, the
sample size
(n = 40), while adequate for a pilot descriptive study at a single institution, limits the
statistical power of the pre
-
post comparisons and constrains the generalisability of
findings to other Ecuadorian universities, proficiency levels, or di
sciplinary contexts.
Future research should employ larger, multi
-
institutional samples to enable more robust
inferential analyses and support cross
-
context comparisons (Aldulaijan & Almalki, 2025).
Second, the exclusive reliance on self-report data introduces common method variance and social desirability bias, particularly in responses related to dependency and passive tool use. Students who engage in passive AI-assisted task completion may underreport this behaviour due to perceived academic integrity norms. Future studies should triangulate self-report data with direct observations of AI-assisted study sessions, analysis of chat interaction logs, and assessment of actual writing quality changes as objective indicators of learning outcomes (Arifin et al., 2025; Saarna, 2024).
Third, the six-week intervention period, while sufficient to detect statistically significant SDL score changes, does not permit conclusions about the long-term sustainability of the improvements observed. Longitudinal research tracking learner outcomes over full academic years or across proficiency transitions would provide more informative evidence about the enduring impact of structured AI tool integration on autonomous English learning (Huang & Mizumoto, 2025; Sok & Shin, 2025).
Finally, this study focused exclusively on three specific AI tools. The AI tool landscape is evolving rapidly, and new applications, including AI-powered pronunciation coaches (Hirschi et al., 2025), adaptive vocabulary platforms, and multimodal conversation partners, are expanding the range of AI-assisted learning affordances available to EFL learners. Comparative research examining how different tool configurations, integration approaches, and learner profiles interact to shape SDL outcomes represents a productive and urgently needed direction for the field (Mompean, 2024; Zhang & Umeanowai, 2025).
5. Conclusions
This study provides empirical evidence that artificial intelligence tools, specifically ChatGPT, Grammarly, and DeepL, are widely adopted and highly valued by undergraduate EFL students at the Universidad Agraria del Ecuador as resources for self-directed English learning. The findings revealed high usage rates across all tools, with ChatGPT the most frequently used, and consistently strong perceived usefulness scores. Importantly, statistically significant improvements were observed across all dimensions of self-directed learning, including goal setting, resource management, self-monitoring, and motivation. These results confirm that AI tools, when integrated within a structured pedagogical framework, can effectively enhance learner autonomy, engagement, and self-regulation in EFL contexts.
At the same time, the study highlights critical pedagogical considerations. While AI tools offer substantial benefits, the risk of cognitive dependency and passive learning behaviors underscores the need for guided and reflective use. The findings suggest that the effectiveness of AI in language learning depends not only on access to the tools but also on the instructional strategies that support their use. Therefore, integrating AI literacy into EFL curricula, promoting active engagement with AI-generated feedback, and ensuring equitable access to digital resources are essential steps for maximizing their educational potential. Future research should expand the scope of analysis through larger samples, longitudinal designs, and the inclusion of additional variables such as language anxiety and proficiency development.
Author contributions: Conceptualization, M.E.M.-B.; methodology, M.E.M.-B.; formal analysis, L.V.Q.-B.; investigation, L.V.Q.-B.; resources, M.E.M.-B.; original draft writing, L.V.Q.-B.; writing, revision, and editing, M.E.M.-B.; visualization, L.V.Q.-B. and M.E.M.-B.; supervision, M.E.M.-B. All authors have read and accepted the published version of the manuscript.
Funding:
This research has not received external funding.
Acknowledgements: The authors acknowledge the support of Universidad Agraria del Ecuador and extend sincere thanks to all participating students and educators whose commitment and engagement were fundamental to the successful completion of this research.
Data availability statement: The data are available upon request from the corresponding author: mmontero@uagraria.edu.ec
Conflict of interest: The authors declare no conflict of interest.
References
Abdullah, M. Y. (2025). Probing into EFL students' perceptions about the impact of utilizing AI-powered tools on their academic writing practices. Educ Inf Technol, 30, 21189–21220. https://doi.org/10.1007/s10639-025-13601-w
Aldulaijan, A. T., & Almalki, S. M. (2025). The impact of generative AI tools on postgraduate students' learning experiences: New insights into usage patterns. Journal of Information Technology Education: Research, 24, Article 3. https://doi.org/10.28945/5428
Arifin, M. A., Rahman, A. A., Balla, A., Susanto, A. K., & Pratiwi, A. C. (2025). ChatGPT affordances and Indonesian EFL students' perceptions in L2 writing: A
Sok, S., & Shin, H. W. (2025). Do interactions with ChatGPT influence L2 learners' oral speaking ability, summarization ability, and perceptions of generative AI tasks? TESOL J, 59, S19–S51. https://doi.org/10.1002/tesq.70001
Shin, D., & Lee, J. H. (2024). Exploratory study on the potential of ChatGPT as a rater of second language writing. Educ Inf Technol, 29, 24735–24757. https://doi.org/10.1007/s10639-024-12817-6
Van Wyk, M. M. (2025). Student teachers' leveraging GenAI tools for academic writing, design, and prompting in an ODeL course. Open Praxis, 17(1), 95–107. https://doi.org/10.55982/openpraxis.17.1.711
Wolf, M. K., & Suhan, M. (2025). Language assessment and learning through AI technology: An exploratory study on using GPT for young EFL learners' writing. Language Teaching Research Quarterly, 50, 77–100. https://doi.org/10.32038/ltrq.2025.50.07
Yetkin, R. (2026). Redefining teacher–technology relationships: AI-driven platforms in EFL classrooms through the eyes of pre-service teachers. European Journal of Education, 61(1), e70453. https://doi.org/10.1111/ejed.70453
Zou, M., & Huang, L. (2024). The impact of ChatGPT on L2 writing and expected responses: Voice from doctoral students. Educ Inf Technol, 29, 13201–13219. https://doi.org/10.1007/s10639-023-12397-x
Zhang, X., & Umeanowai, K. O. (2025). Exploring the transformative influence of artificial intelligence in EFL context: A comprehensive bibliometric analysis. Educ Inf Technol, 30, 3183–3198. https://doi.org/10.1007/s10639-024-12937-z