
Technology-Facilitated Harms and Internet Safety in the Age of AI

February 9, 2026, by Lorleen Farrugia

2025 saw a drastic rise in reports of online child sexual abuse material (CSAM) and technology-facilitated child sexual exploitation and abuse (TF-CSEA), attributed to the widespread influence of AI in the online space. This rapidly changing landscape demands adaptation in the fields of safeguarding and child safety, as AI makes CSAM more widely available, accessible and sophisticated. As we at A1 Research reflect on the current situation this Safer Internet Day, the question of how the internet can actually be made safer for children and teens comes to mind.

Reports by the Internet Watch Foundation indicate that 2025 was the worst year on record for AI-generated CSAM, with a 26,362% rise in photo-realistic AI videos of child sexual abuse – 3,440 compared to 13 in 2024 – the majority involving real victims. Furthermore, an anonymous poll hosted on child predator forums by internal moderators found that almost 80% of respondents had used, or planned to use, generative AI for child sexual exploitation. AI image-generation models such as Stable Diffusion have only become more sophisticated, realistic and easy to access, making it all the simpler for those who seek to create CSAM online. Many websites and AI chatbots, even on the clear web, lack proper safeguards to prevent children from encountering AI-generated CSAM – as seen in the recent controversies surrounding Meta's chatbots and X's Grok AI in their interactions with children.

[Image: the word "safety" spelled out in fridge magnets, placed by a child's hand]

This AI-driven proliferation of CSAM online poses obvious risks to the children involved, as well as to professionals working in child protection. The sheer volume of AI-generated CSAM risks muddying the waters: making it much harder to distinguish real CSAM from computer-generated material, adding to the workload of law enforcement and NGOs, derailing investigation and prosecution efforts, and increasing the mental health toll on professionals dealing with this influx of material. Investigators face the added challenge of determining whether the victim depicted is in fact a real person. Children may also be victimised or re-victimised without their knowledge, as more CSAM matching the likeness of the child in the original material can easily be generated. Finally, cases are already being reported in which generative AI facilitates the grooming and sexual extortion of minors, either by creating explicit imagery to extort victims who have never shared sensitive images, or by powering full-scale sextortion operations. This victimisation and re-victimisation does not occur only through interaction with strangers: beginning in mid-2023, male students at several U.S. middle and high schools reportedly used 'nudify' apps to create deepfake nudes of their female classmates.

Attempting to protect children online from such dangers means walking a fine line between over- and under-protection. Children and adolescents use the internet as a digital playground: crossing borders, exploring different cultures, making friends and having fun. Across different countries, research has shown that the majority of children and teens do not consider themselves unsafe online, and another study showed that even fewer consider the possibility of online child sexual exploitation and abuse or online CSAM a risk. Children also typically view the use of AI in a positive light. This makes it all the more essential to raise awareness and education around online issues; however, this should not come at the expense of limiting children's exploration and social development online. Hardline policing, intensive security and blocking mechanisms, and surveillance and tracking of children's online activity by caregivers have been found to increase feelings of fear and oppression, and to diminish trust in the relationship between caregivers and children. Such measures also risk turning illicit websites and games into 'forbidden fruit', encouraging children to find ways around safeguards and to hide their activity – and any harms they encounter – from well-meaning guardians and caregivers.

Tackling online harms is therefore far from straightforward. Evidence consistently shows that the most effective responses to online harm are those grounded in research and shaped by children's own experiences. Rather than relying on rigid monitoring or purely technical controls, safer online environments are built through approaches that prioritise dialogue, mediation and meaningful participation. Creating regular opportunities for children to share how they use digital spaces, what feels safe or unsafe, and what support they value allows adults to respond in ways that are informed, credible, child-centred and proportionate.

Research-led practice also highlights the importance of age-appropriate support: from nurturing open conversations and trusted networks for younger users, to empowering older adolescents with knowledge of peer-supported and moderated reporting channels. By listening closely to children’s voices and continually testing what works through evidence and insight, we can design solutions that reflect how young people actually live, learn and connect online.

At A1 Research, we believe this moment calls for collective action. Educators and caregivers must be better supported with the tools, evidence and confidence to address digital risks alongside the children in their care, not on their behalf. At the same time, we must continue to work with technology companies, and advocate to them, for greater transparency, child-centred design and responsible moderation of AI systems. Creating a digital environment where children can truly thrive means moving beyond reactive surveillance and committing to proactive, research-informed empowerment. As AI rapidly evolves, so too must our shared responsibility to listen to children's voices, invest in education, and strengthen the trusted relationships that remain our most powerful safeguard.

Lorleen Farrugia

