How can we ensure the use of Generative-AI for Qualitative Research aligns with our values as Social Researchers?
- Christina Silver
On Thursday 21st August 2025 I was invited to give the plenary talk to open a debate about Generative-AI for qualitative research at the Summer School in Social Science Methods at the Università della Svizzera italiana in Lugano, Switzerland.
All week I'd been teaching my course on Integrating AI into the qualitative workflow, and had been enjoying the discussions with my students who were engaged, thoughtful, and prepared to critically engage with when AI is not appropriate for aspects of the qualitative workflow, as well as when, why and how it might be.
We'd spent a lot of time talking about technological capabilities, their methodological implications and ethical questions, as is always the case when I teach these topics. But this talk aimed to promote broader, more fundamental discussions about what it means to do social research, and be a social researcher.
Below you can read my talk, along with selected slides. It was followed by reflections and counter-arguments by Giovanni Colavizza and Bernard Kittel, and questions from the audience (of circa 85 in-person and 30 online attendees), and the session was chaired by Benedetto Lepori, one of the Summer School's organisers.
If you prefer to listen or watch this - head on over to my audio (Spotify) or video (YouTube) podcast. However you engage with this, I welcome your thoughts and constructive comments/criticisms to further the debate.

So the title of my piece this afternoon is: How can we ensure the use of Generative-AI for Qualitative Research aligns with our values as Social Researchers?
To address this question, we need to consider what our values as social researchers ARE in the context of qualitative work.
But first, what is social research? There are many definitions, but this one comes from the UK-based Social Research Association, a registered charity that since 1978 has aimed to promote excellence in social research, working with universities, government, research agencies, charities, and individual consultancies. They say that:
“Social research helps us to understand public opinion, attitudes and behaviour. It uses tried and tested methods that give reliable findings. It provides evidence that government, public bodies, charities and other organisations need to develop policies and make decisions”.
Let’s unpack this a little bit in terms of values. First of all, an aspect of the SOCIAL in social research relates to UNDERSTANDING – what Max Weber, using the German term ‘verstehen’, emphasised in analysing meaning in social action. In the SRA's definition, that understanding is constructed as relating to public opinion, attitudes and behaviours.

We can consider UNDERSTANDING as a value of social research. Social Research of course includes more than only Qualitative work, encapsulating many methodologies. But understanding as a value of social research fundamentally aligns with qualitative research, with its focus on meaning, interpretation, and context.
In the latter part of the definition, the emphasis on enabling different kinds of organisations to develop policies and make decisions clearly indicates that a purpose and outcome of social research is IMPACT – impact on humans. In other words, we can think of the impact of social research as a source of its value.

Therefore as social researchers we are concerned with the impact of the research we do. The SO WHAT of our work.
How we do this – a core focus of this summer school, reflected in the middle part of the definition – is that we endeavour, whether we’re academics, government or applied researchers, to do research with, about, and for humans in ROBUST ways.

Both ETHICS and METHODOLOGY are therefore cornerstones of what it means to be a social researcher, and of what is involved in doing social research.
So let’s revisit the question I am posing today: How can we ensure the use of Generative-AI for qualitative research aligns with our values as social researchers?
The HOW at the beginning of this question implies there is an ability to ensure that the use of GenAI for Qualitative Research aligns with our values as social researchers.
I’m no linguist; I’m sure there are others in the room today who can better explain the functioning of “how” as an adverb in this sentence.
For me, as a qualitative methodologist and social researcher rather than a linguist, the HOW suggests that it is actually possible to ensure the use of GenAI aligns with our values as social researchers.
But today, I want to pause and question this.
And so, I want to consider the different question that arises when we remove the “how” – so that we ask instead, CAN we ensure the use of Generative-AI for Qualitative Research aligns with our values as Social Researchers?
I believe this is an important – a fundamental, actually – starting point for discussions about the infiltration of GenAI into qualitative spaces. Because informed choices should, in fact, start with questioning the whole premise of why we are even considering using GenAI for qualitative research. Today I want to discuss this in the context of what it means to be a social researcher, and to do social research.
You might find this a bizarre statement considering I’ve spent this week facilitating a course on harnessing GenAI for qualitative research. You may be wondering WHY I am suggesting we should question the premise of why we are considering using GenAI for qualitative research.
There are many reasons.
Firstly, because for the first time in the history of the field of computer-assisted qualitative data analysis, the whole community is faced with new tools that have been foisted upon us without our asking for them, and that we are now trying to work out a use for.
When groups of researchers and computer scientists first began developing software to manage qualitative materials and facilitate the ‘messy’ process of qualitative analysis, back in the 1980s, they did so to try and solve an identified problem. In other words, tools were developed to meet a specific need. Initially those needs were largely practical data-management needs; later, more analytic needs were addressed through the development of tools.
Until recently, this has been the case throughout the CAQDAS field. As each new product entered the field, its developers expressed the gap they were filling, or an unmet need they were addressing.
Then in November 2022, when ChatGPT was released, suddenly a new technological capability was thrust upon us, and qualitative researchers and developers alike began scrabbling around to work out how to use it. This is a major departure from how the relationship between methods and tools has been enacted in the field previously.
Although I’m pragmatic about the fact that this is the situation we find ourselves in, and I do believe that technology can inform methods, I’m not convinced this direction of action is the best way to develop tools or foster methodological innovation.

Tools are powerful. I love tools. But we mustn’t use them in a methodological vacuum, and it worries me that methods are being forgotten in the race to harness GenAI tools for fear of missing out.
Methods are fundamental to how we engage with data, how we scaffold our analytic practice, and how we communicate and illustrate the rigour of our process, which is what ensures findings are reliable and meaningful.
And in our class this week, we've spent a lot of time reflecting on the methodological implications of using GenAI tools for different phases of the qualitative research workflow.
But this evening I want primarily to talk about our values as social researchers, and whether the use of GenAI aligns with those values in the context of qualitative research.
For the purposes of this afternoon, I’m going to focus, from now on, on values in terms of impacts – suggesting that inherently, in this context, the SOCIAL in social research is the aim to do social good.
If this is an aspect of what drives us, can we justify the use of GenAI when we know about, AND TAKE ACCOUNT OF, the impacts of our use of GenAI?
Does it sit comfortably with us, as social researchers aiming to do social good, when we uncover and reflect on the social bad that the development of LLMs and the impact of their use is doing?
Let’s just consider four aspects, for now…
First of all, there is the behaviour of the big tech companies who scrape the internet, grabbing everyone’s data without permission or recompense to those who generated the material in the first place: the artists, the musicians, the authors, the scholars….
Does it fit with our values as social researchers to use models we know were developed in this way?
Related to this is the exploitation of, and effect on, the humans involved in fine-tuning AI models.
When we know about their employment conditions and the nature of the content they have to sift through, does it sit well with us as social researchers?
Then there is the impact that running and using AI models has on the environment.
Escalating energy demands and water use are required to run the massive data centres, compromising communities, farming, and biodiversity. There is the effect this new demand has on efforts to mitigate the global climate and biodiversity crises. And there is the unequal distribution of those environmental consequences.
Does it fit with our values as social researchers to use models we know have such an effect on the environment and communities – and likely not the communities who are predominantly using the technologies?
And then there are the implications for humans who become deeply involved with, and build relationships with, AI models. How do we feel when we hear stories of AI models telling humans to harm themselves?
Do we really want to use models when we know this is what happens in some cases when humans engage with them?
The exertion of dominance by powerful tech companies and countries through technological control – via, for example, data and resource extraction and labour exploitation – is what is referred to as techno-colonialism.
Does this sit comfortably with us as social researchers?
These three books are well worth a read for critical perspectives on the broader socio-political implications of the use of GenAI, which, as social researchers, we cannot ignore.

As a result of such issues, many qualitative researchers flat-out refuse to use GenAI. They are what Virginia Braun, famous for her work with Victoria Clarke elucidating and popularising Reflexive Thematic Analysis, calls ‘conscientious objectors’.

If you’re interested in these topics, then I’d highly recommend you watch this talk she gave recently.

And also, episode 10 of my podcast features Janet Salmons, another eminent methodologist who has written extensively on qualitative methods and doing research online. She objects strongly to the use of GenAI by scholars on ethical grounds, most notably the issue of stolen data and what it means to be a scholar with integrity.
Are we being hypocritical in even contemplating the use of GenAI for qualitative research, let alone using it? This is a question I’d like to raise this evening, to hear your thoughts on.

But before opening up for discussion on this and any other points you’d like to bring to the table, you may be wondering how I handle this question myself.
So, the first thing to say is that I am a pragmatist. Before I began to learn about the issues raised earlier, my initial reaction, from a methodological perspective, was a mix of scepticism, concern and intrigue:
• scepticism about whether these tools could actually do what was being claimed they could
• concern about what these technologies would mean for the field of qualitative research
• intrigue about how researchers would react to the capabilities and how developers would adopt them
But it was clear I not only needed to get to grips with what was happening and how it might impact the field, but that it was my responsibility to do so.
The emergence of GenAI changes the qualitative research space whether researchers, methodologists or teachers like it or not. We must critically engage with the role and implications of the tools we use. We must be open to and engage with, others’ opinions and perspectives.
Debates like this one are an important part of that. There are strong feelings on both sides of the argument. As well as the conscientious objectors, there are the enthusiastic adopters. I have no doubt there are a range of views in the room here today.

As a community with the common aim of doing good with our social research, despite the diversity in our methods and perspectives, we must come together to debate the issues in open, collegiate and meaningful ways.
We should call out the charlatans, absolutely. But to demonise those who are using Generative-AI, without being open to understanding what they’re doing, why they’re doing it, and how, belies a form of dogmatism that is unhelpful.
If you want to know more about my views on these topics, check out this post I wrote just before the summer break on my blog.
There is also a lot of relevant information on the QualAI pages of the CAQDAS Networking Project website, where we are building a repository of useful resources on topics related to the use of GenAI for QDA with the community.
So that's the script of my talk. Afterwards, two discussants – Giovanni Colavizza and Bernard Kittel – were asked to respond, and then the debate was opened up to questions from the floor. Most of their comments centred on the technicalities of how LLMs work, the pressures on young social scientists to use AI to gain competitive advantage, the futility of banning AI, the choices social scientists have in using AI for their work, the grey area around what is considered fair use, and suggestions that if renewable energy is used to run data centres it's not an environmental issue.

There were what I thought were some great questions from the audience, which directly spoke to the issues I raised, including one questioning the framing of Generative-AI as a neutral tool that we have collective control over, and another asking what it will mean in the future to be a social scientist interested in studying human relations if it becomes impossible to distinguish between human-generated and AI-generated content.
All in all it was an interesting evening and great to hear and discuss a range of perspectives.