
Are we losing our (qualitative) minds if we use Generative AI?

On 17th March I was invited to give a keynote at the MAXDAYS 2026 Europe virtual conference. I was grateful for the opportunity to talk about the relationship between methodology and technology, which is something I’ve spent much of my career thinking about. Here's what I said. Keep an eye out for the link to the recording once it's available.


 

Image Source: Bart Fish & Power Tools of AI / https://betterimagesofai.org / https://creativecommons.org/licenses/by/4.0/

To get straight to the point, the title of my talk asks a question: are we losing our (qualitative) minds? And of course that relates to the impact of Generative AI on the field of qualitative data analysis.






The reason ‘qualitative’ is in parentheses is that there are two aspects to this question:

  • whether we're generally losing our minds to Generative AI, which speaks to questions about whether our use of it makes us less critical, less thoughtful, less inclined to question and think for ourselves

  • and whether, in the rush to use Generative AI, we are losing what it means to work qualitatively, which speaks to questions of whether Generative AI can actually do what we need it to do as qualitative analysts

 

We might be, but we needn't

My short answer to this question is: we might be, but we needn’t.

  • we do not have to lose our minds in either of the ways I mentioned

  • our use of Generative AI does not have to mean we do not engage in critical thinking, reflexivity or interpretation as humans or as qualitative researchers, IF we think about how we are using it

  • Generative AI does not take those fundamental tasks away from us, BECAUSE it can’t do them


However, it’s up to us, in the way we choose to use Generative AI, to ensure this does not happen. We have agency over which tools we choose to use, when we choose to use them, and how. And of course, we also have responsibilities for the outputs we generate, whatever tools we use.


If you’re familiar with my work over the past 10 years or so, you’ll know that I emphasise the importance of analytic strategies driving the way we use tools in qualitative work. In other words, we use tools to enact our methods, rather than tools being the architect of methods. This is what the CAQDAS pedagogy I developed with Nick Woolf, the Five-Level QDA method, emphasises.


And I argue that in light of Generative AI, this way of thinking about qualitative analysis methods and our use of tools is more important than ever.


Methodological choices are always contextualised by ethical ones

 

Photograph taken of a badge included in the delegate pack of the Inaugural International Creative Research Methods Conference, organised by Helen Kara, in Manchester, September 2023

But before I go any further, it’s important to make a brief statement about ethics.


In this talk, I’m focusing on methodological implications, but it’s important to remember that methodological choices are always contextualised by ethical ones.


This badge was in the delegate pack of the inaugural International Creative Research Methods Conference that took place in September 2023, and of course it is true that ethics are everywhere.



Having spoken to thousands of researchers about Generative AI and qualitative analysis in the past few years, I’ve observed that most are concerned with the ethical issues surrounding the use of Generative AI from a research integrity point of view, often prioritising data privacy and security, issues of bias and so on.


These of course are incredibly important, and the developers of CAQDAS packages take these issues seriously.


But the broader ethical context of Large Language Models, how they are developed and by whom, including issues of data provenance and Intellectual Property, and the socio-environmental and geo-political consequences of their use, is also critically important.


These issues, along with the methodological ones I’m focusing on today, frame how researchers, developers, research organisations and governments have responded, and are responding, to the rise of Generative AI, whether explicitly or implicitly.



Indeed, many qualitative researchers decide not to use Generative AI at all as a result of these broader socio-political and environmental issues.

 

My own thinking about, and use of, Generative AI for qualitative analysis and other purposes is certainly framed by them. I have many concerns about the ethics, and those concerns influence the ways that I will, and will not, use Generative AI.


In my role as educator and awareness-raiser about computer-assisted qualitative analysis, I have chosen to learn how Generative AI tools work and what they can and cannot do, in order to be able to contribute to methodological and pedagogical debates.


I see this as my responsibility as a teacher of qualitative methods, but I never talk about these tools without raising the ethical backdrop, and as a result there are certain uses to which I will not put Generative AI, which I have explained elsewhere.


 

But this talk focuses on some of the methodological implications.


Generative AI is not something we asked for, either as qualitative researchers or as humans going about our everyday lives.

But its pervasiveness means that we do have to grapple with what it means, and in the context of qualitative analysis, whether, how and when to use it. One reason this causes methodological uncertainty is because Generative AI was not developed for our purposes.

Yet, as we know, its capabilities can be, and are being, harnessed for qualitative research, packaged into tools that we can use for those purposes. There are many advocates: early adopters of Generative AI who see its value and promote its use, some of whom are developing their own tools that harness these capabilities.


And as I’ve already mentioned, there are the ‘conscientious objectors’: those who will not use Generative AI on principle (Jowsey et al, 2025). In addition, there are the moderately sceptical: those who are worried about what the potential use of Generative AI means for the qualitative research professions and who are dubious about its capabilities and implications.





Many more researchers are unsure about what all of this means, and are grappling with navigating the options, the fast-paced developments, and the myriad implications for their work.

 






How we think about Generative AI influences how we engage with and use it, and this is where theories about the role of technology come in.

 

 

  • Technological determinism relates to the idea that technological development is inevitable, and that humans therefore must adapt to it. This way of thinking positions humans as somewhat or wholly passive in the trajectory of development, responding to and adopting new technologies because they exist.

  • Technological instrumentalism, in contrast, emphasizes that humans are in control of technology, that we mould it to our needs, and thus technology is viewed as more neutral or passive, with humans as the driving force.

  • Technological reflexivity, which Trena Paulus and Jessica Lester emphasize in their extensive writings in this space, stresses that the choice and use of tools always has consequences – and that therefore researchers must consider the implications of these choices and uses in planning for and using tools.

 

These theories can help us understand the different responses to Generative AI and its role in the qualitative research and analysis process. For example, the rush to adopt it, and particularly assertions that established ways of working, analytic techniques, and methods are now redundant simply because there are different ways of working, reflects a more deterministic position: the idea that Generative AI exists, therefore we must use it. This is the response that seeks new ways of doing qualitative analysis simply because there is new technology – it’s here so we should use it.


The plethora of new tools that has come onto the scene in the last few years is an example of this way of thinking, as are new proposed quasi-methods that harness those technologies in a methodological vacuum. Sometimes these tools and positions are rationalised by claims that our established analytic techniques and methods are inadequate in some sense.


It also helps explain the focus on thematic analysis and the generation of themes in many discussions about the potential of Generative AI for qualitative analysis. The fact that Large Language Models can look across large volumes of text and identify common patterns lends them to being used for such purposes.


Those of you who have been in this field as long as I have will remember concerns expressed in the early 1990s that the rise of qualitative software was homogenising methods. I never subscribed to that view then, but now we are seeing some homogenisation in discussions about themes and thematic analysis in the context of Generative AI.


However, I question whether it is actually themes that Generative AI is capable of finding in qualitative data. Researchers across many methodological contexts have specific – and varied – conceptualisations of what themes are, how they are constructed, and what their role is in analytic method. The capability of Generative AI to identify commonality across large volumes of data does not make this a form of theme development. Semantics is important here.


And we must remember also that thematic analysis is only one of many analytic methods in the qualitative space. Methodological discussion has been somewhat hijacked by the apparent capabilities in this regard, but we should question what is actually being generated here, and how it relates to method.

 

On one level we do of course have to respond to technological developments, because they are here, whether we like it or not.

It is how we respond that reveals perspectives, interests, values and priorities. I do not believe that we should be using Generative AI just because it is here. Nor that we should be scrabbling around finding a use for it, just because it is here. Part of the issue, in my view, is that in the rush to harness new tools, methods are being forgotten, lost, sidelined, flattened, homogenised, or the tools are being promoted as methods in themselves.

 

This is why thinking about methods and tools in a way that emphasizes the agency and responsibility of humans in relation to those tools comes into play.

Many researchers – and developers – have responded by experimenting with Generative AI capabilities to see if they can, in fact, do aspects of our work better than we can, or contribute to our analytic practice. And where they think it can, they incorporate it into their workflow, but maintain control and responsibility for other aspects of the process that they deem Generative AI cannot do – either at all or well enough.


Here the emphasis is on considering how the use of Generative AI may enable us to enact our methods more efficiently or to a higher quality. This more instrumentalist approach is about exploring possibilities and integrating them where they are deemed to be methodologically appropriate, rather than adopting them just because they exist. Those who have taken this approach do not necessarily see technology as passive or neutral, but they do believe that humans have the capacity and responsibility to control how technology is used, even if as individuals we do not have the capacity to control its development more broadly.


The Five-Level QDA way of thinking that I mentioned earlier, and will discuss a bit more shortly, emphasizes harnessing tools for methodological needs. In the language of this way of thinking, our analytic methods drive the way we use tools, rather than tools driving, or being the architect of, methods.

 

 

Yet Trena Paulus and Jessica Lester, in describing technological reflexivity, remind us that whilst “at times methods should and do drive the use of technologies, at other times, available digital tools and spaces may actually change the methods we use” (2023, p.2).


It is clear that Generative AI tools are changing the way some qualitative researchers are thinking about analysis, and how many are enacting it.

 

Theories of technological determinism, instrumentalism and reflexivity are not entirely mutually exclusive. Their edges are blurred, and they each have value in contributing to our understanding of what’s going on in the qualitative research field right now. 


For me, enacting technological reflexivity is what we do when we consider whether, when and how to use Generative AI, and instrumentalist perspectives underline our responsibilities to harness tools appropriately.


And this is why the Five-level QDA way of thinking matters now more than ever.

It encapsulates the belief that humans harness technology to enact methods – strategies drive tactics. This does not mean that new tools cannot and should not inform methods, but that the emphasis of the directionality is from strategies to tactics, not vice versa.



To let new tools become methods is to veer too much into deterministic ways of thinking.


One of the things that concerns me most is that in the rush to harness new tools, methods are being forgotten, or tools are being discussed and used as if they are methods.

As you can see in the diagram below, the directionality in the Five-Level QDA way of thinking goes from strategies, comprising research objectives and an analysis plan, to the choice and use of software tools. Iteration happens along the way, such that traversing the loop is not always linear, and the availability of tools, including new tools such as Generative AI, can inform methods. So there is iteration at the point where tools are chosen and harnessed. But to let tools drive methods is not something that I have ever observed ending well.



Adapted from Woolf & Silver's Five-Level QDA method

 

In my own practice, this way of thinking reinforces my ability and need to think critically about the choices I make about tools, to enact technological reflexivity. It is what ensures that how I use tools is always grounded in methods.


After all, the decades’ worth of work qualitative methodologists have spent developing, testing, debating and refining methods doesn’t just disappear in a poof of magic dust because there are new capabilities developed by big tech companies.

I do believe that new technological capabilities can – and do – provide new possibilities, but the existence of new tools is not in itself a reason to use them. I use tools in the service of methods, rather than as the architect of them. Just because something is possible doesn’t mean it’s the appropriate thing to do.


So I ask you to consider: what’s the point of analytic methods in your practice? What’s the point of tools for you? Your answer will guide what is appropriate within your qualitative workflow.


I believe methods are developed to enable us to accomplish our analytic objectives, to provide rigorous and documentable scaffolds to answer our research questions. And this is why technological reflexivity is an essential concept.



Image Source: Jamillah Knowles & Digit / https://betterimagesofai.org / https://creativecommons.org/licenses/by/4.0/
But it doesn’t have to be either/or, it can be with/and





This speaks to what has become a common contrast about the role of Generative AI: whether we seek for it to replace our work, meaning we outsource something we previously did ourselves to Generative AI, or whether we use Generative AI to assist us, whilst we remain always and totally in control of when and how to use it and what to do with the responses to our prompts that it generates. The latter position reflects the more instrumentalist view.


We do not have to make a choice between doing the whole of an analysis using Generative AI or not using it at all. Of course, we can make such a choice, but we can also use Generative AI at particular moments and for certain tasks within one project but not for others, and we can make different choices in the next project. Each project has its own idiosyncratic characteristics and needs, so what is appropriate for this project might not be appropriate for the next, just as what is appropriate for one analytic method is not for another.


In this sense, there is no one-size-fits-all approach to what appropriate use of Generative AI for qualitative analysis looks like.

 

So how do we decide what is and is not appropriate for a given project? In other words, how do we enact techno-methodological reflexivity when considering Generative AI?

 

Over the past few years, I’ve developed a framework that I use when discussing these topics, which involves asking ‘why’, ‘when’, ‘how’, ‘what’ and ‘does’ questions, not just once when planning an AI-assisted analysis, but continually throughout a project as each analytic task is designed and enacted.


The question I began with today was whether we are losing our qualitative minds through the use of Generative AI. If we reflect continually on the questions on this slide, then I believe that we do not need to. Because through doing so we can make informed choices about whether and how we use tools.




We need to consider their role, their capabilities and decide on a case-by-case basis whether and how to use them. We can in this way, if we want to, take control of the tools that are available to us. This includes choosing not to use them as well as choosing to use them for certain tasks, but not others, or for some types of data but not for others, or for certain projects, but not for others.

 

Can Generative AI do critical thinking, interpretation and reflexivity? Essentially, I think not. But that doesn’t mean we cannot use Generative AI tools to inform and contribute to our enactment of those practices.

 

 

How, then, has MAXQDA responded? It is not for me to outline the motivations or intentions of VERBI in their software development – they can, and have, done that themselves. What I can do is reflect on how Generative AI tools are integrated into MAXQDA's existing suite of analysis features and comment on how their use can contribute to analytic practices including critical thinking, interpretation and reflexivity.


This, I believe, has been done in methodologically sensitive ways that enable researchers to stay in control of the analytic process, and thus we can use MAXQDA's Generative AI tools to contribute to, rather than suppress, our critical thinking, reflexivity and interpretation.

There are several ways in which this is evident; today I just have time to highlight two.

 

The first is one of MAXQDA’s AI-assisted coding features, whereby researchers direct the coding process by initially creating and defining a code, and instructing the AI to find and code segments within one or more datafiles that match the definition. In this feature, the code definition acts as the prompt for the AI.


Annotated screenshot of MAXQDA's human-driven GenAI coding

Researchers have control in this example because it is we who decide the code name and definition and instruct the AI to do the coding.


More than that, when it comes up with the coding, our role in the process is again emphasised in a number of ways. Firstly, the AI coding is initially separated in the code system, not immediately integrated into existing coding we may have done around the concept. Secondly, an explanation of why each coded segment was deemed by the AI to match the code definition is clearly visible. Thirdly, the code is given a specific colour, a blue-to-purple graded colour which is specific to AI coding done in this way.


As such, the way this functionality has been implemented encourages me to review and think about the coding that has resulted, rather than to just accept it without thinking. Of course, I can just accept it without looking or thinking, but that is my choice and my responsibility. The architectural design and implementation encourage me to think. I can reject codings if I do not agree with them, and I can adjust and add my own if I believe the AI has missed something of importance. I can also refine the code definition if I see it has misunderstood what I mean.


The process can be iteratively enacted, reflecting the needs of my analytic method.

 

Another example is the AI chat functionality when used in relation to already coded data segments, rather than across one or more whole datafiles. This image is shown in black and white because it’s a figure from one of my forthcoming publications (Silver and Lewins, in press).


Conversing with already coded segments using natural language prompts and responses is an example of integrating Generative AI capabilities with existing analytic techniques, rather than replacing techniques with new tools.


This gives me the flexibility to engage in theme-based conversing across subsets of a dataset, as well as case-based conversing within particular datafiles, where a datafile or collection of datafiles represents a unit of analysis.

 

More broadly, in terms of their potential role within the analysis cycle, much like the other non-AI tools within the program, there is not a one-to-one match between a Generative AI tool and the phase of analysis or analytic task for which it can be useful.


Rather, each tool may be harnessed for a variety of different tasks, depending on the needs of the analysis. Therefore, in some projects I can choose to use a particular Generative AI feature, let’s say AI chatting, early on, for example to help me familiarise myself with the data, or later on to help consider the coherence of a category, or the essence of a theme. It’s all driven by analytic method.


There are many other examples that I don’t have time to discuss right now, but the point is that the way these tools have been implemented is a great example of the methodological emphasis of human researchers maintaining control, oversight and responsibility for the process, rather than outsourcing the thinking to the tool.

 

Image Source: Yutong Liu & The Bigger Picture / https://betterimagesofai.org / https://creativecommons.org/licenses/by/4.0/

I have no doubt MAXQDA’s developers have plans for enhancing the Generative AI features – indeed we saw a few of them earlier in Julia Gerson’s sneak preview of forthcoming features. And who knows what broader developments in the wider AI space will prompt.


But I am confident, having seen how they have harnessed these capabilities thus far, that they will continue to exercise methodological sensitivity in future developments. If you choose to use MAXQDA’s Generative AI features, I therefore feel you are in safe hands. I don’t believe these tools are designed to replace human interpretation, critical thinking or reflexivity; rather, they are there so you can ask for options that you then consider, and in doing so, you may enhance your interpretations, your critical thinking and your reflexivity.


Either way it will be you that is doing this.


In this sense these tools are only tools. Their development and use have consequences, yes, and therefore they are not ‘just’ tools, but they are tools nevertheless, and we are the ones that choose to operate them.

 

So… we do not have to lose our qualitative minds by using Generative AI tools, but it remains always our responsibility to be accountable for every stage of the analytic process, in order that we don’t.

 

Thanks for listening to me.  


If you're interested in more of my work, check out the other posts on this blog, my YouTube Playlist, forthcoming events, and the CAQDAS Networking Project QualAI pages.


 

References


Trena M. Paulus & Jessica Nina Lester (2023) Digital qualitative research workflows: a reflexivity framework for technological consequences. International Journal of Social Research Methodology. DOI: 10.1080/13645579.2023.2237359

Christina Silver (in press) The Five-Level QDA Method in the Gen-AI Era: Rethinking Qualitative Pedagogy and Practice. In Friese, S. & Morgan, D. (eds.) Qualitative Data Analysis with Artificial Intelligence: Theory, Methods and Practice. Sage Publications

Christina Silver & Ann Lewins (in press) Using Software in Qualitative Analysis: A Step-by-Step Guide (3rd edition). Sage Publications

Nicholas Woolf & Christina Silver (2018) Qualitative Analysis Using MAXQDA: The Five-Level QDA Method. Routledge. https://www.routledge.com/Qualitative-Analysis-Using-MAXQDA-The-Five-Level-QDA-Method/Woolf-Silver/p/book/9781138286191

 
 
 
