Bazeley and Jackson (2013) Qualitative Data Analysis with NVivo.
Citation: Bazeley, Pat and Jackson, Kristi (2013) Qualitative Data Analysis with NVivo, Second Edition, London: SAGE Publications Ltd.
Time Period Covered:
Theory, Research Question, Hypothesis:
Relationship to Other Research/Ideas Contested/Noted Gaps:
Concepts and Definitions:
Method:
Primary/Original Data:
Argument/Conclusion:
Limitations/Flaws:
Abstract:
Notes:
Bazeley and Jackson (2013:2): NVivo provides a means of managing data and the ideas that emerge from it while maintaining a link to the original source material.
Bazeley and Jackson (2013:3): “There is a widely held perception that use of a computer helps to ensure rigour in the analysis process. In so far as computer software will find and include in a query procedure, for example, every recorded use of a term or every coded instance of a concept, it ensures a more complete set of data for interpretation than might occur when working manually. There are procedures that can be used, too, to check for completeness, and the use of computers makes it possible to test for negative cases (where concepts are not related). Perhaps using a computer simply ensures that the user is working more methodically, more thoroughly, more attentively.” At the same time, it is still incumbent on the researcher to be rigorous in their interpretations and to generate ideas.
Bazeley and Jackson (2013:4): Qualitative data analysis software was developed with the aim of facilitating data management and promoting rigour.
Bazeley and Jackson (2013:6): “Users of NVivo’s tools can face opposition from those who express doubts about using software for analysis of qualitative data, or who simply have an aversion to technological solutions.” [Though it could be argued this is quite literally a dying perspective, since early career researchers are less likely to express an aversion to utilising technology and typically have less access to the resources that would allow them to outsource data processing and analysis. Also, some of the discussion of the merits of using technology in textbooks and articles from the 1990s and early 2000s cannot but strike younger researchers as somewhat quaint! (Need to avoid ageism: acknowledge that older researchers can be just as, if not more, technologically adept than younger ones; the problem is one of familiarity, not age.) Ironically, there is something of a failure of self-reflection in some of these criticisms, with those levelling them failing to recognize that the problems they allude to stem more from their own perceptions of and inadequate familiarity with the technology than from anything inherent to the technology itself.]
Bazeley and Jackson (2013:7): “Concerns about the impact of computerization on qualitative analysis have most commonly focused around four issues:
• the concern that computers can distance researchers from their data;
• the dominance of code-and-retrieve methods to the exclusion of other analytic activities;
• the fear that use of a computer will mechanize analysis, making it more akin to quantitative or ‘positivist’ approaches; and
• the misperception that computers support only grounded theory methodology, or worse, create their own approach to analysis.”
[The first objection carries echoes of a researcher being in a position to outsource the collection and coding of data, since a researcher who does both will invariably become intimately familiar with their data. The second and third objections reflect more of a concern that researchers will cease to be faithful to their ontological and epistemological positions; they are therefore problems with research design and implementation, rather than with computers per se. The third and fourth objections reflect a failure to understand the multiple ways in which computers can be used, and something of a fear of technology as being able to act independently of the user.] The authors note that the first objection also raises issues such as poor display, textual segmentation and loss of context. The first is not a problem with modern technology; the latter two are not a problem with NVivo, where the original source material is always retrievable. [Indeed, one could argue that this loss of context is equally problematic in some of the old-fashioned methods of literally cutting and pasting printed documents. The problem can also be mitigated by good data management practices.]
Bazeley and Jackson (2013:10): “The oversimplification of qualitative methods has occurred and continues to occur whether software is involved or not. […] Researchers must integrate their chosen perspective and conceptual framework into their choices regarding what tools they use, what and how they might code, and what questions to ask of the data. This is the role of the researcher whether or not they use software.”
Bazeley and Jackson (2013:30): They argue it is typical for researchers to keep a journal, which creates “an audit trail” for the project. [The claim that, without a journal, “it may be difficult to pull together the evidence you need to support your conclusions” is exaggerated: it depends how NVivo fits into your broader research and note-taking practices, i.e. whether you are working exclusively in NVivo or using it as one of a range of tools.]
[My coding schema changed slightly to reflect the software. For example, while the following coding schema would make logical sense:
— Local authorities
  — Criticism
  — Praise
— Russian authorities
  — Criticism
  — Praise
it is regarded as poor coding practice in NVivo to duplicate codes. Therefore the coding would be better rendered as:
— Local authorities
— Russian authorities
— Criticism
— Praise
Thus, in my coding schema, actors are rendered as a separate category to audience, because the same actors may also be designated as targets or allies. It is the combination of the actor and the audience that reveals the motivational framing.]
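[The logic of the restructured schema can be sketched in ordinary code, independent of NVivo. This is a minimal, hypothetical illustration (the segment data and function names are my own, not NVivo's API): codes live in flat categories with no duplication, each coded segment carries one code per category, and the actor–evaluation combination is recovered afterwards by cross-tabulation, much as an NVivo matrix coding query intersects codes.

```python
from itertools import product

# Restructured schema: flat categories, no duplicated child codes.
ACTORS = ["Local authorities", "Russian authorities"]
EVALUATIONS = ["Criticism", "Praise"]

# Hypothetical coded segments: each tagged with one code from each category.
segments = [
    {"text": "segment 1", "actor": "Local authorities", "evaluation": "Criticism"},
    {"text": "segment 2", "actor": "Russian authorities", "evaluation": "Praise"},
]

def matrix_counts(segments):
    """Cross-tabulate actor x evaluation codes, mimicking a matrix-style query:
    the combination is computed at query time rather than hard-coded into a
    duplicated node hierarchy."""
    counts = {pair: 0 for pair in product(ACTORS, EVALUATIONS)}
    for seg in segments:
        counts[(seg["actor"], seg["evaluation"])] += 1
    return counts

counts = matrix_counts(segments)
```

The design point is that because "Criticism" and "Praise" exist only once, adding a third actor requires no new evaluation codes, whereas the duplicated hierarchy would need every child code copied under each new parent.]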