Reproducibility-enhancing modules
On the Report page you will find several modules that increase researcher reflexivity and coding transparency. These modules only work in the web version of Requal, because only the web version allows coding by multiple coders and comparison between them.
The first tab, 'Summary', provides an overview of the number of segments tagged with each code in each document. If we want to know how often and where a particular coder used the codes, we select only that coder in the 'Select users' menu. Similarly, we can limit the selection to certain codes or documents if there are many of them and the overall table is too large.
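To make the logic of the Summary table concrete, here is a minimal sketch with made-up data. The data frame columns, coder names, and values are assumptions for illustration only, not Requal's internal data model:

```python
import pandas as pd

# Hypothetical coded segments: one row per tagged segment (illustrative data only).
segments = pd.DataFrame({
    "document": ["interview_1", "interview_1", "interview_2", "interview_2"],
    "code":     ["failure",     "family",      "failure",     "failure"],
    "coder":    ["Alice",       "Alice",       "Bob",         "Alice"],
})

# Restrict to one coder, as the 'Select users' menu would do.
selected = segments[segments["coder"] == "Alice"]

# Number of segments per code and document, as shown in the Summary tab.
summary = pd.crosstab(selected["code"], selected["document"])
print(summary)
```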
The 'Agreement' tab allows you to view the agreement between codes, coders and their attributes. Expanding the 'Select metrics' menu reveals two sets of controls. The first shows the agreement calculated based on the number of overlapping letters (characters), while the second shows the agreement based on the overlap of the marked segments, regardless of the number of characters in the segments.

The agreement is based on the Jaccard similarity coefficient: the number of identically coded characters (or segments) divided by the total number of coded characters (or segments), with the overlap counted only once.
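Written out, with A and B denoting the sets of characters (or segments) coded by two coders, this is the standard form of the Jaccard coefficient; it matches the worked calculations in the examples below, although Requal's implementation details are not shown here:

$$
J(A, B) = \frac{|A \cap B|}{|A \cup B|} = \frac{|A \cap B|}{|A| + |B| - |A \cap B|}
$$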
We can illustrate the difference between agreement in characters and agreement in segments with two examples.

Example 1:
- Coder A marks: "But it's just me, the problem with me is that I don't drink beer, I don't like it" (55 characters).
- Coder B marks: "But just me, the problem with me is that I don't drink beer, I don't like it. I'm not an alcoholic." (71 characters)

The agreement in characters is 55/(55+71-55) = 0.77.
The agreement in segments is 1/(1+1-1) = 1.00.

Example 2:
- Coder A marks: "But it's just me, the problem with me is that I don't drink beer, I don't like it" (55 characters).
- Coder B marks: "But it's just me, the problem with me is that I don't drink beer, (...) I'm not an alcoholic." (43 + 16 = 59 characters)

The agreement in characters is 43/(55+59-43) = 0.61.
The agreement in segments is 1/(1+2-1) = 0.50.
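The same arithmetic can be checked with a short script. This is only a sketch of the two formulas applied to the numbers from the examples above, not Requal's own code:

```python
def jaccard(intersection, size_a, size_b):
    """Jaccard similarity: shared units divided by the union of coded units."""
    return intersection / (size_a + size_b - intersection)

# Example 1: fully overlapping segments, partly overlapping characters.
print(round(jaccard(55, 55, 71), 2))  # characters -> 0.77
print(round(jaccard(1, 1, 1), 2))     # segments   -> 1.0

# Example 2: coder B splits the passage into two segments.
print(round(jaccard(43, 55, 59), 2))  # characters -> 0.61
print(round(jaccard(1, 1, 2), 2))     # segments   -> 0.5
```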
From these examples we can see that each criterion is sensitive to different qualities of coding. While the number of common characters is a more mechanical measure of agreement, the agreement in segments better reflects the meaning component of coding the data.
The character-based set contains the following indicators:
- A table of the total coding overlap in characters across all documents for the selected coders. It provides an overview of the coders' overall overlap in the project.
- A table of the coding agreement in characters across all documents for each code and for the selected coders. It shows whether the overlap varies by code and allows problematic codes to be identified.
- A heatmap comparing pairs of coders based on their character agreement across all documents and all codes. The lighter the color, the higher the agreement. It identifies the pairs of coders that are most and least 'aligned' overall (a sketch of this pairwise computation follows the list).
- A heatmap for each code separately, comparing pairs of coders based on their agreement in characters across all documents. The lighter the color, the higher the agreement. It shows whether the pairing of coders is similar across all codes, or whether and how it differs between codes.
- A heatmap for the selected coder attribute, showing how groups of coders agree in characters on each code across all documents. The lighter the color, the higher the agreement. This table can reveal codes that are sensitive to a coder attribute. Looking at the 'failure' code in the diagrams below, for example, there is more agreement between men and women (0.32) than among men (0 – no overlap). Moreover, women tend to agree more on this topic (0.6), suggesting that gender is an important factor when coding 'failure'.
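As a rough illustration of how a pairwise agreement matrix like the heatmaps above could be assembled, the sketch below computes character-level Jaccard agreement for every pair of coders. The coder names and character spans are invented for the example; this is not Requal's implementation:

```python
from itertools import combinations

# Hypothetical character selections per coder: sets of character positions
# coded with a given code in one document (illustrative data only).
selections = {
    "coder_A": set(range(0, 55)),
    "coder_B": set(range(10, 70)),
    "coder_C": set(range(40, 90)),
}

def char_jaccard(a, b):
    """Character-level agreement: overlap divided by union of coded positions."""
    union = len(a | b)
    return len(a & b) / union if union else 0.0

# Pairwise agreement values; in Requal these fill a heatmap where
# lighter cells mean higher agreement.
for c1, c2 in combinations(selections, 2):
    score = char_jaccard(selections[c1], selections[c2])
    print(f"{c1} vs {c2}: {score:.2f}")
```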
The same set of indicators exists for agreement in segments. Their interpretation is the same, the only difference being that agreement is counted not in identically coded characters but in overlapping segments.
If we want to examine the coding agreement in more detail, we use the 'Text overlap' tab. In the menu, we select which document and which code to display and, optionally, specific coders, for example if we are interested in the agreement between two particular coders.
Clicking the 'Browse' button shows the coders' overlaps for the given code and document. The lighter the color, the higher the agreement between the coders.
When we hover the mouse over a highlighted segment, the names of the coders who coded it are displayed. This provides a good basis for a team discussion about coding, for example about how long the coded segments should be, or how to distinguish separate instances of a phenomenon when coding.

Tip: This module can also be used to let users with permission to view the report see how individual coders coded, e.g. in a class project.
The development of the application has been supported by the Technology Agency of the Czech Republic, project no. TL05000054.