Integrating geographical data

As part of the Qualitative Innovations in CAQDAS (QUIC) project we are looking at ways in which CAQDAS (Computer Assisted Qualitative Data Analysis Software) packages enable geographical data to be integrated with other forms of qualitative data.

This page provides support materials arising from our work concerning the use of selected CAQDAS packages to analyse audio and transcript data generated from mobile interviews in relation to corresponding location data collected through GPS technology. Additional material, such as photographs and sketch maps, may also be linked to and used in the analysis.


About the project

A preliminary review of the procedures and screen displays involved in geo-linking qualitative data in CAQDAS programs to images of places in Google Earth suggested that this technology might yield fruitful results when the topic of the research is strongly connected to ideas about "place". A pilot study was therefore designed to examine the thoughts of a sample of residents drawn from a selected area about their perceived boundaries to that area and their opinions of it as a place to live. It was hoped that these topics would prove useful both for exploring a linear phenomenon, such as a border, and for studying a point phenomenon, such as a specific location, using this combination of software tools.

Selection of the study area

The first stage of the study was to select an area in which to undertake this work. In part this was informed by reviews of indices of multiple deprivation at the most detailed local area level available, based on the 2001 Census, with the aim of selecting an area in which some of the respondents might have issues of concern. A wholly prosperous area might not be expected to give rise to much variation in attitudes towards it, so areas with quite mixed categories were chosen. It was also thought to be desirable that the area selected should not have strong physical boundaries (such as major roads, railways or rivers with few crossing points) because these would probably exert strong influences on perceptions of those boundaries.

When a shortlist of three potential study areas had been drawn up, one member of the research project team visited all three by car, conducting a drive-through review, and followed this up with a walk around the area that appeared to be most suitable. This confirmed its suitability, on the basis of an apparent mixture of housing types and ages, an absence of obvious physical boundaries, and the presence of shopping, offices and light industrial premises. That area was therefore selected.

Data collection procedures

Contact was made with the local Police team to inform them of our interest and likely activity over the coming months. They agreed to introduce a researcher to some residents at a local Police Neighbourhood Panel meeting. Several participants at that meeting agreed to take part in the research, and these in turn led to introductions to other possible respondents in a ‘snowball’ sampling method (Sturgis, 2008).

One main form of data collection was the mobile interview. The researcher met a respondent at an agreed location, often their home, and the two of them then went on a walk around the neighbourhood with the respondent choosing the route. The conversation was recorded on a single digital audio recorder (Olympus DS-40) by attaching a lapel microphone to the respondent’s coat, using a broad pick-up setting, with the recorder carried in their pocket. This worked well most of the time, although the researcher had to remember to stay close to the respondent whenever he wanted to speak or ask a question; small losses of data where the researcher’s voice was difficult to hear were regarded as insignificant. The researcher carried a hand-held GPS (Garmin E-Trex) which recorded the interview track, and on which the researcher marked "waypoints" from time to time when the interview seemed to discuss significant locations. These waypoints subsequently became key location data in the analysis. The researcher also carried a digital camera to photograph significant scenes; however, this was found to be of less use as taking photographs disrupted the flow of the interview.

A mobile interview was conducted with one of the neighbourhood PCSOs as the first interview, and this helped the researcher to get familiar with the street layout in the chosen area.

A second form of data collection was done by way of "environmental audits" or scans of the area. Two such audits were conducted, after most of the mobile interviews with residents had taken place. On each occasion two researchers went together, walking around the study area, using a pre-planned route, and systematically attempting to observe signs of disorder or incivility. These audit walks were recorded, like the mobile interviews, on the same audio recorder to collect the researchers’ conversation, and the route was logged by GPS. The data about incivilities was logged on a handheld PDA (Sony Ericsson Xperia X1) using "SurveyToGo" and "Garmin Mobile XT" software, creating a dataset categorising the items observed, the time and the precise location.

A further data element was created when the respondents agreed to draw a sketch map of their neighbourhood before starting the mobile interview. It was intended that this would be compared with the narrative from the interview regarding the boundaries aspect of the project. This also gave the respondents an opportunity to plan their route for the walk.

Outline of exploratory data analysis methods

The audio recordings were transcribed using "F4" transcription software. This program was used because it is recommended by, and integrates well with, the ATLAS.ti and MAXqda CAQDAS programs. After transcription, both the audio file and transcript file were assigned to the CAQDAS program, enabling synchronisation within the CAQDAS system. Thematic coding was carried out with a focus on the main themes of the project, boundaries and place, with further interest in fear of crime in relation to those themes.

For simplicity, the waypoint co-ordinates were downloaded from the GPS receiver and copied into Microsoft Excel spreadsheets. From there they could be pasted, one at a time, into Google Earth in order to create unique geo-links with the transcript data. Each of the three CAQDAS programs used (ATLAS.ti, MAXqda and NVivo) has a different geo-linking system and these were tested separately. (It is also possible to download the track data as a set of points and move that into Google Earth as a unit.)
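Moving a set of points into Google Earth as a unit can also be scripted. The minimal Python sketch below converts a CSV of waypoint co-ordinates, such as one saved from the spreadsheets described above, into a KML file that Google Earth can open directly. The file paths and the column headings ("name", "lat", "lon") are illustrative assumptions, not part of the study materials.

```python
import csv

def waypoints_to_kml(csv_path, kml_path):
    """Convert a CSV of waypoints into a simple KML file for Google Earth.

    Assumes columns named "name", "lat" and "lon"; a real export from
    GPS software may use different headings.
    """
    with open(csv_path, newline="") as f:
        rows = list(csv.DictReader(f))

    placemarks = []
    for row in rows:
        placemarks.append(
            "  <Placemark>\n"
            f"    <name>{row['name']}</name>\n"
            # Note: KML stores co-ordinates as "longitude,latitude",
            # the reverse of the usual spoken "lat, lon" order.
            f"    <Point><coordinates>{row['lon']},{row['lat']}</coordinates></Point>\n"
            "  </Placemark>"
        )

    kml = (
        '<?xml version="1.0" encoding="UTF-8"?>\n'
        '<kml xmlns="http://www.opengis.net/kml/2.2">\n<Document>\n'
        + "\n".join(placemarks)
        + "\n</Document>\n</kml>\n"
    )
    with open(kml_path, "w") as f:
        f.write(kml)
```

Opening the resulting file in Google Earth loads the whole set of waypoints into "temporary places" in one step, from where they can be saved to "my places" if required.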

The data collected on the audit walks was downloaded from the "SurveyToGo" website, where it had been stored, and put into IBM-SPSS for potential quantitative analysis. The latitude and longitude co-ordinates were calculated using the time stamp and track log data, and these could also be moved into the CAQDAS programs in similar ways to the waypoints from the mobile interviews. Photographs taken during the interviews and audit walks were assigned to or imported into the CAQDAS programs, and also linked to the IBM-SPSS dataset.

After these data preparation procedures had been completed, the objective was to explore how the technology might enhance a qualitative analysis around the themes of place. It was observed that, in CAQDAS, the geo-links currently effectively work in one direction only – they make it easy to switch from reading transcript text about a place to viewing Google Earth images of that place (both satellite views and street level views). However, at present, rather more work is required by the analyst who wants to start with the Google Earth images and switch from there to reading what respondents have said about a particular place. More details about these observations may be found in the guidance instructions for each CAQDAS program (ATLAS.ti, MAXqda, NVivo).

It was also observed that, when drawing insights from the juxtaposition of these sources, it is easier to listen to the audio recordings than to read the equivalent passages in the transcripts whilst looking at the Google Earth images, because it is hard to look at two different things simultaneously. So the facilities to synchronise audio files with transcript text in CAQDAS programs may be just as significant as the facilities to create geo-links.

Mobile interviewing

As explained in the outline description of the exemplar project, an important part of the data collection process was a series of "mobile interviews" conducted in the area of the study.

These notes are a series of personal observations about how this particular research technique worked in the real world. They are included here for the benefit of other researchers who may be considering trying something similar in their work.

Technical aspects – equipment

The audio recording of the interviews was done using an Olympus DS-40 digital voice recorder. The stereo microphone was detached and a mono lapel microphone was connected in its place. The microphone was clipped to the lapel of the respondent and, once the recording function had been initiated and locked, the recorder was placed in the respondent’s pocket where it was unobtrusive. The recording mode was set to a mid-range quality ("HQ mode"), the microphone sensitivity was adjusted to its middle setting ("conf"), and the low cut filter was set to "on". The wind protection cover for the microphone was lost during the second interview but this did not seem to make any significant difference to sound quality in later use. These settings proved to be quite adequate for recording the speech of both the respondent and the researcher out in the street, provided both stayed within a few feet of each other. The only situation that caused trouble for transcribing the speech was when particularly noisy traffic passed close by with house walls reflecting the sound. If a higher quality recording mode is used the audio files will be very large indeed (our settings generated files of about 12MB per hour of speech).

The only problem that we did encounter with the audio recording was interference from other equipment. In the interview with a PCSO the police radio was also attached to the respondent’s clothing and so the audio recording unfortunately picked up all of the messages broadcast to that radio during the interview walk. With hindsight we realised that it might have been better for the researcher to carry the recorder and microphone for that interview. On our first audit walk we also discovered that the PDA/smart-phone was continuously communicating with a server on the internet and its radio signals, which were almost inaudible to us at the time, were recorded as an unwanted electronic buzz in the digital audio.

The GPS tracks were logged on a Garmin eTrex Vista C receiver. This was carried by the researcher. The GPS has to be kept in the open so that it can remain in contact with the satellites from which it triangulates its position, so users might be concerned that this could draw attention to the participants in an urban neighbourhood. However, it was found that holding it in the hand made it seem very like a mobile phone, and using it to mark waypoints probably looked very much like texting to passers-by. In the urban environment this instrument did occasionally lose the satellite signal when close to tall buildings or trees (interestingly, the ‘beep’ signal it made to report loss of signal was sometimes audible on the sound recording although not noticed by the researcher at the time).

The most obtrusive equipment used in these interviews turned out to be the digital camera. Naturally this was carried out of sight most of the time as, away from ‘tourist’ areas, cameras are not often seen around our streets. However this resulted in a much more noticeable effort being required to get the camera out to take a photograph, and as this seemed to interrupt the flow of the conversation, it was used much less than had been expected beforehand. On the audit walks, photographs were taken with the PDA/smart-phone’s camera, but the process of uploading these to web storage with the other data reduced the quality of these and made them less useful in the analysis.

Synchronisation of data types

It was important to be able to identify accurate locations from the GPS data for the other types of data collected, so some attention was paid to synchronising these by time. The GPS uses time signals from the satellites which are extremely accurate, but these are not available to the audio recording equipment. We used a simple procedure to create synchronisation points in the data by speaking a comment into the recorded audio at various stages of the interview. These comments could then be used to map other time-specific events onto the GPS timeline and thus trace their accurate locations. In particular, the researcher would say something like "that’s point 85" when a waypoint marker was set on the GPS (reading the point number off the GPS screen), because the GPS logs the exact time for each waypoint created. During the audit walks, when a photograph was taken on the PDA/smart-phone its shutter sound-effect could be heard on the audio recording, and as those photographs were also logged for time and location in the "SurveyToGo" database they provided even more accurate synchronisation points with the other GPS data.

The audio recordings do not have a built-in absolute timestamp, but when played back they do show accurate elapsed time since the start of the recording. To synchronise the audio data with location data it is necessary to identify two or three incidents (such as the waypoint markers or photographs mentioned above) on the recording. From these a calculation can be made to identify the absolute time at which the recording started, and this can then be used to convert the elapsed time at any other point in the recording into absolute time.
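The arithmetic involved is simple enough to sketch in a few lines of Python. The dates and times below are invented for illustration, not taken from the study data: one spoken waypoint comment fixes the absolute start time of the recording, after which any elapsed time on the recording can be converted to absolute (GPS) time.

```python
from datetime import datetime, timedelta

# Illustrative synchronisation point: the researcher said "that's point 85"
# at 00:14:32 elapsed time on the recording, and the GPS logged waypoint 85
# at 14:52:10 UTC (both values are invented examples).
waypoint_utc = datetime(2010, 5, 12, 14, 52, 10)
elapsed_at_waypoint = timedelta(minutes=14, seconds=32)

# The absolute time at which the recording started:
recording_start = waypoint_utc - elapsed_at_waypoint

def absolute_time(elapsed):
    """Convert an elapsed time on the recording to absolute (GPS) time."""
    return recording_start + elapsed

# For example, a remark heard at 00:26:05 elapsed occurred at:
print(absolute_time(timedelta(minutes=26, seconds=5)))
```

With two or three such synchronisation points the start times calculated from each can be compared as a check on consistency before relying on the conversion.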

As the GPS recorded a track for each walk, with accurate location details every few seconds, the location of any event in the data could be interpolated to within a few yards with the absolute time value and the two nearest track log points. In practice, this was rarely necessary as other data associated with the event generally made its approximate location obvious. However, if location data are to be collected without the presence of a researcher, say when track logging equipment is given to respondents going about their normal lives, then care may be needed to create appropriate synchronisation points for other relevant data.
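Where interpolation is needed, it is straightforward linear arithmetic between the two nearest track log points. The sketch below assumes track points logged a few seconds apart, as described above; the co-ordinates and times are invented for illustration.

```python
from datetime import datetime

def interpolate_position(t, p1, p2):
    """Linearly interpolate a (lat, lon) position at absolute time t
    between two track log points, each given as (time, lat, lon).

    Adequate for walking pace over a gap of a few seconds; not intended
    for long gaps, where straight-line movement cannot be assumed.
    """
    t1, lat1, lon1 = p1
    t2, lat2, lon2 = p2
    frac = (t - t1).total_seconds() / (t2 - t1).total_seconds()
    return (lat1 + frac * (lat2 - lat1), lon1 + frac * (lon2 - lon1))

# Two illustrative track points ten seconds apart (not real study data):
p1 = (datetime(2010, 5, 12, 15, 3, 40), 51.2400, -0.5700)
p2 = (datetime(2010, 5, 12, 15, 3, 50), 51.2402, -0.5698)

# The position at an intermediate absolute time:
print(interpolate_position(datetime(2010, 5, 12, 15, 3, 43), p1, p2))
```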

Pitfalls in the real world

On the face of it, the idea of recording the conversation and GPS track of two people walking in the street seems pretty straightforward: what could possibly go wrong? The answer is that usually not a lot will go wrong, but unexpected complications may arise when the meeting is being conducted for research purposes, with its potential ethical problems.

The most frequent difficulty that we encountered arose when our respondent met someone that they knew during the interview and engaged them in conversation. Whenever this happened the audio recorder inevitably picked up the voice of the third party as well, so that effectively we collected data from them without their consent. This would be much less likely to happen in the controlled environment of a static interview (unless that was also done in a public place, or in circumstances where the respondent was liable to be interrupted). It is quite straightforward to ignore the unwanted conversation whilst transcribing, or to delete it from the transcript if it has actually been transcribed, but it is not so easy to remove it from the audio recording. Since we found the digital audio recordings to be a useful data source in the analysis, this represents more of a problem. Whenever such meetings occurred our respondents never excused themselves to cut short the conversation, probably not wanting to have to explain the interview situation, and the researcher never intervened either as this would have appeared very rude. So there seemed to be powerful social forces compelling the chance meetings to be played-out as naturally as possible. All that can be done is to develop a strategy for deleting the unwanted data.

Another ethical problem that is easier to anticipate is that care needs to be taken to start and stop the GPS track log some way away from the respondent’s home. As we were asking people to show us around their neighbourhood, we mostly agreed to meet at the respondent’s house, and it was tempting to set up the GPS before knocking at the door. But doing this could mean that the exact location of that house is included in the data collected. There is not much point in anonymising someone’s name if you are going to identify where they live to within a few feet! So it was necessary to pause the interview shortly after setting off on the walk in order to activate the GPS, and then also to remember to pause again shortly before returning to their home in order to save the track log and turn the GPS off. This may not be as easy to do in practice as might be expected, because at both of these stages the respondents often seemed to be in full flow of talking.

When conducting an interview whilst walking in a public place, it is difficult to use an interview guide effectively. The interviewer is likely to become very self-conscious if they try to read their notes for a question or a prompt under these circumstances, as this is not natural behaviour in the street. This should be less of a problem generally, because the walking interview probably works best when there is the least amount of structure. For our pilot project we wanted the respondents to take as much control as possible, to select the route and to choose themselves how they spoke about the places along it. So we only used a few memorised prompts if the conversation dried up completely. We did observe that it is much easier to tolerate a long silence during an interview whilst walking along than it would be if such a silence occurred during a static interview. The hope was that the conversation would be prompted much more by the places encountered than by predetermined, research-oriented questions.

This approach was largely inspired by two particular articles that described experiments with walking interviews: Hall, Lashua and Coffey (2008) and Brown and Durrheim (2009). For some researchers, mobility itself has become the topic (Fincham, McGuinness and Murray, 2010) whereas in our study mobility was a means of acquiring data about very specific locations in a Cartesian space. We walked, rather than drove, in order to explore a neighbourhood without the constraints of traffic regulations. We walked because that made it easier for the respondent to decide the route. We walked so that both interviewer and respondent could see and hear the same stimuli related to the places visited and incorporate these into the conversation. At times it was challenging but, even in a small pilot study, it was also exciting and enriching.

Linking to Google Earth

This section is a series of observations about how each CAQDAS package facilitates the linking of qualitative data that has a geospatial dimension to images of the location available in Google Earth. The materials use examples from a pilot study which involved conducting interviews with people connected to a selected area as they walked around that area and discussed it, whilst a GPS device recorded the precise track followed and the waypoint locations of some significant places that were talked about. More details about the data collection methods can be found in the exemplar project description and in the mobile interviewing section above.

ATLAS.ti integrates closely with Google Earth for this type of analysis, at times showing the satellite images within the ATLAS.ti analysis frame and also injecting an ATLAS.ti symbol into Google Earth as a place marker. Where the other CAQDAS programs reviewed here provide mechanisms for activating links to the Google Earth program, ATLAS.ti actually stores the place details as quotations in parallel with other key data within a project file. It is sometimes useful to operate Google Earth outside ATLAS.ti too, so that other data in a project can be considered in a spatial context as well.

1. Storing locations in Google Earth

The first stage of this process is carried out in Google Earth, so begin by opening that program. Please note that this is not the same thing as "Maps" in the mainstream Google Search or web page, but is a separate program that can be downloaded from the internet. Observe on the left-hand side of the Google Earth screen that there are three sections of navigation and display controls called "search", "places", and "layers" respectively.

It is likely that your locations will be identified in one of two possible ways; either you will have collected exact point parameters with a GPS instrument, or you intend to select points from within Google Earth by examining images and manipulating the screen display. Both of these methods are described below.

There is also a choice with Google Earth as to whether to store location details within that program, in the "my places" section as well as in your CAQDAS program, or only to store the location details in the CAQDAS program. Reference to this choice is also made below where it becomes applicable.

Taking locations from a GPS into Google Earth

There are several ways to get point locations from a GPS into Google Earth; it would be tedious to describe them all, so we suggest a fairly general purpose method that is not specific to any particular equipment or software. In the interests of accuracy, you should avoid retyping latitude and longitude co-ordinates if possible, and use copy and paste methods when moving data from one system to another. For this operation we suggest that there are good reasons for working with each point separately, mainly because we envisage that the main analysis work in a CAQDAS package will involve examining a single location at any one time.

In our example project we downloaded the GPS data file into its proprietary software on the analysis computer, then copied and pasted the specific waypoint data into a spreadsheet. Typically this data includes details such as date and time, altitude, and fields for descriptive data in addition to the co-ordinates that locate the point in the landscape. From the spreadsheet it was possible to copy the co-ordinates for one waypoint, paste them into the "fly to" field in the Google Earth Search section, and then click on the search button so that the image centred on the specified point.
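Where the GPS software can export waypoints in the standard GPX format, this copying step can also be scripted so that co-ordinates are never retyped. The Python sketch below reads the waypoints from a GPX 1.1 document and prints each one in the "latitude, longitude" form that the Google Earth search box accepts. The miniature GPX sample and its co-ordinates are invented for illustration.

```python
import xml.etree.ElementTree as ET

# GPX 1.1 files place their elements in this XML namespace.
NS = {"gpx": "http://www.topografix.com/GPX/1/1"}

def read_waypoints(gpx_root):
    """Extract (name, lat, lon) for each waypoint in a parsed GPX document."""
    points = []
    for wpt in gpx_root.findall("gpx:wpt", NS):
        name_el = wpt.find("gpx:name", NS)
        name = name_el.text if name_el is not None else ""
        points.append((name, float(wpt.get("lat")), float(wpt.get("lon"))))
    return points

# A miniature GPX document standing in for a real export (values invented):
SAMPLE = """<gpx xmlns="http://www.topografix.com/GPX/1/1" version="1.1">
  <wpt lat="51.2400" lon="-0.5700"><name>085</name></wpt>
  <wpt lat="51.2410" lon="-0.5690"><name>086</name></wpt>
</gpx>"""

for name, lat, lon in read_waypoints(ET.fromstring(SAMPLE)):
    # Google Earth's "fly to" box accepts decimal degrees as "lat, lon"
    print(f"waypoint {name}: {lat}, {lon}")
```

For a real file, `ET.parse("waypoints.gpx").getroot()` would replace `ET.fromstring(SAMPLE)`; each printed line can then be pasted into the "fly to" field exactly as described above.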

It is possible, though not essential, to store locations in the "places" section in Google Earth and to create a folder system to organise such places in appropriate groups. We created a folder for each interview and stored all the locations marked for that interview in there. These locations will be loaded each time you open Google Earth and so can be accessed independently of any other programs. To create new folders in "my places", right-click in that section of the screen and select the "add / folder" menu option. To store the current location in "my places", move the mouse pointer over the lower part of the search section (where your latest search co-ordinates are visible) and right-click; the context menu that appears should include the option to "save to my places". Items can be moved around within the "my places" section using standard drag and drop procedures, and they can be renamed using the right-click context menu.

Creating locations within Google Earth (without using a GPS)

This set of guidance instructions is applicable when you want to start the locating process in Google Earth. For example we are aware of a research project that collects its data by examining images in Google Earth Street View, looking for signs of a particular phenomenon, and marking the points where such signs have been observed.

In Google Earth use the navigation controls to move the image until the point you wish to mark is in the centre of the window. To fine-tune this position, zoom in closer and observe where the circular zoom focus marker appears (this indicates the centre of the display); the closer the image comes to the ground, the smaller the error of location. Then zoom back out again until the view appears satisfactory. Either use the menu option "add / placemark", or the map pin icon on the toolbar (second from the left), to open the "New placemark" dialog.

If you have arranged the display so that it is correctly centred on the desired point, all that is essential in the dialog is a name label. However there are four tabbed parts to the dialog where you can store further details which will affect the way the point displays when you next fly to it. On the "style, color" tab you can alter the way that the symbol and label will be displayed for this place in the future. On the "view" tab you can set the initial height, heading and tilt parameters for this place; these are normally adjusted with the navigation controls as you look at a place from different angles. On the "altitude" tab you can adjust the way the marker symbol links to the exact point, it doesn’t have to remain partly obscuring the detail but can be shown like a balloon tethered to the point by a fine string. Finally, a click on the "OK" button saves the location in "my places". Further editing of the display settings for any stored place can be done by right-clicking on that point in "my places" and selecting the "properties" option from the context menu.

2. Using Google Earth primary documents in ATLAS.ti

In order to use Google Earth and ATLAS.ti together it is necessary to open Google Earth from within ATLAS.ti, so close Google Earth (if it has been opened for the operations described above) and then open ATLAS.ti and the appropriate hermeneutic unit (HU).

The process is started by creating a Google Earth primary document and the option to open a "new Google Earth PD" can be found in the "assign" pull-down menu on the main toolbar, as shown in Figure 1. It can also be found in a similar sub-menu via "documents / assign" from the options in the main screen (also showing in Figure 1) or the primary documents manager.

Figure 1: Opening Google Earth from within ATLAS.ti

Opening Google Earth from within ATLAS.ti

This option will create a new primary document in the HU as a container for a set of location quotations. The new document will be added to the list in the primary documents manager with the name "Google Earth", the next document number in the series there, and a Google Earth icon. It is a good idea to rename this document with a label that will distinguish its contents from other sets of locations; a right-click on its name in the primary documents manager provides an option for this. In our example we have used "Google Earth Aud 1".

Open the Google Earth PD in the same way as any other primary document and the Google Earth program will open inside the main analysis window (see Figure 2 below). (A warning message is displayed, because the process is not instantaneous and also to provide an escape for those occasions when you accidentally click on the wrong document.)

Figure 2: Opening a new Google Earth PD in ATLAS.ti

Opening a new Google Earth PD in ATLAS.ti

Note in Figure 2, above, how the entire Google Earth window, including its menu options, is visible within the ATLAS.ti frame. If you have saved any locations in "my places" they will be available to use from that section. At the foot of the "places" section, beneath the folders with previously saved places, in "temporary places" you will find a new folder structure with your HU name and the name of the document you have just opened; this has been ‘injected’ into the Google Earth program during the process of opening the PD.

The next phase of work takes place within this framework. In summary you ‘fly’ to each point location in turn and create an ATLAS.ti quotation for it, until you have completed all of the point locations for the current primary document. If you have already saved these locations in "my places" in Google Earth you can use those to display each place, or you can use either of the navigation procedures outlined in section 1 above to copy in GPS co-ordinates or manipulate the display yourself to centre a point in the window. In this illustration we pasted in latitude and longitude co-ordinates from a GPS receiver.

At this stage it is a good idea to open the quotation manager window in ATLAS.ti and position it where you can see it beside the main window, so that as new quotations are created they can be renamed appropriately. We suggest that having it to the left of the main window works well, where it can be seen close to the "places" section in the Google Earth panel.

Figure 3, below, shows a portion of the ATLAS.ti / Google Earth window. We have pasted in a set of co-ordinates and clicked on the search button in the Google Earth part of the window to fly to that location. Then, without altering the Google Earth display in any way, immediately click on the "create a new quotation" icon in the vertical toolbar (the quotation marks button, highlighted in Figure 3).

Figure 3: Making a new place quotation in ATLAS.ti

Making a new place quotation in ATLAS.ti

When you click on the quotation button in this way, a new quotation will be added to the quotation manager. It takes the document number of the current Google Earth document and the next sequential number within that document (3:1 in our example, as it is the first quotation in PD number 3) and is initially given the document name as the quotation name. It is advisable to rename it within the quotation manager window to something that identifies the specific location or its particular significance.

Tip

If you use any of the Google Earth pan or zoom controls between flying to a GPS recorded location and creating the ATLAS.ti quotation you will probably introduce inaccuracy. The ATLAS.ti quotation will be the exact centre of the Google Earth display at the moment it is created, and that can easily change when you pan and zoom the image. That is why the notes above advise you to create the quotation immediately after flying to a precisely recorded location.

When you have created and renamed a place quotation you can move on to repeat the process: create another by pasting in the next set of co-ordinates, ‘fly’ to that point, and click the create quotation button. All of the quotations created in one session in this way will be included in the same primary document in your HU.

If you want to create a different PD with another set of points you should close Google Earth, save the ATLAS.ti HU, and then create another new Google Earth PD from the "assign" menu as before. (We have found that trying to create a new PD without closing Google Earth is not successful, because any further quotations created will still be attributed to the earlier PD).

The effect of grouping several point quotations together in a single PD becomes apparent when you open such a PD. Figure 4, below, shows a screenshot after P3 (with 4 quotations) has been reopened. The four points have been added to the temporary places folder in Google Earth and all of them are displayed in the live Google Earth window with the ATLAS.ti symbol as a ‘balloon’ type marker, and the quotation names from the ATLAS.ti HU displayed as well. The quotation data has been injected into Google Earth during the process of opening the PD.

Figure 4: Opening an existing Google Earth PD in ATLAS.ti

Opening an existing Google Earth PD in ATLAS.ti

Image: © 2011 Infoterra Ltd. and Bluesky.

It is safe to use the pan, tilt, rotate, and zoom controls in Google Earth when you are not in the process of creating ATLAS.ti quotations. So after opening the PD in this way, if you use the tilt and zoom controls in particular you will observe that the balloon markers are ‘tethered’ to the ground by fine strings so that the precise locations can be viewed without obstruction by the markers.

Using various combinations of the guidance provided above it is possible to store location details in either ATLAS.ti or Google Earth, or in both programs, and those locations may have been identified with the help of a GPS receiver or manually in Google Earth.

3. Using Google Earth Quotations in ATLAS.ti

Because the location details are stored within ATLAS.ti as quotations, they can be used like any other quotation. For example, one or more thematic codes can be applied to each place quotation, if that is useful in your analysis. However, it is also possible to use hyperlinks to relate each place quotation to one or more passages in the text transcripts, or to photographs taken during the data collection process. When these hyperlinks are activated it is possible to move rapidly around the data reading texts, examining images, and opening Google Earth in order to search for richer insights in the data.

In Figure 5, below, a passage of text in a transcript document has been highlighted and marked as a quotation (identified in the quotation manager as "9:38"). Then, using a specially created relationship, "displays in GE", that quotation has been linked to quotation number "29:2", which is a Google Earth quotation created earlier. This is shown by the blue coloured label in the coding margin and the "<" symbol beside the quotation number in the quotation manager. When the label in the code margin is double-clicked the link is activated: Google Earth opens in a separate window and ‘flies’ to waypoint 82. The images of that location can then be examined in Google Earth whilst the transcript of the conversation is read in ATLAS.ti at the same time.

Figure 5:  Linking transcript text to a Google Earth quotation



Tip

We suggest that you put ATLAS.ti on the left and Google Earth on the right of your screen so that the Google Earth navigation controls (zoom, tilt, pan and rotate) are always visible.

The process necessary to create a new linking relationship as used above is as follows. From the quotation manager, use the option miscellaneous / relation manager (or from the main screen use quotations / miscellaneous / relation manager) to open the dialog box shown in Figure 6, below. When this first opens there is no relationship selected and most of the fields are blank, so a new relationship can be created straight away. If you have clicked on an existing relationship and then want to create a new one, use the option edit / new relation within this dialog.

Figure 6:  Creating the Google Earth hyperlink relationship


In Figure 6, above, we have illustrated the settings we used for this particular link, with the ID "GEDISP". What you type in the field "menu text:" is the label that will appear in the coding margin area when the relationship is used. The comment is optional and may be superfluous for this application. It is necessary to click the "apply" button in order to complete the creation or editing of a relationship; that button only becomes active when a change is made.

Tip

There are no apparent limits to the number of times one quotation can be linked or related to other quotations. So, one passage of transcript may be linked to the Google Earth waypoint, to a photograph stored in another primary document, and to other text quotations in the same or other documents. Similarly, several text quotations can be linked to a single Google Earth waypoint quotation so that any of these can open Google Earth and ‘fly’ to that one location.

One way of using the linked quotations to read your data is by using a network. Open a new network (option networks / new network view or click on the first icon in the main toolbar) and name it, then drag a Google Earth quotation from the quotation manager into that network window. Working in the network view, right-click on the Google Earth quotation and select "import neighbors" from the context menu. This will cause all of the other quotations that have been linked to that particular location to appear in the network. If there are several of these you will need to drag them apart in the network diagram so that they can all be seen clearly. You may have to adjust the network view settings to display more or less of each quotation's text (depending on the number of them and their length), and you can view each in its full context (with a context menu option). Once this has been set up, you can have Google Earth open in a separate window and explore various images of the area around the waypoint whilst reading various comments about it in the ATLAS.ti network.

4. Using Google Earth snapshot images in ATLAS.ti

In ATLAS.ti it is possible to create a new primary document which is a static Google Earth image, like a photograph, and use that as data. This is called a Google Earth snapshot. In such a snapshot document you can create small quotation boxes and link these to text quotations or other photographic images with relationship links. In this way it becomes possible to examine an area as a series of locations with a range of comments made by respondents about those locations. This is a kind of work-around that helps you to think about an area in the light of the comments made about it, rather than having to start with a comment and then ‘fly’ to the location that it discusses.

To create a snapshot document, first open Google Earth within ATLAS.ti. One way of doing this is to open a Google Earth primary document which has one or more quotation points in the area that you wish to use. Make sure that Google Earth is appearing inside the ATLAS.ti frame (as illustrated in Figure 2, above). This time it is important that you do use the Google Earth navigation buttons to rotate, pan, tilt and zoom until the display shows exactly the view that you want to store. Then, use the assign / new GE Snapshot PD option (also visible in Figure 1, above) to create a new primary document that is a static version of that display image. Once again it will be necessary to rename this document in the primary document manager.

Tip

Before starting to make quotations in the snapshot image, create the appropriate relationship types that you will need to use. We created two additional relationships, "discusses" and "photographed". Note that the same label will be used for both ends of the link, and what makes sense in the snapshot margin may be less meaningful beside a transcript text.

Figure 7, below, shows an illustration of this technique. A snapshot of a part of the study area has been stored. Eleven separate locations have been marked and made into quotations; these appear as white-edged rectangles in the image. These quotations have then been linked to other data in the HU using the relationships "discusses" and "photographed".

Figure 7: Linking parts of a Google Earth Snapshot to other data in ATLAS.ti


Image: © 2011 Infoterra Ltd. and Bluesky.

As shown in Figure 7, above, when a quotation bracket is clicked in the code margin then that location rectangle is shaded with a pink colour in the image. The bracket always matches the rectangle on the vertical axis, so it is level with it and appears with the same height in the display. If the cursor is hovered over the relationship link (the blue label in the margin) then the name of the quotation at the other end of the link is displayed in a pop-up box. Here the first part of the transcript at quotation 11:40 can be seen in the pop-up box. A double-click on the link enlarges the pop-up box to display the full quotation at the other end of that link (if it is a text quotation), with the option to view that quotation in the full context of its primary document with a further click. The same technique can then be used to return to this document via that relationship link.

In this way it may be possible to identify patterns in the transcripts of multiple interviews that are connected by geographical proximity or some other features apparent in the Google Earth image.

Tip

Note that this suggested technique requires quite a lot of preliminary work by the analyst to mark all the quotations and create the links. In practice it may be more useful for presentation purposes than for discovery.

Another use for a Google Earth snapshot document is to keep a copy of an image that is important in the analysis. This might be a specific combination of the Google Earth navigation controls that reveals something that is not so apparent with the standard view, or it might be a historical image which reveals how a place has changed in recent years. In Google Earth it is possible to display a time line showing several different images from earlier years for the current location, and it may be useful to store one or more of these as a convenient record of such changes.

Summary

These features that link qualitative data in ATLAS.ti to images of specific locations in Google Earth have only recently been developed. It seems likely that more advanced functions will be added in the future, so the examples above are probably just the beginning of the possibilities for these sorts of technological linkages. However these facilities will bring challenges for analytical imagination and rigour. What is exciting right now is that you can look quite closely at any place in the country without moving from your desk, and that you can do so in the context of other material connected to a particular place. The guidance above is intended to help researchers to set up their data in ways that maximise the potential advantages of such geographical juxtapositions.

For MAXqda, the procedure involves storing key locations in Google Earth, saving these in special files accessible to a MAXqda project, and then creating hyperlinks within the MAXqda texts which will open Google Earth and ‘fly to’ the specified location whenever such a link is activated. The two programs (MAXqda and Google Earth) can then be operated side by side, screen space permitting, so that the images of the location can be examined whilst the related text is being read.

1. Store locations in Google Earth and create KML files

The first stage of this process is carried out in Google Earth, so begin by opening that program. Please note that this is not the same thing as Google Maps on the web; it is a separate program that can be downloaded from the internet. Observe on the left-hand side of the Google Earth screen that there are three sections of navigation and display controls called "search", "places", and "layers" respectively.

It is likely that your locations will be identified in one of two ways: either you will have collected exact point parameters with a GPS instrument, or you intend to select points from within Google Earth by examining images and manipulating the screen display. Both of these methods are described below.

There is also a choice with Google Earth as to whether to store location details within that program, in the "my places" section as well as in your CAQDAS program, or only to store the location details in the CAQDAS program. Reference to this choice is also made below where it becomes applicable.

Taking locations from a GPS

There are several ways to get point locations from a GPS into Google Earth; it would be tedious to describe them all, so we suggest a fairly general-purpose method that is not specific to any particular equipment or software. In the interests of accuracy, you should avoid retyping latitude and longitude co-ordinates wherever possible and use copy and paste methods when moving data from one system to another. For this operation we suggest working with each point separately, mainly because we envisage that the main analysis work in a CAQDAS package will involve examining a single location at any one time.

In our example project we downloaded the GPS data file into its proprietary software on the analysis computer, then copied and pasted the specific waypoint data into a spreadsheet. Typically this data includes details such as date and time, altitude, and fields for descriptive data in addition to the co-ordinates that locate the point in the landscape. From the spreadsheet it was possible to copy the co-ordinates for one waypoint, paste them into the "fly to" field in the Google Earth Search section, and then click on the search button to adjust the image so that it centred on the specified point.
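If your GPS software can export its data as a GPX file (a common XML interchange format for GPS devices), the copy-and-paste route into a spreadsheet can be replaced by a short script. The sketch below is one way of doing this, assuming a GPX 1.1 export; the file names `walk1.gpx` and `waypoints.csv` are purely illustrative:

```python
# Sketch: extract waypoint names and co-ordinates from a GPX export
# into a CSV file, avoiding any retyping of latitude/longitude values.
import csv
import xml.etree.ElementTree as ET

GPX_NS = "{http://www.topografix.com/GPX/1/1}"  # GPX 1.1 namespace

def gpx_waypoints(gpx_path):
    """Yield (name, latitude, longitude) for each <wpt> in a GPX file."""
    root = ET.parse(gpx_path).getroot()
    for wpt in root.iter(GPX_NS + "wpt"):
        name_el = wpt.find(GPX_NS + "name")
        name = name_el.text if name_el is not None else ""
        yield name, float(wpt.get("lat")), float(wpt.get("lon"))

def waypoints_to_csv(gpx_path, csv_path):
    """Write the waypoints out as a simple spreadsheet-readable CSV."""
    with open(csv_path, "w", newline="") as out:
        writer = csv.writer(out)
        writer.writerow(["name", "latitude", "longitude"])
        for row in gpx_waypoints(gpx_path):
            writer.writerow(row)

# Example usage: waypoints_to_csv("walk1.gpx", "waypoints.csv")
```

The resulting CSV can be opened in any spreadsheet program, from which individual co-ordinate pairs can be copied into the Google Earth "fly to" field as described above.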

To use the waypoint with MAXqda it is necessary to create a KML file. If you move the mouse pointer over the lower part of the Search section (where your latest search co-ordinates are visible) and right-click, the context menu that appears should include the option to "save as", and this is the one required. The "save as" dialog offers a choice of KML or KMZ file types, and MAXqda requires the KML type so select that first. Then use the pull-down menu to navigate to a convenient folder, probably the one where the main project file is stored or else the folder where other data for the project is stored, and finally apply a unique name for the file that will be identifiable when you come to use it.
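The KML files produced by "save as" are plain XML, so if you have many waypoints it is possible to generate them in a batch rather than saving each one by hand. The sketch below follows the OGC KML 2.2 structure for a single placemark; note that KML orders co-ordinates as longitude,latitude, the reverse of the usual spoken order. The name and co-ordinates shown are invented examples:

```python
# Sketch: write a minimal KML file for one waypoint, comparable to the
# file Google Earth produces via "save as" (KML type).
def write_kml(path, name, lat, lon):
    """Write a single-Placemark KML file at the given latitude/longitude."""
    kml = (
        '<?xml version="1.0" encoding="UTF-8"?>\n'
        '<kml xmlns="http://www.opengis.net/kml/2.2">\n'
        "  <Placemark>\n"
        f"    <name>{name}</name>\n"
        # KML co-ordinate order is longitude,latitude[,altitude]
        f"    <Point><coordinates>{lon},{lat},0</coordinates></Point>\n"
        "  </Placemark>\n"
        "</kml>\n"
    )
    with open(path, "w", encoding="utf-8") as f:
        f.write(kml)

# Example usage: write_kml("waypoint82.kml", "waypoint 82", 51.2376, -0.5704)
```

Each file produced this way can then be attached as a geolink in MAXqda exactly as described in step 2 below.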

It is also possible to store locations in the "places" section in Google Earth (using "save to my places" from the same context menu as above), and to create a folder system to organise such places in appropriate groups. We created a folder for each interview and stored all the locations marked for that interview in there. These locations will be loaded each time you open Google Earth and so can be accessed independently of any other programs. To create new folders in "my places", right-click in that section of the screen and select the "add / folder" menu option. Items can be moved around within the "my places" section using standard drag and drop procedures.

Creating locations within Google Earth (without using a GPS)

This set of guidance instructions is applicable when you want to start the locating process in Google Earth. For example we are aware of a research project that collects its data by examining images in Google Earth street view, looking for signs of a particular phenomenon, and marking the points where such signs have been observed.

In Google Earth use the navigation controls to move the image until the point you wish to mark is in the centre of the window. To fine-tune this position, zoom in closer and observe where the circular zoom focus marker appears as you do so (this indicates the centre of the display); the closer the image comes to the ground, the smaller the error of location. Then zoom back out again until the view appears satisfactory. Use either the menu option Add / Placemark or the map pin icon on the toolbar (second from the left) to open the "new placemark" dialog.

If you have arranged the display correctly centred on the desired point, all that is essential in the dialog is a name label. However there are four tabbed parts to the dialog where you can store further details which will affect the way the point displays when you next fly to it. On the "style, colour" tab you can alter the way that the symbol and label will be displayed for this place in the future. On the "view" tab you can set the initial height, heading and tilt parameters for this place; these are normally adjusted with the navigation controls as you look at a place from different angles. On the "altitude" tab you can adjust the way the marker symbol links to the exact point: it does not have to remain partly obscuring the detail but can be shown like a balloon tethered to the point by a fine string. Finally, click on the "OK" button to save the location in "my places". Further editing of the display settings for any stored place can be done by right-clicking on that point in "my places" and selecting the "properties" option from the context menu.
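For reference, the settings made on the "view" tab are stored inside the placemark as a KML LookAt element when the place is saved or exported. The fragment below is illustrative only (the co-ordinates and values are invented), but it shows where the heading, tilt, and viewing distance end up in the file:

```xml
<Placemark>
  <name>waypoint 82</name>
  <LookAt>
    <longitude>-0.5704</longitude>
    <latitude>51.2376</latitude>
    <heading>45</heading>   <!-- compass direction of view, in degrees -->
    <tilt>60</tilt>         <!-- 0 = looking straight down, 90 = horizontal -->
    <range>500</range>      <!-- camera distance from the point, in metres -->
  </LookAt>
  <Point><coordinates>-0.5704,51.2376,0</coordinates></Point>
</Placemark>
```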

To use the point with MAXqda it is necessary to create a KML file. Select the point in "my places" and right-click, the context menu that appears should include the option to "save as", and this is the one required. The "save as" dialog offers a choice of KML or KMZ file types, and MAXqda requires the KML type so select that first. Then use the pull-down menu to navigate to a convenient folder, probably the one where the main project file is stored or else the folder where other data for the project is stored, and finally apply a unique name for the file that will be identifiable when you come to use it.

2. In MAXqda create Google Earth links using the KML files

When you have stored the required place details in KML files, it is then possible to use these as hyperlinks from within MAXqda. Open your project in MAXqda and open a relevant document, such as an interview transcript, in the document browser window. Scroll down the document until you find a passage of text (or other data) that relates to a location for which you have created a KML file.

The procedure for creating the link from MAXqda to Google Earth is very simple. Highlight a short passage of the text using similar techniques to those for marking a passage to be coded. But instead of selecting a code, right-click and select the option "insert geolink" from the context menu that appears. In the dialog box that opens, navigate to the folder where you stored the KML files created at step 1 above, select the appropriate KML file identified by its file name/label, and click on the "open" button. Note that two or more passages of text can be linked to the same geolink KML file, but any one text passage can only carry one link, i.e. either a geolink to one KML file or a text link to another segment of data.

Figure 1, below, shows how a geolink appears in a MAXqda text document. The linked text is shown in blue and underlined (in common with other hyperlinks in MAXqda) and a green disk icon can be seen in a separate margin to the left of the text.

Figure 1: A Google Earth hyperlink in MAXqda


The geolink works in two ways. If you hover the mouse pointer over the hyperlinked text then the place label and its latitude and longitude details appear in a pop-up box. If you click on the link then the Google Earth program will open and fly directly to that point. Provided you have sufficient screen space, you should be able to arrange the windows so that you can read your transcript (or other qualitative data) in MAXqda on one side and manipulate Google Earth to explore the visual aspects of that locality on the other side. 

Tips

  • We suggest that you put MAXqda on the left and Google Earth on the right of your screen so that the Google Earth navigation controls (zoom, tilt, pan and rotate) are always visible.
  • The KML files are used in the process of applying the geolinks, so it is important that they are not moved or edited once the links have been set up. MAXqda suggests that you store these KML files in a set location (along with any other external objects, such as videos or audio files). In the menu option “Project / Options” you can choose a location for a folder called “Backup externals” and this is where these files should be saved. Then, when the project has to be backed up or moved to another computer this folder should be backed up or moved as well.
  • We suggest that when you close Google Earth you should use the option given to discard all the items in the Temporary Places folder to prevent this becoming cluttered with all of the locations that you explore. There is no need to save these places again as the KML files in the MAXqda project make them always accessible.

3. Use Google Earth snapshot images in MAXqda

Once the geolinks have been set up they facilitate using all of the features in Google Earth to examine the appearance of a place side by side with the corresponding analysis of the qualitative data in your project. Google Earth always opens as a separate program beside MAXqda when a link is activated, so a careful arrangement of the screen allows the two types of data to be displayed side by side.

If you have retained audio files, say from interviews or focus groups, and have synchronised these with their transcripts in your project, then it can be particularly useful to listen to significant passages of the audio whilst using Google Earth to look at the place being discussed from a variety of heights and angles. In this way the comments made by research informants about specific locations may come to life more fully.

However, the connections that these hyperlinks represent only work in one direction: they make it easy to virtually ‘fly’ to a place from the starting point of some data connected to that place, but there is no direct way of achieving the opposite effect, i.e. to start at a specific place and find all of the comments about it in various parts of the data. At present this can be partly achieved with a work-around which is illustrated in Figure 2 below.

Figure 2:  Using textlinks with a Google Earth snapshot


Image ©2010 Infoterra Ltd. and Bluesky (from Google Earth).

Google Earth can be used to store a static image, or ‘snapshot’, of an area, and such an image can be brought into a MAXqda project as a JPEG format file and used like any other photograph. This was done as a preliminary step before Figure 2 was created. When the image file is opened in the document browser window, small areas of it can be marked with the mouse pointer by click and drag operations. As each rectangle is created, a right-click opens a context menu from which the option "insert text link" can be selected. You then open the relevant transcript document, scroll to the part where the particular location is discussed, and mark a passage of text there as the other end of the text link. So, in Figure 2 above, several locations in one part of the study area have been linked to text segments in various interview transcripts. A click on any blue rectangle will jump the browser to the linked text in that transcript, and a further click there will jump back to the image shown. However, if you merely hover the cursor over a rectangle it changes to a yellow colour and the linked text appears in a pop-up box (as can be seen in Figure 2), which makes it possible to explore a series of segments selected according to their location in the image.

This work-around has no automation assistance: every text link has to be created by the analyst before it can be used in the way suggested, so there is no way for the program to search for references to a location (other than by text searching for place names). Something similar could have been achieved without the geo-linking facilities by applying a thematic code to texts where they refer to a particular area, and then retrieving all of those coded segments. But it is possible that the combination of the visual image of the area and the pop-up text boxes might help the analyst to ‘see’ patterns or connections that have a spatial aspect as well as a thematic aspect.

A small problem with this work-around is that in MAXqda a text link can only connect one pair of objects, so it is difficult to see how this procedure will work if several respondents have discussed exactly the same location. It may then be necessary to use a thematic code to connect all of those comments. As can be seen in Figure 2, it is possible to apply a thematic code to each text link rectangle in the image, and then the facility to view all of the segments to which that code has been applied in the retrieved segments panel should allow them to be read in close proximity to the spatial image.

Summary

These features that link qualitative data in MAXqda to images of specific locations in Google Earth have only recently been developed. It seems likely that more advanced functions will be added in the future, so the examples above are probably just the beginning of the possibilities for these sorts of technological linkages. However these facilities will bring challenges for analytical imagination and rigour. What is exciting right now is that you can look quite closely at any place in the country without moving from your desk, and that you can do so in the context of other material connected to a particular place. The guidance above is intended to help researchers to set up their data in ways that maximise the potential advantages of such geographical juxtapositions.

As with MAXqda, the procedure involves storing key locations in Google Earth, saving these in special files accessible to an NVivo project, and then creating hyperlinks within the NVivo texts which will open Google Earth and ‘fly to’ the specified location whenever such a link is activated. The two programs (NVivo and Google Earth) can then be operated side by side, screen space permitting, so that the images of the location can be examined whilst the related text is being read.

1. Store locations in Google Earth and create KMZ files

The first stage of this process is carried out in Google Earth, so begin by opening that program. Please note that this is not the same thing as Google Maps on the web; it is a separate program that can be downloaded from the internet. Observe on the left-hand side of the Google Earth screen that there are three sections of navigation and display controls called "search", "places", and "layers" respectively.

It is likely that your locations will be identified in one of two ways: either you will have collected exact point parameters with a GPS instrument, or you intend to select points from within Google Earth by examining images and manipulating the screen display. Both of these methods are described below.

There is also a choice with Google Earth as to whether to store location details within that program, in the "my places" section as well as in your CAQDAS program, or only to store the location details in the CAQDAS program. Reference to this choice is also made below where it becomes applicable.

Taking locations from a GPS

There are several ways to get point locations from a GPS into Google Earth; it would be tedious to describe them all, so we suggest a fairly general-purpose method that is not specific to any particular equipment or software. In the interests of accuracy, you should avoid retyping latitude and longitude co-ordinates wherever possible and use copy and paste methods when moving data from one system to another. For this operation we suggest working with each point separately, mainly because we envisage that the main analysis work in a CAQDAS package will involve examining a single location at any one time.

In our example project we downloaded the GPS data file into its proprietary software on the analysis computer, then copied and pasted the specific waypoint data into a spreadsheet. Typically this data includes details such as date and time, altitude, and fields for descriptive data in addition to the co-ordinates that locate the point in the landscape. From the spreadsheet it was possible to copy the co-ordinates for one waypoint, paste them into the “Fly to” field in the Google Earth Search section, and then click on the search button to adjust the image so that it centred on the specified point.

To use the waypoint with NVivo it is necessary to create a KMZ file. If you move the mouse pointer over the lower part of the Search section (where your latest search co-ordinates are visible) and right-click, the context menu that appears should include the option to "save as", and this is the one required. The "save as" dialog offers a choice of KML or KMZ file types, and NVivo requires the KMZ type so select that first. Then use the pull-down menu to navigate to a convenient folder, probably the one where the main project file is stored or else the folder where other data for the project is stored, and finally apply a unique name for the file that will be identifiable when you come to use it.
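A KMZ file is simply a ZIP archive whose main entry is a KML document, conventionally named doc.kml. This means that if you have already created KML files for these waypoints (for example, for a MAXqda project), they can be converted for NVivo without returning to Google Earth. A minimal sketch, with illustrative file names:

```python
# Sketch: package an existing KML file as a KMZ archive.
# A KMZ is a ZIP archive; by convention its main entry is "doc.kml".
import zipfile

def kml_to_kmz(kml_path, kmz_path):
    """Wrap a single KML file as a KMZ archive suitable for NVivo."""
    with zipfile.ZipFile(kmz_path, "w", zipfile.ZIP_DEFLATED) as kmz:
        kmz.write(kml_path, arcname="doc.kml")

# Example usage: kml_to_kmz("waypoint82.kml", "waypoint82.kmz")
```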

It is also possible to store locations in the "places" section in Google Earth (using "save to my places" from the same context menu as above), and to create a folder system to organise such places in appropriate groups. We created a folder for each interview and stored all the locations marked for that interview in there. These locations will be loaded each time you open Google Earth and so can be accessed independently of any other programs. To create new folders in "my places", right-click in that section of the screen and select the "add / folder" menu option. Items can be moved around within the "my places" section using standard drag and drop procedures.

Creating locations within Google Earth (without using a GPS)

This set of guidance instructions is applicable when you want to start the locating process in Google Earth. For example we are aware of a research project that collects its data by examining images in Google Earth street view, looking for signs of a particular phenomenon, and marking the points where such signs have been observed.

In Google Earth use the navigation controls to move the image until the point you wish to mark is in the centre of the window. To fine-tune this position, zoom in closer and observe where the circular zoom focus marker appears as you do so (this indicates the centre of the display); the closer the image comes to the ground, the smaller the error of location. Then zoom back out again until the view appears satisfactory. Use either the menu option add / placemark or the map pin icon on the toolbar (second from the left) to open the "new placemark" dialog.

If you have arranged the display correctly centred on the desired point, all that is essential in the dialog is a name label. However there are four tabbed parts to the dialog where you can store further details which will affect the way the point displays when you next fly to it. On the "style, colour" tab you can alter the way that the symbol and label will be displayed for this place in the future. On the "view" tab you can set the initial height, heading and tilt parameters for this place; these are normally adjusted with the navigation controls as you look at a place from different angles. On the "altitude" tab you can adjust the way the marker symbol links to the exact point: it does not have to remain partly obscuring the detail but can be shown like a balloon tethered to the point by a fine string. Finally, a click on the "OK" button saves the location in "my places". Further editing of the display settings for any stored place can be done by right-clicking on that point in "my places" and selecting the "properties" option from the context menu.

To use the point with NVivo it is necessary to create a KMZ file. Select the point in "my places" and right-click, the context menu that appears should include the option to "save as", and this is the one required. The "save as" dialog offers a choice of KML or KMZ file types, and NVivo requires the KMZ type so select that first. Then use the pull-down menu to navigate to a convenient folder, probably the one where the main project file is stored or else the folder where other data for the project is stored, and finally apply a unique name for the file that will be identifiable when you come to use it.

2. In NVivo create Google Earth links using the KMZ files

When you have stored the required place details in KMZ files, it is then possible to use these to open Google Earth from within NVivo. It is likely that you will want to mark selected passages of text, that relate to specific places on the ground, in such a way that you can easily find the satellite or street view images of those places while thinking about the text.

The procedure for creating such links from NVivo to Google Earth is quite cumbersome, so the following guidance notes should be followed carefully until you have mastered the process. In outline you will need to create a separate "external" source containing the KMZ file for each place, then you will need to create a "see also link" from the selected passage of text to that external source. In order to use the link subsequently, it will be necessary to activate the "see also link" to open the external source and then to use a key-stroke combination to activate the KMZ file and use it in Google Earth.

Creating external sources: in NVivo click on the "sources" button in the navigation pane and then on the "externals" folder within that. If necessary create a sub-folder within "externals" for the collection of geo links; we have called ours "waypoints" (see Figure 1, below). Open that folder in the list view pane. From the "new" button on the main toolbar, or from the context menu for a right-click in the list view pane, select "new external in this folder" to open the dialog box shown in Figure 1. This dialog has two tabs, "general" and "external", which can be completed in any order. On the "general" tab (not shown in Figure 1) enter the name for this source; this will appear in the list view pane when you complete the dialog. On the "external" tab (shown) leave the "type" field at its default setting of "file link", click on the "browse" button to open another dialog, navigate to the folder where you stored the KMZ files when they were created in Google Earth, and select the file for this waypoint. Click "OK" and observe the new source appear in the list.

Figure 1: Creating an external source file in NVivo 8


In Figure 1, above, a new external source for "WP96 Leisure Centre" is being created. The KMZ file is stored in a folder called "NVivo geo data" (see the file path within the dialog) and this source will be saved within the "waypoints" folder under "externals" (it will appear beneath "WP95 Community Centre" in the list view pane just behind the dialog box). It is possible to use different names for the KMZ file and the external source label but this may become confusing so we suggest using the same descriptive name for both.

When you click on the "OK" button to complete the "new external" dialog, that source will be ‘opened’ in the detail view panel of your NVivo working screen. At this stage there is no detail, so it looks like a blank document. However, later on, in order to make the program open Google Earth and fly to the place whose details are in the KMZ file, it will be necessary to use a double key-stroke command (which is hard to remember), so we suggest that you type that into this blank document, as shown in Figure 2, below; it will then be there as a reminder when you need it.

Figure 2: Add a reminder statement to each new external source geo link


In Figure 2, above, note the text "Alt P + Alt F to open Google Earth", which has been typed into the otherwise blank source document.

Tip

Note that it is necessary to create a separate external source in NVivo for each waypoint in your data.

Creating "see also links": with the external sources set up, it is now time to turn to the data from which you will want to activate them. It is most likely that these will be in the "internal" sources of your project, so open that section in the list view panel. It will be possible to create a link from almost any type of qualitative data in your project to Google Earth, but we will use a textual transcript of an interview as a basic example.

Open a source document in the detailed view panel and scroll through it to find an appropriate passage that may be linked to one of the Google Earth places. Highlight the passage of text as though you are about to apply a code to it, then right-click to show a context menu and select "links / see also link / new see also link..." from the successive menu lists that appear. This is illustrated in Figure 3, below.

Figure 3: See also Link menu hierarchy


Clicking on the final step in Figure 3 brings up a new dialog box, as shown in Figure 4, below. In this dialog the upper section, "from...", will already be completed as a result of the highlighting of the text passage. The middle section, "to..." is the part that has to be filled in. The field "option" should default to "existing item" and this is correct (if you have already created the "external sources" as described above). So all that is required is to click on the "select" button, navigate in the "select project items" dialog that opens to the "sources / externals / waypoints" folder, and tick the box for the required waypoint link created earlier (in our example "WP95 Community Centre").

Figure 4: See also link dialog


Finally, a click on the "OK" button completes the process of establishing the "see also" link. The highlighted text which has been linked in this way in the interview transcript is now shaded with a pink colour and a new section opens up at the bottom of the detailed view panel to list the "see also" links in the opened source document. These effects are shown in Figure 5, below.

Figure 5: How "see also" links appear in the detailed view panel


The pink colouring is always visible, indicating the presence of a "see also" link, but the highlighted text is not an active link itself. To make the link work you have to select it from the list in the lower part of the detailed view. (If the "see also links" section is not showing, it can be opened with an icon on the "view" toolbar.) A single click on an item in the "see also links" section causes the pink highlighting to change to a brighter red colour and the document to display the linked passage, permitting a check that the correct link has been selected. A double-click on such an item (or a right-click context menu option) causes the linked external file to be opened – this will be a blank document, unless you have followed our suggestion of adding the double key-stroke reminder above (see Figure 2).

From the linked external source that has opened you have to apply the key strokes Alt with P, followed by Alt with F (upper or lower case are both OK). This will open Google Earth as a separate program in front of NVivo and fly directly to the location stored in the applicable KMZ file. That location will also be added to the "temporary places" folder inside Google Earth. Provided you have sufficient screen space, you should be able to arrange the windows so that you can read your transcript (or other qualitative data) in NVivo on one side and manipulate Google Earth to explore the visual aspects of that locality on the other side.

Tip

  • We suggest that you put NVivo on the left and Google Earth on the right of your screen so that the Google Earth navigation controls (zoom, tilt, pan and rotate) are always visible.
  • We suggest that when you close Google Earth you should use the option given to discard all the items in the temporary places folder to prevent this becoming cluttered with all of the locations that you explore. There is no need to save these places again as the KMZ files and external sources in the NVivo project make them always accessible.

3. Notes for NVivo v9 users

The above procedures are essentially the same in NVivo v9. However the way some of the options are displayed has changed with the ‘ribbon’ format of menus and toolbars.

The option to create the new external source can be found on the "create" ribbon as option "external". It then works very much as described above, although an additional "attributes" tab has been added which can be ignored for this particular application. Note that you will not find this option under the "external data" ribbon.

The option to display the "see also links" section in the detailed view panel will be found on the "view" ribbon beside its own check box.

Summary

These features that link qualitative data in NVivo to images of specific locations in Google Earth have only recently been developed. It seems likely that more advanced functions will be added in the future, so the examples above are probably just the beginning of the possibilities for these sorts of technological linkages. However these facilities will bring challenges for analytical imagination and rigour. What is exciting right now is that you can look quite closely at any place in the country without moving from your desk, and that you can do so in the context of other material connected to a particular place. The guidance above is intended to help researchers to set up their data in ways that maximise the potential advantages of such geographical juxtapositions.

Using audio clips and Google Earth

This section illustrates some possible techniques which can be used in CAQDAS software to enhance the integration of qualitative data with geo-spatial data, in particular, with location images in Google Earth. A basic underlying principle is that, although a research analyst can only look at one thing at a time, it is possible to look at one thing whilst simultaneously listening to another. Thus facilities that help the analyst to listen to relevant clips from audio recordings of interviews at the same time as viewing detailed images of the place being discussed in those clips should help generate deeper insights about those places than would be possible from reading transcripts and viewing images together.

The functions and processes described below are illustrated with examples from the same pilot study as was used in the page of guidance about linking textual data to Google Earth. More details about the data collection methods can be found in the exemplar project description and in the material on mobile interviews.

1. Synchronising digital audio and transcript text in ATLAS.ti

In order to create the possibility of synchronising a digital audio file with its transcript it is essential that the transcript contains frequent time-stamp data. These are identifiers of precise time points which can be interpreted by ATLAS.ti.

Along with the publishers of ATLAS.ti, we would recommend that transcription is done using a freeware program called "F4". This readily accessible program can be downloaded from the internet and used straight away. It is quite simple to use and has good functionality, with variable playback speeds, adjustable spooling intervals (the step back on the audio each time you stop the playback to re-listen to the last bit of audio) and programmable key-strokes for frequent words (such as respondent identifiers). If required, a foot-pedal can be purchased to use with F4, but it can be used quite effectively without one by using function key 4 to stop and start playback, hence the name.
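To give a concrete idea of what these time-stamps look like inside a transcript, here is a small Python sketch that extracts them and converts them to elapsed seconds. The "#hh:mm:ss-f#" pattern shown is an assumption based on the common F4 style; check your own transcripts, since the exact format depends on the F4 settings used.

```python
import re

# Assumed F4-style time-stamp: "#hh:mm:ss-f#" (f = tenths of a second).
STAMP = re.compile(r"#(\d{2}):(\d{2}):(\d{2})-(\d)#")

def stamp_seconds(match: re.Match) -> float:
    """Convert one matched time-stamp to elapsed seconds in the audio file."""
    h, m, s, tenths = map(int, match.groups())
    return h * 3600 + m * 60 + s + tenths / 10

transcript = (
    "I: Where are we now? #00:35:02-5#\n"
    "R: Just by the community centre. #00:35:05-0#\n"
)
times = [stamp_seconds(m) for m in STAMP.finditer(transcript)]
# times -> [2102.5, 2105.0]
```

This is only an illustration of how software such as ATLAS.ti can locate audio positions from the transcript text; the programs do this parsing for you.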

Tip

It is advisable, before starting to transcribe, to set up your transcription with the audio file and the transcript file in the same folder as your other ATLAS.ti data, probably the "textbank" folder. If someone else is doing the transcription, ask them to keep the audio and transcript files in the same folder so that they can be copied together into your ATLAS data folder.

When the transcription work is complete you bring this data into an ATLAS.ti project in a different way from other data. For this process use the option "A-docs / import F4 document" from the main menu bar (see Figure 1 below). In the navigation dialog that opens from this option, select the transcript file, in RTF format, for the data that you wish to use. The program will detect the links with the audio file automatically and will add two new primary documents (PDs) to your project, one for the transcript and one for the audio file. The new transcript PD will have a ‘memo’ icon in the primary document manager, while the audio file has a musical note icon. The transcript file will actually be stored within the hermeneutic unit (HU), or project file, like other memos. This is unusual for ATLAS.ti, where normally text documents that have been assigned to the HU are accessed in their original location and so are stored outside the HU. So the transcript memos are inside the HU while the audio "WMA" files are stored outside it.

Figure 1, below, shows an example of a transcript memo that has been opened in an ATLAS.ti project and partially coded. The red dots visible in this image indicate the presence of time-stamps (when this was transcribed the settings in F4 applied a time-stamp to the end of every paragraph and the beginning of the next paragraph). Apart from the red dots and the memo icon in the PD Manager, this document works like any other text document in ATLAS.ti, so quotations can be marked and coded as normal.

Figure 1: F4 Transcript imported into ATLAS.ti with synchronised audio


2. Listening to specific sections of the audio

Figure 1, above, also shows the set of options in the A-docs menu, and these include important functions for working with the audio itself. These should be explored and experimented with until you understand how to control the audio links effectively, but here are some notes to help you.

There are three possible ‘modes’ for the audio: the default, "synchro mode", and "karaoke". Some functions work differently in each of these modes, so you need to be aware of which mode you are in. Initially we would recommend that you use the A-docs menu to change modes, where you have a visual cue (with ticks replacing the icons beside the relevant mode) to show the current mode; later you can use the short-cut key-strokes when you are familiar with their effects.

With the default mode there is just one audio function – if you highlight a segment of text (either directly with the mouse or indirectly by selecting an existing quotation) and press Ctrl+P (or use the menu option "A-docs / play selected text") you should hear the source audio that matches the passage you have highlighted. Once you have started playing such a passage you cannot stop the audio until it has finished that segment.

Tip

You may notice some inaccuracy of the start and stop points in this process. This may be caused by inaccuracy when the time stamps were made in the transcription process (hitting the return key at the wrong moment), or by problems of interpolation if the passage you have highlighted starts or stops quite far away from a time-stamp. If this is a problem, reselect the passage after allowing for such differences by varying the start and stop points in the text appropriately.

When you turn on "synchro mode" (from the A-docs menu, or with F3) you can control the audio playback from the text, and once you have played some audio you will see a window with audio control buttons overlay the main screen. This is illustrated in Figure 2, below. You do not need to highlight a passage of text to play in this mode. Simply click to place the cursor at the point in the text where you would like to start the playback, and then press F4 (function key 4). The audio will start playing from that point and will continue until you either press F4 again or click on the pause or stop buttons in the audio controls window.

Figure 2: Audio controls window in synchro mode


From synchro mode you can turn on "karaoke" mode (either from the A-docs menu or with Ctrl+F4). In this mode you use the audio controls to select a playback point, and the text will scroll to that point and then be highlighted to show what is currently being played back. Move the slider button in the audio control window in either direction to select the next playback point and press F4 or the play control to start the audio; the time at the current point is shown in the bottom right corner of the window (showing "1:02:20.54" in Figure 2).

In both synchro mode and karaoke mode the audio will continue to play until you stop it with F4 or the pause or stop buttons in the audio control window. In karaoke mode the text in the main window will scroll automatically to continue displaying the text for the current audio being played until the audio is stopped.

What is really useful in ATLAS.ti is that, with these controls, you can mark audio quotations and use them as separate sound ‘clips’ in other parts of the program. An audio quotation is the equivalent of any other type of quotation, so it can be coded and/or hyperlinked. The advantages of this will be discussed below, but first we need to look at the process of creating an audio quotation.

It is probably best to use synchro mode for making audio quotations because the text transcript will be useful for locating the approximate time for that segment. Choose a passage of transcript for which you would like to create the audio quotation, and start the playback a little before it. Keep the audio running and, at the appropriate moments as you listen, click on the start and end buttons in the audio control window. These are shown in Figure 3: the start button has a darker wedge to the left, and the end button has a darker wedge to the right. The rectangular button is the commit button that actually creates the quotation from the two marks. (So, if you get either end wrong, you can close the audio down and start again without actually making that quotation.)

Figure 3: Quotation buttons in audio control window


In Figure 3, above, a quotation has been marked but not committed. The two tiny markers above the slider bar show the location of the start and stop marks against the time line for the whole interview. Above that are three boxes showing the elapsed time to the start marker, the elapsed time to the finish marker, and the length of the quotation (here it lasts for 33.10 seconds, ending at 1hr 2min 18.99sec). If you are happy that both marks were made at the right moments, click on the rectangular button beside the start and stop marker buttons to create the quotation (this is the equivalent of the inverted commas button in the left margin toolbar for making text quotations).
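The three time boxes in Figure 3 are related by simple arithmetic, which can serve as a quick cross-check when marking clips. Assuming the figures quoted above, the start of the quotation can be recovered from the end time and the length:

```python
def to_seconds(h: int, m: int, s: float) -> float:
    """Convert an h:m:s display value to total elapsed seconds."""
    return h * 3600 + m * 60 + s

end = to_seconds(1, 2, 18.99)    # elapsed time at the finish marker
length = 33.10                   # quotation length shown in Figure 3
start = round(end - length, 2)   # elapsed time at the start marker
# start is 3705.89 seconds, i.e. the clip begins at about 1:01:45.89
```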

Tip

After creating a quotation in this way it is a good idea to rename it in the Quotation Manager. By default the quotation name is merely the name of the audio file, and if you make several quotations from the same interview these will all have the same name which will make them difficult to use. You could use the first few words of the clip as the name, which would be similar to the way text quotations are named by default, or you could use a descriptive phrase which reminds you of what that clip contains.

The new quotation will be listed as the last item in the quotation manager (if that has not been re-sorted), or else in order of the PD number of the audio file (not the transcript document number). A double-click on an audio quotation in the quotation manager causes it to be played in full, so you can confirm that the start and stop points are correct.

Within the quotation manager, a right-click on an audio quotation generates a context menu from which you can apply thematic codes or make hyperlinks to other quotations. It is not necessary to hyperlink the audio quotation to a text quotation for the same passage of transcript as they are already linked by the synchronisation system.

3. Using audio in ATLAS.ti and viewing images in Google Earth together

We have identified two main uses for audio quotations when they relate to precise locations that can be identified in Google Earth: hyperlinking them to locations marked on a Google Earth snapshot image, or assembling a series of such quotations in a network model and playing them from there.

To use audio quotations with Google Earth snapshot images you should refer to the guidance on linking to Google Earth in ATLAS.ti. In that section we suggest ways of storing a Google Earth image and using it as data which can be marked up and hyperlinked to other elements of your data. It might be useful to create a new relationship for links to audio quotations, possibly "plays audio", to use with the links you can now make from the snapshot location to the audio that describes it.

When you have created such links, you can study the snapshot image and play clips from the interviews that relate to identified locations within it. Of course all of these connections have to be created by the analyst before they can be used in this way, so this is not going to be a process of discovery but rather one of enrichment of the data.

We think that the network model provides a more powerful form of analytic assistance. In our pilot project we conducted several mobile interviews in the same urban area, so there were many overlaps of location amongst them. For an important specific location we created audio quotations for all of the comments about that particular place made by our respondents. As each of these quotations was created we applied the thematic code that identified that place. When all of these quotations had been created, we opened a new network for that place code from the code manager by selecting the code, right-clicking on it, and selecting "open network view" from the context menu. This opens a new network window with the desired code already in it. In the network a right-click on the code label offers the option "import neighbors", and this adds all of the quotations linked to that code as a cascade of icons sloping down to the right. By clicking and dragging, each of these icons can be moved around the network view and arranged at your convenience for further examination.

The audio quotations can be identified by the loudspeaker symbol in the top left corner of the label. In this window a double-click on an audio quotation causes it to be played directly. Thus the conversations about that particular place can be played back in quick succession, simply by double-clicking on each loudspeaker label.

If Google Earth is now opened outside ATLAS.ti (i.e. not by selecting a Google Earth PD but by using a hyperlink to a Google Earth quotation), so that it can be viewed on one side of the screen while the network is visible within ATLAS.ti on the other, then both programs can be worked together. Select an audio quotation in ATLAS.ti and start it playing; then, while the audio continues to play, use the Google Earth controls to fly around that location, panning, rotating and zooming in as you like, maybe even going into street view to look at ground-level images of the place. When the first audio quotation finishes, select another on the ATLAS.ti side of the screen and continue exploring the location in Google Earth. This can be very powerful. In our experience you can observe features in the images that were missed during the interviews and hear undertones in the conversation that were not noted at the time and are not apparent in the transcript. Your brain can hear and see at the same time and identify conjunctions between those different types of data. And the ATLAS.ti network allows you to bring together all of your respondents to form a kind of “virtual crowd” discussing the place of interest.

Summary

It takes a certain investment of time and effort to master the techniques required and to assemble the data in the optimum format to make these juxtapositions of sounds and images. However, when this is done it should become possible to obtain much richer insights into the experiences your respondents have described in the places discussed.

The "virtual crowd" technique can be so evocative that there is a strong temptation to use it as a presentation device for disseminating the results of your research. What could be more convincing for a potential audience than to hear the original voices and see the actual places that they talk about? However you should pause to consider the ethical aspects of such a presentation. The combination of actual recorded voices with images of a precise location to which those respondents have a connection creates the possibility of accidentally revealing the identity of a respondent which would not exist with simple anonymised written quotations. The risk of such a breach of research ethics increases if you are presenting the research to other people with a connection to the same area, which may be precisely the audience that needs the impact of the raw data to appreciate your findings. So we advise that great care and attention should be given to any decision to use these techniques outside the analytical phase of your project, but within the project team there should be no need to hold back.

Using audio clips and Google Earth in MAXqda

1. Synchronising digital audio and transcript text in MAXqda

In order to create the possibility of synchronising a digital audio file with its transcript it is essential that the transcript contains frequent time-stamp data. These are identifiers of precise time points which can be interpreted by MAXqda.

Along with the publishers of MAXqda, we would recommend that transcription is done using a freeware program called "F4". This readily accessible program can be downloaded from the internet and used straight away. It is quite simple to use and has good functionality, with variable playback speeds, adjustable spooling intervals (the step back on the audio each time you stop the playback to re-listen to the last bit of audio) and programmable key-strokes for frequent words (such as respondent identifiers). If required, a foot-pedal can be purchased to use with F4, but it can be used quite effectively without one by using function key 4 to stop and start playback, hence the name.

Tip

  • It is advisable, before starting to transcribe, to set up your transcription with the audio file and the transcript file in the same folder as your other MAXqda data, probably the "externals" folder. If someone else is doing the transcription, ask them to keep the audio and transcript files in the same folder so that they can be copied together into your MAXqda data folder.
  • The F4 program allows you to choose where to place the time-stamps; we have found that synchronisation works best in MAXqda when they are placed at the end of each paragraph or speech.

When the time comes to bring the transcript into your MAXqda project, use the same routine as for any other data, documents / import document(s), and navigate to the location where you have stored the transcript file. MAXqda should detect the link to the audio file (if you have used F4) and will show a special icon in the document system window to indicate the connection, a musical note in front of the document icon (see Figure 1 below).

Tip

You can check that the transcript is linked to the correct audio file by right-clicking on the transcript in the document system window and selecting "properties" from the context menu. In the properties window look at the "media file" line in the "multimedia" section, and check the full path and file name of the audio file that has been linked. If you have had to move files around after doing the transcription this might show an incorrect path; you can edit it by clicking on the part of the window showing the current path. A small box with three dots appears at the extreme right of the window, and clicking on this opens a navigation dialog in which you can locate the correct version of the audio file to be linked to the transcript. Complete the operation with OKs as required.

Figure 1, below, shows an extract from a working screen in MAXqda with several linked transcripts grouped under the heading "interviews" in the document system. Note in the document browser window that there is an extra column, between the paragraph numbers and the text, with clock symbols and graduated shading; this indicates the passages between time-stamps.

Figure 1: Transcript imported into MAXqda with synchronised audio file


Note also in Figure 1, above, the toolbar icons on the "media player" toolbar, the one just above the document system window in this illustration. The first, a red flag, activates the media player – when this is clicked "off" the time-stamp margin is not visible, so if you cannot see these clock symbols in your project try activating the media player with this button. Further along this toolbar are standard media control buttons to start and stop play-back, fast forward or back, and time counters showing the current elapsed time in the file and the total time of the audio file (in this illustration 1 hour, 2 minutes, and 53 seconds).

2. Listening to specific sections of the audio

In MAXqda there are two modes for media play-back, synchronisation on or off. These are controlled by the third icon along the Media player toolbar shown in Figure 1, when the icon is highlighted as shown here then synchronisation mode is "on".

With synchronisation mode "off", it is possible to listen to the audio for a particular paragraph in the transcript. Simply scroll down the transcript in the document browser window to the desired paragraph and click on the clock symbol in the margin beside it. The text of that paragraph will be highlighted, the shading of the media player margin changes colour for that paragraph, and the audio starts playing from the time-stamp at the end of the previous paragraph. The audio will not stop at the end of the paragraph but will continue until you press F4 on the keyboard, or the pause/play button on the media player toolbar on screen. When playback continues into the next paragraph the text highlighting stays unchanged on the paragraph where you started playback, but the subtle colour change in the media player margin continues to adjust and indicates the current paragraph being played.

With synchronisation mode "on", the playback of a particular paragraph works in a similar way to that described above, but the text highlighting also moves with the audio as subsequent paragraphs are played back. In addition, the second button on the media player toolbar (see Figure 1, above) can be used to open another window with a list of timestamps – see Figure 2, below.

Figure 2: Timestamps list window with synchronisation "on"


The timestamps window that has been opened in Figure 2 shows a list of all the timestamps in the current transcript that is open in the document browser window, with the beginning and ending times for each. There is an additional column in which users can record brief comments about that segment of the audio – in this illustration we have added "graffiti on community centre" against the passage that started at 35 minutes and 5 seconds. A double-click on the clock symbol at the left in the timestamps window will start audio playback at that beginning time and, because this only works with synchronisation mode "on", the equivalent part of the transcript will also be highlighted in the document browser window. Once again playback will continue until you stop it with F4 or the pause/play button in the media player toolbar.

At the time of writing, there are no facilities to apply codes to the comments in the timestamps list, although there is a search function that is activated with a right-click on the column header bar in that window. So access to specific passages of audio will mainly be achieved through the transcript texts themselves, or through the elapsed time value in the timestamps window.

3. Using audio in MAXqda and viewing images in Google Earth together

We have found that, because Google Earth is such a visual tool, it is difficult to read transcript texts closely whilst looking at images of the places being discussed and also thinking about how those different types of data relate to each other. However, listening to passages of the interview audio while using Google Earth can be rewarding, since the brain is well used to looking and listening simultaneously.

In MAXqda the audio passages have to be used in conjunction with the transcribed texts, so we suggest that the following procedures may be useful for achieving the juxtaposition of data just described.

Create a thematic code for each key location in your data and apply that code to the passages of transcript text that discuss, or perhaps took place at, that location. Then, by activating all of the transcript documents and the code for one place only, you will display all of those passages of text in the Retrieved Segments window. Working from the retrieved segments, click on a passage in that window to display it in the Document Browser above, and then click on the timestamp icon in the margin beside the highlighted text to start audio playback for that passage. Use F4 or the play/pause button to stop the audio, then select another segment in the Retrieved Segments window to bring it into the Document Browser, from where its audio can be played back in turn. In this way, with a little practice, you can listen to a series of audio passages related to one particular place with only a few mouse clicks between each.

Now, open Google Earth at the location for which you have selected the code (possibly by using a Google Earth hyperlink in the document browser at one of the activated segments) and arrange the screen so that Google Earth is visible on the right and MAXqda on the left. By moving between the two programs you should be able to explore the images in Google Earth and street view at the same time as listening to your respondents talking about that place. This can be very powerful when the Google Earth navigation controls to rotate, pan and zoom the images are used while the audio is playing. In our experience you can observe features in the images that were missed during the interviews and hear undertones in the conversation that were not noted at the time and are not apparent in the transcript. Your brain can hear and see at the same time and identify conjunctions between those different types of data.
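As a supplementary route to opening Google Earth at a chosen location, Google Earth can also be pointed at a place by opening a small KML file (KML is the open format Google Earth reads natively). The sketch below writes such a file for a single placemark; the file name, placemark name and coordinates are invented for illustration and would need to be replaced with your own:

```python
# Minimal sketch: write a single-placemark KML file that Google Earth
# can open, flying the view to that location. All names and coordinates
# here are hypothetical examples.
KML_TEMPLATE = """<?xml version="1.0" encoding="UTF-8"?>
<kml xmlns="http://www.opengis.net/kml/2.2">
  <Placemark>
    <name>{name}</name>
    <description>{description}</description>
    <Point>
      <!-- note: KML coordinate order is longitude,latitude -->
      <coordinates>{lon},{lat}</coordinates>
    </Point>
  </Placemark>
</kml>
"""

def write_placemark(path, name, description, lon, lat):
    """Write a one-placemark KML file to the given path."""
    with open(path, "w", encoding="utf-8") as f:
        f.write(KML_TEMPLATE.format(name=name, description=description,
                                    lon=lon, lat=lat))

# e.g. a coded location such as a community centre (coordinates invented):
write_placemark("community_centre.kml", "Community centre",
                "Passages coded to this location", -0.59, 51.24)
```

Double-clicking the resulting file opens Google Earth at the placemark, after which the navigation controls can be used as described above while the audio plays in MAXqda.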

This is not a seamless operation, as each separate passage of audio still has to be selected from the retrieved segments window and played from the document browser window, but because both windows are visible on screen together this is fairly straightforward to do.

In some situations it may also be useful to use a related technique with segments of text that have been hyperlinked to a Google Earth snapshot image stored within the MAXqda project (as described in the linking MAXqda to Google Earth section). Whilst there is currently no way of playing the audio and looking at the snapshot simultaneously, the hyperlinks make it possible to view and listen sequentially. So, starting from the marked-up snapshot, use a hyperlink to jump to a transcript passage about that location, use the time-stamp icon to play the audio of that passage (a single passage can carry a hyperlink and also be synchronised with its audio), and then use the hyperlink to jump back to the snapshot view of the area, from where you can select another hyperlinked location and repeat the procedure. In this way you can view and listen alternately with minimal mouse clicks between each activity.

Summary

It takes a certain investment of time and effort to master the techniques required and to assemble the data in the optimum format for these juxtapositions of sounds and images. However, once this is done it should become possible to obtain much richer insights into the experiences your respondents have described in the places discussed.

With preparation and practice these techniques can be so evocative that there is a strong temptation to use them as presentation devices for disseminating the results of your research. What could be more convincing for an audience than to hear the original voices and see the actual places they talk about? However, you should pause to consider the ethical aspects of such a presentation. Combining actual recorded voices with images of a precise location to which those respondents have a connection creates the possibility of accidentally revealing a respondent's identity – a risk which would not exist with simple anonymised written quotations. The risk of such a breach of research ethics increases if you are presenting the research to other people with a connection to the same area, which may be precisely the audience that needs the impact of the raw data to appreciate your findings. So we advise that great care and attention should be given to any decision to use these techniques outside the analytical phase of your project, although within the project team there should be no need to hold back.

Further reading

Brown, L. & Durrheim, K. (2009) Different Kinds of Knowing – Generating Qualitative Data Through Mobile Interviewing, Qualitative Inquiry, 15(5), pp. 911-930.

Fincham, B., McGuinness, M. & Murray, L. (2010) Mobile Methodologies. Basingstoke: Palgrave Macmillan.

Hall, T., Lashua, B. & Coffey, A. (2008) Sound and the Everyday in Qualitative Research, Qualitative Inquiry, 14(6), pp. 1019-1040.

Sturgis, P. (2008) Designing Samples, in Gilbert, N. (ed.) Researching Social Life (3rd Edition). London: Sage Publications Ltd.