Blacklight User Interface Research

This page is a collection of known reports evaluating aspects of Blacklight’s user interface. Reports using various research methods are appropriate to include here (e.g., usability tests, accessibility evaluations, heuristic inspections, surveys, web analytics, etc.).

Have reports to share? Please add them here directly if you're able. Otherwise, reach out on the Code4Lib Slack #blacklight channel or the blacklight-development listserv.


Study: Blacklight Search UX (2024)

https://crkn-rcdr.gitbook.io/user-centered-design/study-blacklight-search-ux

  • Authors: Brittny Lapierre, Canadian Research Knowledge Network / CRKN-RCDR
  • Methods: Usability Testing (24 remote/unmoderated tests); Questionnaire
  • Platform: Blacklight 8 default UI, branded for CRKN's Canadiana collection

Aspects Evaluated

  • basic keyword searching
  • results per page element
  • sort results by element
  • facets: expanding facets that are collapsed by default
  • facets: in-modal navigation of all values for a given facet
  • date range facets; entering From & To values
  • removing applied filters; Reset Search

Key Findings / Recommendations

  • Facet Modals. Task success rates decreased as more modals were involved. Scrolling of the main window was not disabled while the modal/popup was active (likely a bug). Scrolling within the modal itself was confusing and frustrating. There was only a 50% success rate on a task involving a facet sorted by count when the target value was not on the first page of values; 33% of users clicked the A-Z sort button during that task to try to find the value. Next/previous links were hard to find and frustrating to have to use. A-Z links were expected, especially when values were sorted alphabetically.
  • Search Box. When tasks could be completed by either scoping the main search box to a particular field or narrowing by that field's possible values via its facet (find results matching a particular Author or Subject), slightly less than half of users (45.8% for Subject; 37.5% for Author) opted to use the search box. Of these users, only about half (63.4% for Subject; 44.4% for Author) used the field dropdown to scope their search.
  • Results per Page. 91% success rate, but unanimous confusion about whether the element actually worked, since no loading indicator was presented while the results were re-rendering.
  • Loading Indicators. Add a loading indicator on the search page that displays whenever the page sends a request to the back end for search results. This could be triggered by any of the sort options, the results-per-page control, new search terms, or facet selections (see the sketch after this list).
  • Removing All Applied Filters. 96% task success rate, but via different paths. Most users clicked X on the individual applied constraints at the top. Some (16.67%) clicked X on the individual applied constraints in the facet sidebar. Only 30% of users who noticed the top constraints section clicked the Reset Search button.
  • Sorting. 100% success rate for re-sorting results by "newest to oldest." Ensure "oldest to newest" is also an option.
  • Facet Labels. Ensure facet labels are easy to understand and differentiate. A facet title like "Collections" may be unintuitive.
  • Date Range Facet. Place the date range inputs directly under the Date Range facet header.
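
A minimal sketch of the recommended loading indicator is below. It is purely illustrative: the element id (search-loading-indicator) and the form/link selectors are hypothetical placeholders, not Blacklight's actual markup, and a real Blacklight 8 implementation would need to hook into its own templates (and any Turbo-driven navigation) instead.

```typescript
// Illustrative only: all ids and selectors below are hypothetical placeholders,
// not Blacklight's actual markup.

function showLoading(indicator: HTMLElement): void {
  indicator.hidden = false;
}

function hideLoading(indicator: HTMLElement): void {
  indicator.hidden = true;
}

document.addEventListener("DOMContentLoaded", () => {
  // e.g. <div id="search-loading-indicator" hidden>Loading results…</div>
  const indicator = document.getElementById("search-loading-indicator");
  if (indicator === null) {
    return;
  }

  // Show the indicator whenever a search-related form is submitted
  // (main search box, sort menu, results-per-page selector).
  document
    .querySelectorAll<HTMLFormElement>("form.search-form, form.sort-form, form.per-page-form")
    .forEach((form) => form.addEventListener("submit", () => showLoading(indicator)));

  // Facet selections also trigger a new results request.
  document
    .querySelectorAll<HTMLAnchorElement>("a.facet-value-link")
    .forEach((link) => link.addEventListener("click", () => showLoading(indicator)));

  // Hide the indicator again once the new page has rendered
  // (also covers restores from the back/forward cache).
  window.addEventListener("pageshow", () => hideLoading(indicator));
});
```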

The American Congress Digital Archives Portal Project White Paper (2022)

https://researchrepository.wvu.edu/faculty_publications/3090/ (Appendix D: usability report)

  • Author: Danielle Emerling (West Virginia University)
  • Methods: Usability Testing (20 remote/moderated tests); Questionnaires
  • Platform: American Congress Digital Archives Portal / custom Samvera (non-Hyrax) application
  • Blacklight Version: 6.x

Aspects Evaluated

  • usability of the web interface design, information flow, and information architecture
  • finding a document about a particular organization
  • finding files created by or about a particular person
  • finding files related to a particular subject

Key Findings / Recommendations

  • Overall. Most participants found the site clean, well organized, and intuitive; the search limiters (facets) and search bar were the most valued features. Overall satisfaction rating of 87%. The three tasks had success rates of 85%, 95%, and 100%.
  • Search Box & Facets. Some users didn't realize that the facets and search bar could be used together to refine their search results.
  • Sort Options. Participants noted being able to sort results by date or media type would be useful.
  • Advanced Search. Several users felt it would be useful to add one (advanced search was not available at the time of testing).
  • Search with Quotation Marks. Some users attempted searching using quotations, which yielded zero results. Recommended supporting phrase searching using quotes.
  • Stacking Facets. Some users found the ability to stack search limiters to be useful, whereas others found this feature to be cumbersome. Some suggested that search limiters should refresh for new searches.
  • Metadata on Search Results. Some users felt there was too much metadata shown for each search result, decreasing scannability. Others felt it was adequate.
  • Facet Label Jargon. Several users were confused by the naming conventions of search limiters (e.g., coverage, spatial). Review labels for clarity.

ArcLight

ArcLight Accessibility Usability Insights (2024)

  • Authors: Meg McMahon, Harvard University
  • Methods: Remote Moderated Usability Testing (n=1)
  • Platform: University of Michigan ArcLight Instance 1.x

Aspects Evaluated

  • screenreader evaluation of the platform

Key Findings / Recommendations

High-Priority Findings

  • Overall Accessibility and Structure. The participant found the ArcLight system highly accessible and easy to navigate due to its strong heading structure and logical layout.
    • Recommendation: Maintain and expand use of well-structured headings to support screenreader navigation.
  • Request Workflow. The process of locating and using the request button was clear and seamless for the participant.
    • Recommendation: Retain current placement and labeling of request buttons.
  • Breadcrumb Navigation. Breadcrumbs proved essential for efficient navigation within deep collection structures.
    • Recommendation: Continue use of breadcrumb navigation for orientation and wayfinding.
  • Online Content Identification. The participant was able to reliably identify and access online materials once familiar with the context.
    • Recommendation: Retain clear markers for online content but address language inconsistencies (see Low-Priority Findings).

Medium-Priority Findings

  • Missing Status Updates After Filtering or Navigation. After applying filters or selecting series links, the participant expected a status announcement confirming their action—none was provided.
    • Recommendation: Introduce ARIA live region announcements to confirm changes in filtered views or navigation states (see the sketch after this list).
  • Focus Jumping to Bottom on Page Reloads. Selecting a series or item within a collection redirected focus to the bottom of the page rather than the top, disorienting the participant.
    • Recommendation: Ensure page reloads place focus at the top of the new content for screenreader users.

Low-Priority Findings

  • Search Result Highlights Read Aloud. The screenreader announced highlight text in results, which was perceived as extraneous and unhelpful.
    • Recommendation: Mark highlight elements as decorative using aria-hidden="true" or similar.
  • Redundant Dropdown Language. The first dropdown filter option (e.g., language) was read aloud multiple times unnecessarily.
    • Recommendation: Streamline ARIA labels to prevent duplication.
  • Ambiguous Online Content Language. The term “graphic online” was confusing and inaccurate as a label for online availability.
    • Recommendation: Replace with more descriptive language like “Available Online” or “Digital Access.”
  • Confusion Around Online Access vs. Restrictions. The participant was unclear whether items labeled as online required librarian approval, due to adjacent restriction messaging.
    • Recommendation: Clarify when access is immediate versus restricted; align restriction messages with digital access status.
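
The ARIA live-region, focus, and highlight recommendations above could be prototyped roughly as sketched below. The ids and class names (status-region, main-content, .search-highlight) are hypothetical stand-ins for whatever ArcLight actually renders; treat this as an illustration of the approach, not the implementation.

```typescript
// Rough sketch of the screenreader fixes recommended above.
// All ids and class names are hypothetical placeholders.

// 1. Status updates: write messages into a polite ARIA live region, e.g.
//    <div id="status-region" role="status" aria-live="polite"></div>
function announce(message: string): void {
  const region = document.getElementById("status-region");
  if (region !== null) {
    region.textContent = message;
  }
}

// 2. Focus management: after a filter or series link loads new content,
//    move focus to the top of that content rather than the bottom of the page.
function focusTopOfContent(): void {
  const main = document.getElementById("main-content");
  if (main !== null) {
    main.setAttribute("tabindex", "-1"); // make the container focusable via script
    main.focus();
  }
}

// 3. Decorative highlights: hide search-term highlight wrappers from
//    assistive technology, per the report's aria-hidden recommendation.
function markHighlightsDecorative(): void {
  document
    .querySelectorAll<HTMLElement>(".search-highlight")
    .forEach((el) => el.setAttribute("aria-hidden", "true"));
}

// Example wiring once a filtered result set has rendered.
function onResultsRendered(resultCount: number): void {
  announce(`Filter applied; ${resultCount} results shown.`);
  focusTopOfContent();
  markHighlightsDecorative();
}
```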

ArcLight Finding Aid Presentation Usability Insights (2023)

  • Authors: Meg McMahon, Harvard University
  • Methods: Remote Moderated Usability Testing (n=14)
  • Platform: Duke University and University of Michigan ArcLight Instances 1.x

Aspects Evaluated

  • default presentation of search results
  • collection-level metadata and content pages
  • digital material identification and iconography
  • navigation (side and main)
  • restriction messaging
  • box/folder visibility
  • special collections request workflows
  • PDF finding aid usage

Key Findings / Recommendations

  • Grouped by Collection (Search Results). Participants across institutions preferred the “Grouped by Collection” default for search results. Although it took a moment to interpret, it aligned with researchers' expectations and improved content scannability.
    • Recommendation: Use "Grouped by Collection" as the default results view.
  • Collection Pages. Participants appreciated the breadth of information provided at the collection level, particularly summaries, citations, subject tags, library location, content warnings, and researcher guidance.
    • Recommendation: Maintain a rich collection overview with citation and access context prominently featured.
  • Identification of Digital Materials. Participants welcomed digital content but struggled to identify it when iconography was unclear, particularly when icons overlapped (e.g., “digital” + “restricted”) or terms like “selected digital content” were used.
    • Recommendation: Use clear, distinct labeling for digital content. Avoid ambiguous phrases and conflicting icons.
  • Iconography Confusion. The use of icons (especially restriction icons) was consistently confusing across both implementations. Even with hover text, many icons were misinterpreted.
    • Recommendation: Redesign icons with more apparent visual meaning and consider brief, inline text for critical labels, such as “Restricted.”
  • Restriction Messaging. Messaging regarding restricted content, particularly digital items, was often vague or inconsistent. Participants were unsure what was restricted or how access worked.
    • Recommendation: Clarify restriction messaging and show it clearly at both the collection and item levels.
  • Box and Folder Visibility. Users expressed frustration when box/folder information was deeply buried within the interface. When this information was visible, it added valuable clarity.
    • Recommendation: Prioritize box/folder data earlier in the finding aid experience to support better scannability and citation.
  • Navigation Complexity. Layered navigation, mixed symbols, and inconsistent linking behavior caused confusion among users. Phrases like “four entries” were often misread as limiting rather than structural.
    • Recommendation: Simplify navigation and align language with the archival hierarchy. Keep the side navigation and breadcrumbs, but clarify the levels and structure.
    • “I thought there were only four items, not four subseries.” — UM Library Staff.
  • Special Collections Request Workflow. Participants expected a request workflow similar to e-commerce experiences, with explicit confirmation, item-to-box mapping, and success states.
    • Recommendation: Implement cart feedback to clarify what is being requested, especially when selecting individual items within a larger container.
  • PDF Finding Aid Usage. Participants used PDF finding aids not as replacements, but as supplemental tools, particularly for offline use or personal note-taking.
    • Recommendation: Offer downloadable PDFs where possible and recognize their value in user workflows.

ArcLight remote usability test (2020)

https://dukespace.lib.duke.edu/items/ce8380b4-757c-4df6-8383-15ce64e6efdc

  • Authors: Candice Wang, Anh Nguyen, Joyce Chapman (Duke)
  • Methods: Usability Testing (10 remote/moderated tests); Questionnaire
  • Platform: ArcLight 0.x with custom layout / Duke University Archives & Manuscripts
  • Blacklight Version: 7.5.0

Aspects Evaluated

  • general page layout intuitiveness (search results, collection pages, component pages)
  • requesting materials from a specific box
  • “search within collection” vs. “search across collections” scopes
  • component context / breadcrumb trail
  • hierarchical nav tree vs. contents list
  • locating digital content

Key Findings / Recommendations

  • Page Layout. Mostly intuitive in funneling users to what they need to see; users appreciated having multiple ways to search for and reach the same material. 100% success rate on a task of finding a given collection using the main search box and locating relevant components within it.
  • Requesting a Box. Only 30% success rate overall. Counterintuitive and confusing workflow: view a container list in the ArcLight UI > click Request (leaving ArcLight for a separate request system) > read the list of the collection's requestable containers and remember which one was relevant. The Request action should lead directly to the relevant container; otherwise the request system UI should list the container contents.
  • Search Box Scope. Almost all users successfully changed the search scope dropdown from “this collection” to “all collections” when beginning on a component page and tasked with searching for materials in other collections.
  • Component Context. Users all used the breadcrumb trail to locate parent collection context for a component. Some confusion noted over archival terminology ("collection" and "series") even among experienced users.
  • Sidebar Nav Tree. When tasked with finding a different component within the same collection, 80% of novice users interacted with the tree unprompted. Yet 100% of experienced users opted first for the search box and only used the tree when prompted. Many users struggled to recognize the “show more” expander in the tree.
  • Locating Digital Content. 100% task success rate, but via different strategies. 60% used an Advanced Search filter; 40% used the Online Access facet, which was the final facet at the bottom. Recommended giving this facet more prominence.

GeoBlacklight

GeoBlacklight Usability Insights (2023)

  • Authors: Meg McMahon, Harvard University
  • Methods: Remote Moderated Usability Testing (n=9)
  • Platform: GeoBlacklight 3.x, branded for Harvard University

Aspects Evaluated

  • homepage design and data group selection
  • leaflet map on the search results page
  • resource metadata (e.g., dates and authority)

Key Findings / Recommendations

  • Homepage Data Grouping. Participants were confused by the four boxes used to pre-filter search results. They questioned why these groups were chosen and what the categories represented. It is recommended to add brief explainer text describing the significance of the chosen groups.
  • Leaflet Map (Search Results). The map was initially confusing but proved helpful once explored; participants weren’t sure how it contributed to their search. It is recommended to provide brief guidance text near the map, describing its functionality and how it can be used to navigate search results.
  • Authority of Sources. Participants, especially advanced researchers, wanted more transparency around the origin of maps and datasets, including who created them and any associated institutions. It is recommended to add citation trails or bibliographic metadata to convey provenance and reduce uncertainty about bias or credibility.
    • “I'm not exactly sure who this George Long guy is... but I think, having a trail of bibliography would help me understand that a little bit better.” — Archaeology PhD Candidate.
  • Grouping Similar Maps. Participants noticed that similar maps appeared multiple times in search results for different years. It is recommended to consider grouping identical maps across years under a single search result with a selectable year range to streamline browsing and improve search clarity.
  • Date Field Metadata. Users were often unsure whether the year shown referred to the map’s date of creation or its publication date, which caused hesitation in evaluating the map’s relevance. It is recommended to be explicit about what the date represents; consider adding tooltips or a label such as “Date Depicted” or “Date Published.”
    • “I guess one thing I'm confused about is that it says the publishing year but... maybe this map is from the thirties, but it wasn't published until the forties.” — Undergraduate Student in Economics.
  • Date Filtering. Single-value date filters were limiting. Participants expressed interest in filtering results by a range of dates. It is recommended to implement a date range facet to support more flexible and meaningful exploration of time-sensitive materials.

Usability Analysis of the Big Ten Academic Alliance Geoportal: Findings and Recommendations for Improvement of the User Experience (2017)

https://journal.code4lib.org/articles/12932

  • Authors: Mara Blake (Johns Hopkins), Karen Majewicz (Univ. of Minnesota), Amanda Tickner (Michigan State), Jason Lam (Univ. of Michigan)
  • Methods: Usability Testing (16 in-person/moderated); Heuristic Evaluation; Web Analytics
  • Platform: GeoBlacklight / BTAA Geoportal

Aspects Evaluated

  • Search strategies: relative usage of main search box, facets, and map inset
  • Heuristic evaluation categories: features/functionality; homepage; navigation; search; content/text; help; performance

Key Findings / Recommendations

  • Keyword Search. The most problematic feature, as searches often did not return expected results (especially after typing a place name or phrase that did not exist in any metadata record). Web analytics confirmed the vast majority of text searches were place-name keywords. Normalize element values for keywords, place names, and authors/publishers. Add synonym files to Solr to facilitate discovery (see the example after this list).
  • Failed Search / No Results Page. Add more info/suggestions for what else to try.
  • Facet Order. The first three facets were expanded by default; users rarely clicked on any others. They found Institution and Collection facet labels confusing. Emphasize the most useful facets (Place, Data type, Subject, and Time period) by putting them first, and set them to display as expanded by default.
  • Year Facet. Users found it difficult to use when sorted by numeric count (default) instead of chronological order. Recommended a Time Period facet instead.
  • Subject Facet. Users found the facet helpful for exploring content; however, values were not normalized for capitalization or consistency, so similar terms appeared multiple times.
  • Applied Constraints. Many users did not notice that they could combine facets to narrow the items returned; they also did not realize that as they clicked facets, the system "remembered" their selections, which could cause subsequent searches to fail. Many did not notice the navigation options for Back to search and Start over. Make the options in this section clearer.
  • Results per Page. Many users expressed desire for more than ten results per page; set default to twenty.
  • Map Inset. Users did not recognize it could be used as a search tool; low usage confirmed via web analytics.
  • Item Page. Provide a consistent download button (or a clear substitute when direct download is unavailable). Add text indicating what is shown in the preview box. Include thumbnails for scanned maps. Add support for supplementary documentation (attribute table, codebook, data dictionary, etc.).
  • Search Strategies. Level of GIS experience appeared to have little to no effect on item discovery. Instead, success could be predicted based upon the level of experience in research, libraries, or even online shopping.
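
The Solr synonym recommendation works by pointing a synonym filter (e.g., SynonymGraphFilterFactory) in the relevant field type's analyzer at a plain-text mapping file, conventionally named synonyms.txt. The entries below illustrate the file format only; they are not values drawn from the BTAA Geoportal.

```
# synonyms.txt (illustrative entries only, not BTAA Geoportal data)
# Comma-separated terms are treated as equivalent:
USA, United States, United States of America
# "=>" maps the left-hand terms onto the right-hand replacement(s):
NYC => New York City
```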