
Export data

Britta Gustafson edited this page Aug 14, 2024 · 2 revisions

This page is mainly intended for team members who want to learn about ways to work with our content, although it could also be of interest to anyone reusing the codebase for a new project.

JSON API

Go to /api/swagger/ to see our API documentation. It lists the endpoints that our developers use to pull information into our frontend website. The endpoints return data in JSON format.

APIs for retrieving non-sensitive information (such as regulation text and metadata about Federal Register documents) do not require authentication/authorization.
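For example, a public read endpoint can be fetched with nothing but the standard library and no auth headers. This is only a sketch: the base URL, endpoint path, and response shape below are made-up placeholders, so check /api/swagger/ for the real ones.

```python
# Minimal sketch of pulling JSON from an unauthenticated read endpoint.
# BASE_URL and the response shape are hypothetical placeholders.
import json
import urllib.request

BASE_URL = "https://example.com"  # placeholder host, not the real site

def fetch_json(path: str) -> object:
    """GET a JSON endpoint; public data needs no auth headers."""
    with urllib.request.urlopen(BASE_URL + path) as resp:
        return json.load(resp)

# Offline illustration of working with a (made-up) response payload:
sample = json.loads('[{"title": 42, "part": 433, "subpart": "A"}]')
parts = {(item["title"], item["part"]) for item in sample}
print(parts)  # {(42, 433)}
```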

APIs for modifying information or retrieving internal information (such as uploaded files) are controlled by Django authorization management, which is hooked into CMS SSO authentication.

XML sitemap

Go to /sitemap.xml to see a list of regulation pages in the site. We don't use this for anything right now. Note: As of February 2024, the sitemap only includes regulation subparts from one title, but we now support multiple titles, so it is missing the subparts from the others.
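A sitemap is plain XML in the standard sitemaps.org schema, so it can be read with the standard library. The sample below is invented for illustration; the real file lives at /sitemap.xml.

```python
# Sketch of extracting page URLs from a sitemap. The sample XML is
# made up; in practice you would fetch /sitemap.xml from the site.
import xml.etree.ElementTree as ET

NS = "{http://www.sitemaps.org/schemas/sitemap/0.9}"

sample = """<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url><loc>https://example.com/42/433/Subpart-A/</loc></url>
  <url><loc>https://example.com/42/433/Subpart-B/</loc></url>
</urlset>"""

root = ET.fromstring(sample)
locs = [url.findtext(f"{NS}loc") for url in root.findall(f"{NS}url")]
print(locs)
```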

RSS (retired)

We previously had an RSS feed with all of our links to public documents, including Federal Register documents, at /latest/feed/.

We built this to tell Search.gov about the public content we wanted them to automatically index, using their supplemental content feature. We stopped using Search.gov, and our user audience doesn't use RSS feeds, so when we refactored some related features, we removed this instead of updating it.

Export to CSV (retired)

We previously had a /resources/ page with a Download CSV button. The goal was to enable users to do their own advanced searching, sorting, filtering, and annotation. We retired the resources page but would like to rebuild this button later.

How it worked

If you clicked that button, it downloaded metadata about all supplemental content and Federal Register documents into a CSV, one row for each document. You could then open that CSV in Excel or Google Sheets. However, special characters such as em dashes and curly quotes displayed in a corrupted way in the CSV.
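One common cause of that corruption is Excel misreading BOM-less UTF-8 as Windows-1252. We haven't confirmed that was the cause here, but if we rebuild the export, writing the file with Python's "utf-8-sig" codec is a likely fix, since the byte-order mark tells Excel the file is UTF-8:

```python
# Sketch: write CSV bytes with a UTF-8 BOM so Excel decodes special
# characters (em dashes, curly quotes) correctly. Sample rows are invented.
import csv
import io

rows = [["Name", "URL"],
        ["State Medicaid Manual — Part 2", "https://example.com/doc"]]

buf = io.StringIO()
csv.writer(buf).writerows(rows)
data = buf.getvalue().encode("utf-8-sig")  # prepends the BOM

print(data[:3])  # the UTF-8 byte-order mark: b'\xef\xbb\xbf'
```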

Examples of things you could do

  • In Excel, go to "advanced search" and search for a file extension, like .doc (which catches both .doc and .docx), to find out how many Word doc links we have.
  • Sort the URL column A-Z, then scroll up and down to get a quick sense of how many items we have from websites other than Medicaid.gov.
  • Turn on filters, select the Name field, and uncheck "null". Click the column header and look at the count (in the bottom status bar) to find out how many items we have with names.
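The same spot checks could also be scripted over the exported CSV instead of done in a spreadsheet. The rows below are invented stand-ins for the real export, and the column names are assumptions:

```python
# Sketch of the three spreadsheet checks above, in Python:
# count Word doc links, count off-Medicaid.gov links, count named items.
import csv
import io

sample_csv = """Name,URL
SMD Letter,https://www.medicaid.gov/files/smd123.pdf
,https://example.com/guide.docx
Manual Chapter,https://www.medicaid.gov/files/ch2.doc
"""

rows = list(csv.DictReader(io.StringIO(sample_csv)))

word_docs = [r for r in rows if ".doc" in r["URL"]]   # catches .doc and .docx
off_site = [r for r in rows if "medicaid.gov" not in r["URL"]]
named = [r for r in rows if r["Name"]]                # non-null names

print(len(word_docs), len(off_site), len(named))  # 2 1 2
```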

Import data (retired)

We built Import JSON and Import CSV options in our admin panel for importing supplemental content metadata in bulk, but they were rarely used and hard to maintain, so we removed them.
