Catalog Postgres Exploration Effort

On May 22nd at 1500 UTC several Openverse members will be gathering for a meeting to discuss and engage in some exploratory operations on a copy of the catalog database, to get a better sense of how Postgres performs given our current scale & purpose. This was spurred on by this discussion on the decoupling popularity calculations IP; I personally had a hunch about how certain Postgres changes would affect our query times, performance, etc., but based on the level of unpredictable behavior we’ve seen with the matview refresh, few of us feel confident about being able to predict outcomes with Postgres. We thought it might be useful to spend some time performing some exploratory operations & optimizations with Postgres!

Some things I think might be useful to try:

  • Reducing the width of the core media table
  • Performing some normalization steps (e.g. pulling provider out into a separate table)
  • Adding an index on the provider column (see the SQL sketch after this list)
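
To make the normalization and index ideas concrete, here is a rough, hypothetical sketch of what they might look like in SQL. The image table and provider column come from our schema, but the lookup-table layout, key type, and varchar length are illustrative assumptions we'd confirm before running anything:

-- Index the provider column so provider-scoped queries and updates can avoid a
-- full sequential scan. CONCURRENTLY avoids locking out writes while the index
-- builds (and cannot run inside a transaction block).
CREATE INDEX CONCURRENTLY image_provider_idx ON image (provider);

-- Pull provider out into a small lookup table and reference it by a narrow
-- integer key instead of repeating the text value on every row.
CREATE TABLE provider (
    id   serial PRIMARY KEY,
    name varchar(80) UNIQUE NOT NULL
);
ALTER TABLE image ADD COLUMN provider_id integer REFERENCES provider (id);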

Some ways that we could assess the performance changes of the above:

  • Have sample sets of data to use for upsert tests, one small (~1k rows) and one large (~100k rows)
  • Develop some sample updates we might perform (e.g. the thumbnail column update we tried to perform for Flickr, which was projected to take an extremely long time)
  • Time a matview refresh or recreation (see the timing sketch after this list)
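
As a rough sketch of how those measurements might look in psql (the update below is a stand-in for the Flickr thumbnail update, and image_view is the matview name used elsewhere; both are assumptions to verify against the real instance):

\timing on

-- Measure a provider-scoped update similar to the Flickr thumbnail one.
-- EXPLAIN ANALYZE actually executes the statement, so wrap it in a transaction
-- and roll back to leave the snapshot data untouched.
BEGIN;
EXPLAIN ANALYZE
UPDATE image
SET thumbnail = NULL
WHERE provider = 'flickr';
ROLLBACK;

-- Time a full refresh of the materialized view for before/after comparisons.
REFRESH MATERIALIZED VIEW image_view;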

Our plan is to create a new catalog DB instance from a snapshot and perform all of the above against that instance. @olgabulat and @stacimc have confirmed they’ll join the meeting, and Staci and I will continue to work on things together throughout the day while we’re co-located.

If folks have any suggestions for other operations or queries for us to test, we’d love to hear them! Hopefully we can come away from this with a better sense of “what will happen” when we change the catalog schema in certain ways 😁

#catalog #postgres

Preparing the next migration of the Catalog

As we have fixed and reactivated data ingestion via more provider scripts, some shortcomings in the data model have been uncovered. Today I went through the backlog of the Catalog’s issues and noticed we had accumulated a small pile of problems that seem to need to be addressed primarily at the database level.

For some, the discussion has already begun in other posts, but I wanted to gather everything I have noticed in one place so we can discuss whether to do all the changes together, break them into more manageable chunks, or find alternative solutions. These are disruptive, breaking changes, given they alter the structure of the TSV files (a key part of the ingestion process), and they need to be implemented carefully, so they are usually treated as a last resort and postponed until it is confirmed that they are necessary.

Columns to modify or add

  • filesize : change type to bigint
  • duration : change type to bigint
  • mature : add a boolean column
  • description : add a text or varchar(n) column (rough DDL for these changes is sketched below)
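
In rough DDL (the audio/image table names and exact types here are assumptions to confirm against the real schema), the changes above would look something like this:

-- int -> bigint is not binary-compatible, so these force a full table rewrite;
-- they won't be cheap at our scale.
ALTER TABLE audio ALTER COLUMN filesize TYPE bigint;
ALTER TABLE audio ALTER COLUMN duration TYPE bigint;

-- New columns, repeated for each media table that needs them:
ALTER TABLE image ADD COLUMN mature boolean;
ALTER TABLE image ADD COLUMN description varchar(2000);  -- or text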

Issues

The need for these changes comes from this list of issues, which could be consolidated into a milestone:

Also, consider the following:

Previous related work

We have added columns in the past (see the PR for including the category in the image table), and we plan to add more media types next year, so this could serve as an opportunity to refine the process and make it simpler.

Finally, this is a call to start thinking about this. There is no hurry at the moment, but the need is becoming more and more evident. What do you think? Do you see it as viable? When could we start this endeavor?

#data-normalization, #postgres

Handling very large (>2GB) files

As we gear up to tackle the Provider DAG Stability milestone, @stacimc and I were looking over the existing issues and came across this one: Filesize exceeds postgres integer column maximum size. The specific details can be seen in the linked issue, but the summary is that we were trying to ingest a Wikimedia record which referenced an audio clip that was over 8 hours in length. The filesize for this record exceeded the maximum value allowed by a Postgres integer column and broke the ingestion.
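
For context, Postgres’s integer type is 32 bits wide, so the largest value it can store is 2,147,483,647, which is just under 2 GiB when the value is a byte count; anything larger errors out:

SELECT 2147483647::integer;  -- largest value an integer column can hold
SELECT 2147483648::integer;  -- ERROR:  integer out of range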

After discussing this, we came up with a few options:

  1. Reject any records that exceed this 2GB size limit at the MediaStore level. It seems unlikely that users would want audio records this large, especially considering that we don’t make any distinction on length beyond “> 10 minutes” in the search filters.
  2. Set values greater than this column maximum to NULL. Records that exceed that size will not have filesize information stored, but all other information will be available.
  3. Alter the column to use a Postgres bigint type. This will require migrations on both the catalog and the API, and could be extremely cumbersome.

Of the 3 options above, Staci and I thought that the first would be most appropriate and easiest to execute. It also wouldn’t preclude us from accepting larger file sizes in the future, should we wish to take a different approach to including them in the catalog.

What do folks think, does that seem like a suitable next step? Are there other alternatives we haven’t considered that might be worth pursuing?

#data-cleaning, #postgres

Storing provider supplied thumbnails

Note: This is adapted from a conversation that happened on the Make WP Slack (see the link for more context).

Our current situation regarding thumbnails

Thumbnails used to be collected from providers and put in the thumbnail column in the catalog. We still have data there, and the column has not been dropped. Since this column also exists in the API, the data from it (for both audio and image) gets copied into the thumbnail column there as well. However, only the Audio model uses the thumbnail column; the Image model ignores it and tries to create a scaled-down thumbnail using the primary image URL. The MediaStore class used by the provider scripts has been modified so that no new thumbnail URLs can be added to the thumbnail column, but they can be added to the metadata under meta_data["thumbnail_url"].

We have run into scenarios recently where our own attempts to create thumbnails from the full-size image URLs have caused consistent timeouts and prevented the records from being shown in search results. SMK is a recent example of this: (1) (2). It may be advantageous to use the provider’s pre-computed thumbnails in these cases. Crucially, though, not all providers supply (or are expected to supply) thumbnails. As such, it seems like we may want to reduce the width of our tables by removing this frequently-empty column.

Proposal

For image: we first go through and copy all of the existing thumbnail values in the catalog into meta_data["thumbnail_url"]. Then we perform a new-table-without-column-swap and remove the thumbnail column from the table. This will help conserve space and reduce the width of the table. Then we update the image_view materialized view to populate a thumbnail column for the API using the meta_data dictionary. An example of this is in the Audio view’s audio_set_foreign_identifier. This will allow us to keep the thumbnail column in the API but reduce space on the catalog, since a migration of that size on the API would likely be untenable at the moment. (A rough SQL sketch of the copy and view changes follows the numbered list below.) We’ll then make two changes to the logic on both ends:

  1. Add thumbnail_url handling logic in the ImageStore which will put that value in the meta_data dictionary automatically
  2. Change the Image model’s logic on the API to first try to use image.thumbnail (which has been copied from meta_data["thumbnail_url"] during the data refresh), then fall back to image.url when creating the scaled-down image
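
As noted above, a very rough sketch of the catalog-side copy and the view change (this assumes meta_data is a jsonb column; names and details would need to be verified) could look like this:

-- Copy existing thumbnail values into meta_data["thumbnail_url"].
-- coalesce() guards against rows where meta_data is NULL, since jsonb_set
-- returns NULL when its target is NULL.
UPDATE image
SET meta_data = jsonb_set(
        coalesce(meta_data, '{}'::jsonb),
        '{thumbnail_url}',
        to_jsonb(thumbnail))
WHERE thumbnail IS NOT NULL;

-- In the image_view definition, the API-facing thumbnail column could then be
-- derived from meta_data instead of a dedicated catalog column:
--   meta_data ->> 'thumbnail_url' AS thumbnail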

We could potentially do the same for Audio; it looks like about half of the records do not have thumbnails:

deploy@localhost:openledger> select count(*) from audio where thumbnail is null;
+--------+
| count  |
|--------|
| 427124 |
+--------+
SELECT 1
Time: 4.354s (4 seconds), executed in: 4.328s (4 seconds)
deploy@localhost:openledger> select count(*) from audio;
+--------+
| count  |
|--------|
| 949384 |
+--------+
SELECT 1
Time: 3.899s (3 seconds), executed in: 3.887s (3 seconds)
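
For what it’s worth, the same numbers could be gathered in a single scan with a filtered aggregate, e.g.:

SELECT count(*) FILTER (WHERE thumbnail IS NULL) AS missing_thumbnail,
       count(*) AS total
FROM audio;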

What do folks think about this approach? I think any effort to reduce the width of our DB tables is an important one, as it will make migrations & data management easier down the line.

#data-normalization, #postgres, #thumbnails