A global sportswear leader approached us recently with a huge archive that contained millions of records. We’re talking about decades of products, museum exhibits, and valuable creative history. This archive was not simply a storage system. It was a living resource that supported storytellers, warehouse teams, and designers around the world.
But the archive was hard to manage.
When “it still works” isn’t good enough
For years, the archive lived on a legacy system that the team could only access by logging into a remote desktop server. It required specialized knowledge and patience to operate. Routine updates started to feel risky.
Over time, the team developed creative workarounds to keep things moving. But as the archive (and the team managing it) grew, the strain began to show. The more the archive expanded, the more responsibility fell on that small group, creating a bottleneck that slowed the pace at which new stories could enter the system.
So management decided to find new collections management software and, after a few false starts, chose MuseumPlus.
The selection of MuseumPlus as the archive’s new collections management platform was a good first step in making everyone’s life easier. It made the data cleaner simply by making it easier to edit. Workflows expanded quickly, finally allowing other team members to join the effort. Language in the archive became standardized, a much-needed improvement on decades of human-entered data.
But a new problem presented itself. Many of the integrations built around the previous system contained workarounds and quick fixes for large-scale data problems. By making the data clearer to the user, the migration had exposed a fragile connection point between tags, categories, and the general flow of data to the website. Even worse, the tool the previous team had built on had been discontinued by the corporation.
Choosing reinforcement over reinvention
When facing a question this large – should we completely rebuild the data pipeline with all-new tables? – it’s tempting to throw the old conventions out and start fresh. There was new collections management software in place, and the old tool couldn’t be used anyway. So what value was there in keeping any of the old code alive?
The foundation was solid. It’s as simple as that. The previous team had spent years working on, improving, and sculpting the connections between the archive’s data and how it was displayed on the website. The search mechanic alone was impressive enough to justify keeping the data structure largely intact after migrating to the new collections manager – it elegantly handled tags and categories, product IDs, and the special naming conventions the corporation had used for years.
Reinforcement became the goal – honor the previous system while adding modern tooling to help it scale to a wider archive staff and purpose.
Designing a sync built for scale
Before making any technical decisions, we worked closely with the client’s internal teams and their MuseumPlus vendor to understand how the platform structured data and how various teams relied on it in their daily work. Mapping that landscape was essential. At this scale, assumptions are expensive.
Instead of stretching the existing integration one step further, we built a purpose-designed connection using Hatchet, a workflow engine that provides retries, structured logging, monitoring, and guardrails by default. This gave us a framework for syncing millions of records between MuseumPlus and the CMS in a way that was observable, predictable, and resilient to change.
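Hatchet provides retries, structured logging, and monitoring out of the box. As a rough illustration of the pattern – plain Python, not the Hatchet SDK itself – a sync step guarded by retries with exponential backoff and structured log events might look like this:

```python
import json
import logging
import time

logging.basicConfig(level=logging.INFO, format="%(message)s")
log = logging.getLogger("sync")

def with_retries(fn, max_attempts=3, base_delay=0.1):
    """Run fn, retrying with exponential backoff on failure and
    emitting a structured log event for each failed attempt --
    the kind of guardrail a workflow engine provides by default."""
    for attempt in range(1, max_attempts + 1):
        try:
            return fn()
        except Exception as exc:
            log.info(json.dumps({
                "event": "step_failed",
                "attempt": attempt,
                "error": str(exc),
            }))
            if attempt == max_attempts:
                raise
            time.sleep(base_delay * 2 ** (attempt - 1))

# Usage: a flaky fetch (hypothetical) that succeeds on the third try.
calls = {"n": 0}

def flaky_fetch():
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("timeout")
    return {"records": 42}

result = with_retries(flaky_fetch)
```

The structured (JSON) log events are what make the downstream alerting possible: a monitor can count `step_failed` events per step rather than grepping free-form text.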
We introduced validation steps and fallback rules to catch errors early. Structured logging was implemented not just for traceability, but to power alerts and automated retries that could resolve many issues before they reached the website’s highly curated displays. This reduced manual cleanup, minimized data drift, and protected the archive from silent failures.
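As a minimal sketch of the validation-with-fallback idea – field names and the default value here are hypothetical, not the client’s real schema – each incoming record can be checked before it ever reaches the website:

```python
REQUIRED_FIELDS = {"object_id", "title", "category"}
DEFAULT_CATEGORY = "uncategorized"  # hypothetical fallback value

def validate_record(record):
    """Return (clean_record, errors). A missing category falls back to a
    safe default; any other missing required field is flagged so the
    workflow can retry or alert instead of publishing bad data."""
    clean = dict(record)
    if not clean.get("category"):
        clean["category"] = DEFAULT_CATEGORY
    errors = [f"missing {field}" for field in sorted(REQUIRED_FIELDS)
              if not clean.get(field)]
    return clean, errors

# Usage: a record with no category gets the fallback and passes cleanly.
record, errors = validate_record({"object_id": "A-100", "title": "1998 trainer"})
```

Records that come back with a non-empty error list never reach the curated displays; they are logged and queued for retry or human review instead.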
In addition to record syncing, we created workflows to manage digital assets. This helped to ensure that the original files, thumbnails, and related media stored in AWS S3 were consistently synced and displayed across the web experience. It may sound simple, but with nearly 65 years of history in the database, it’s not always easy to track down one misaligned image! So it had to be right.
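Tracking down one misaligned image among decades of assets comes down to a consistency check: compare the keys each record expects against what actually exists in storage (for example, a key listing pulled from S3). A sketch, with illustrative field names rather than the real schema:

```python
def find_misaligned_assets(records, stored_keys):
    """Return {object_id: [missing keys]} for every record whose
    expected original or thumbnail is absent from storage."""
    stored = set(stored_keys)
    missing = {}
    for rec in records:
        expected = {rec["original_key"], rec["thumbnail_key"]}
        lost = expected - stored
        if lost:
            missing[rec["object_id"]] = sorted(lost)
    return missing

# Usage: one record's thumbnail is missing from the storage listing.
records = [
    {"object_id": "A-100", "original_key": "orig/a-100.tif",
     "thumbnail_key": "thumbs/a-100.jpg"},
    {"object_id": "B-200", "original_key": "orig/b-200.tif",
     "thumbnail_key": "thumbs/b-200.jpg"},
]
stored_keys = ["orig/a-100.tif", "thumbs/a-100.jpg", "orig/b-200.tif"]
misaligned = find_misaligned_assets(records, stored_keys)
```

Run on a schedule inside the workflow engine, a check like this turns “someone noticed a broken image” into a report that names the exact records to fix.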
What changed
With Hatchet in place, staff began to see their changes reflected on the website on a dependable hourly cadence. The system became easy to manage – no more logging into a remote desktop to make a small spelling change to an item. The team could add or remove items themselves, in their own browsers, whenever they wanted.
The weight of risk was reduced. Specialists began training the larger team to share the burden of keeping items in the archive updated and clean. The gap between the digital and physical archive shrank. Workarounds fell away.
The team gained confidence in the systems.
The larger pattern
This project reinforced a principle we often see: not every mature system needs to be replaced to move forward.
In this case, the archive’s data retained enormous value. The opportunity was in modernizing the connection around it. By replacing a discontinued and fragile sync tool with an integration built for scale and visibility, the team preserved decades of curated history while removing the friction that had slowed their progress.
Modernization does not always mean reinvention. Sometimes it means strengthening the link so that what already works can continue evolving.