Steps to reproduce:
1. Use an external script to feed many (> 10,000) recording schedules into the system.
The first few schedules are added quickly, but throughput degrades as more recordings are added, eventually leaving minutes between two consecutive recordings being added.
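A minimal sketch of such a feeder script, for illustration only: the endpoint URL, payload shape, and `submit` callable are assumptions, not the actual Matterhorn REST API. The point is that per-request latency (`timings[i]`) grows with `i`.

```python
import time
from datetime import datetime, timedelta

# Hypothetical endpoint; the real Matterhorn scheduler URL will differ.
SCHEDULER_ENDPOINT = "http://localhost:8080/recordings"

def build_schedule(i):
    """Build one illustrative scheduling payload (shape is an assumption)."""
    start = datetime(2014, 1, 1) + timedelta(hours=i)
    return {
        "title": "Recording %d" % i,
        "device": "capture-agent-1",
        "start": start.isoformat(),
        "end": (start + timedelta(minutes=60)).isoformat(),
    }

def feed(n, submit):
    """Submit n schedules via `submit` (e.g. an HTTP POST helper),
    returning the wall-clock time each submission took."""
    timings = []
    for i in range(n):
        t0 = time.time()
        submit(SCHEDULER_ENDPOINT, build_schedule(i))
        timings.append(time.time() - t0)
    return timings
```

Against a real deployment, plotting `timings` reproduces the symptom: the later entries take minutes instead of milliseconds.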
Right now, the most promising approach seems to be not sending all of the data to the Matterhorn scheduler.
The problem is the link between the scheduling database and the workflow service (and the service registry), which currently relies on the identifiers being the same (scheduled event id, workflow id, start_workflow job id).
As long as the workflow service and the service registry don't allow specifying the id when creating workflows and jobs, we can't start a workflow after the fact.
A different approach would be to add a new column to the scheduled events table that references the workflow.
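A sketch of that decoupling, using an in-memory SQLite database; the table and column names are illustrative, not Matterhorn's actual schema. With an explicit `workflow_id` column, the workflow can be created later with whatever id the workflow service assigns, and linked back afterwards:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# Illustrative stand-in for the scheduled events table.
conn.execute("CREATE TABLE scheduled_event (id INTEGER PRIMARY KEY, title TEXT)")
# Proposed change: an explicit reference to the workflow, instead of
# relying on the event id, workflow id, and job id being identical.
conn.execute("ALTER TABLE scheduled_event ADD COLUMN workflow_id INTEGER")

conn.execute("INSERT INTO scheduled_event (id, title) VALUES (1, 'Lecture')")

# Later, the workflow service creates the workflow and assigns its own id:
assigned_workflow_id = 42  # hypothetical id chosen by the workflow service
conn.execute("UPDATE scheduled_event SET workflow_id = ? WHERE id = 1",
             (assigned_workflow_id,))

row = conn.execute(
    "SELECT workflow_id FROM scheduled_event WHERE id = 1").fetchone()
```

This removes the requirement that the scheduled event id, workflow id, and start_workflow job id all match, at the cost of a schema migration.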