Symptom
NOTE: Some of the investigation steps below require querying and modifying the database. This should only be carried out by a database administrator or by IPV Support. Please contact your local DB administrator for help.
You see the following error: "Error Getting Processes - Error on executing DbCommand"

This error message typically appears when the database process table is so full that retrieving all the entries times out. The issue was originally reported when the table contained approximately 45,000 entries, but the exact threshold at which it occurs is not yet known.
Resolution
To check this:
1) Open the Process Engine database and run the following query to check how many entries there are in the process table:
SELECT count(*) FROM processengine.process;
If the result is around 45,000 or higher, the error is likely caused by the database being unable to return all the results in a timely manner.
2) As a second check, run the simple database query:
SELECT * FROM processengine.process;
with no LIMIT set. If this query times out, the issue is the number of processes in the table.
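The two checks above can be sketched as a small script. This is a minimal sketch only: it uses SQLite as a stand-in for the real Process Engine database (swap the connection for your actual DB driver), and the table name mirrors the article's `processengine.process` table with the schema qualifier dropped; the threshold and timeout values are illustrative.

```python
import sqlite3
import time

def check_process_table(conn, warn_threshold=45_000, timeout_secs=30):
    """Run the two checks from steps 1 and 2 against the process table."""
    cur = conn.cursor()

    # Check 1: how many entries are in the process table?
    count = cur.execute("SELECT COUNT(*) FROM process").fetchone()[0]
    if count >= warn_threshold:
        print(f"WARNING: {count} processes - retrieval may time out")

    # Check 2: time a full, unlimited SELECT; a slow fetch suggests
    # the table is too large to retrieve in a timely manner.
    start = time.monotonic()
    cur.execute("SELECT * FROM process").fetchall()
    elapsed = time.monotonic() - start
    if elapsed > timeout_secs:
        print(f"Full SELECT took {elapsed:.1f}s - likely the cause")
    return count, elapsed

# Demo against an in-memory stand-in database with 100 sample rows.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE process (id INTEGER PRIMARY KEY, state TEXT)")
conn.executemany("INSERT INTO process (state) VALUES (?)",
                 [("Completed",)] * 100)
count, elapsed = check_process_table(conn)
print(count)
```

Against the real database you would only need the two SQL statements; the wrapper simply makes the threshold comparison and timing explicit.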
3) How do you clear this number down?
The best way to do this is by checking the service config for Process Engine and editing the line:
<PersistenceConfiguration maxConcurrentParentProcesses="50" purgeTimeSpan="1.00:00:00" failedPurgeTimeSpan="30.00:00:00"/>
In this example the value purgeTimeSpan is set to 1 day (the format is days.hours:minutes:seconds).
Without restarting the service, this will start removing completed processes that are more than a day old from the database.
You can change this value back to a higher one once the cleanup has completed.
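As an illustration, a temporarily lowered purge window might look like the line below. This assumes the service accepts the same days.hours:minutes:seconds format shown above ("0.01:00:00" being 1 hour); the other attribute values are kept unchanged from the example line.

```xml
<!-- Temporary value for cleanup: purge completed processes older than 1 hour -->
<PersistenceConfiguration maxConcurrentParentProcesses="50" purgeTimeSpan="0.01:00:00" failedPurgeTimeSpan="30.00:00:00"/>
```

Once the count has come down, restore your original purgeTimeSpan value.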
To monitor the purge working, either re-run the count query from step 1, or enable debug logging and watch the purge occur in the logs (if you are unsure how to carry out this action, please feel free to contact IPV Support).
NOTE: This automatic purge can take up to an hour to complete.
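The monitoring step can be sketched as a small polling loop. Again SQLite stands in for the real database as an assumption; the target, interval, and check-count parameters are illustrative (the defaults poll every minute for up to two hours, to cover the up-to-an-hour purge).

```python
import sqlite3
import time

def wait_for_purge(conn, target=45_000, interval_secs=60, max_checks=120):
    """Poll the process count until it drops below target.

    The automatic purge can take up to an hour, so by default this
    checks every minute for up to two hours before giving up.
    """
    for _ in range(max_checks):
        count = conn.execute("SELECT COUNT(*) FROM process").fetchone()[0]
        print(f"process table count: {count}")
        if count < target:
            return count
        time.sleep(interval_secs)
    raise TimeoutError("purge did not bring the count below target")

# Demo with a stand-in table that is already below the target,
# so the loop returns on the first check.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE process (id INTEGER PRIMARY KEY)")
conn.executemany("INSERT INTO process (id) VALUES (?)",
                 [(i,) for i in range(10)])
final = wait_for_purge(conn)
print(final)
```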
Alternative check and solution:
While in the Process Engine database, you can also look for tracking data accumulated by a persistent or very long-running job:
Open the Process Engine database and run the query:
SELECT COUNT(process_id), process_id FROM processengine.user_record GROUP BY process_id ORDER BY COUNT(process_id) DESC;
This returns one row per process, showing how many user records are held against each process_id, with the largest counts first.
Find the process_id that has the most records against it - this can easily reach 30,000 or more. Make a note of the id number.
These records are completely safe to remove. To do so, run the following query:
DELETE FROM processengine.user_record WHERE process_id = ****;
NOTE: where **** is the process_id number you noted down.
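The alternative check and cleanup can be sketched end to end as follows. This is a stand-in demonstration only: SQLite replaces the real Process Engine database, and the sample process ids and record counts are invented for illustration; only the table and column names come from the article.

```python
import sqlite3

# Build a stand-in user_record table where one process (101) holds
# far more tracking records than the others.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE user_record (id INTEGER PRIMARY KEY, process_id INTEGER)")
rows = [(101,)] * 50 + [(202,)] * 7 + [(303,)] * 3
conn.executemany("INSERT INTO user_record (process_id) VALUES (?)", rows)

# Find the process_id holding the most tracking records (the
# GROUP BY query from the article, ordered largest-first).
worst_id, worst_count = conn.execute(
    "SELECT process_id, COUNT(process_id) FROM user_record "
    "GROUP BY process_id ORDER BY COUNT(process_id) DESC"
).fetchone()
print(worst_id, worst_count)

# Remove those records, as in the article's DELETE statement,
# then confirm only the other processes' records remain.
conn.execute("DELETE FROM user_record WHERE process_id = ?", (worst_id,))
remaining = conn.execute("SELECT COUNT(*) FROM user_record").fetchone()[0]
print(remaining)
```

Using a parameterised `?` placeholder here plays the same role as substituting your noted-down id for **** in the article's DELETE statement.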