...
Basically, once you have 50,000 requirements in the database, expect about 20 ms per requirement on the page when you save a page, and 20 ms per requirement on view. This is not a commitment: it depends on the machine, the setup, the configuration, the database latency, and the versions of Confluence and Requirement Yogi.
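As a rough back-of-the-envelope check, the figure above can be turned into an expected overhead per page. This is a sketch using the document's own 20 ms/requirement estimate; the constant is an assumption taken from the text, not a measurement:

```python
# Rough save/view overhead estimate based on the ~20 ms/requirement figure above.
# MS_PER_REQUIREMENT is the estimate from the text (once ~50,000 requirements
# exist in the database), not a guaranteed number.

MS_PER_REQUIREMENT = 20

def estimated_overhead_ms(requirements_on_page: int) -> int:
    """Extra time Requirement Yogi adds on save/view, beyond Confluence's own work."""
    return requirements_on_page * MS_PER_REQUIREMENT

# A page at the recommended size (150 requirements) adds roughly 3 seconds:
print(estimated_overhead_ms(150))  # -> 3000 (ms)
# A page at the hard ceiling (400 requirements) adds roughly 8 seconds:
print(estimated_overhead_ms(400))  # -> 8000 (ms)
```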
Performance change in 3.0
- When saving a page, we've gone from 3,500 database requests down to 1,356 (don't assume we just had to flip a flag; it required heavy optimization of each request).
Performance change in 2.0
- We deeply modified the indexing algorithm in 2.0, because we now import Excel files.
- The algorithm generally reads first and checks whether the data needs to change, instead of deleting everything and rewriting blindly.
- We did not notice much change in speed. Some operations are faster, others are slower, depending on the number of modified requirements and properties.
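The read-first approach can be sketched as a diff between what the database already holds and what the new page version contains. This is a hypothetical illustration (names like `diff_requirements` are invented for the example; the real indexer is far more involved):

```python
# Hypothetical sketch of a read-compare-write indexer.
# 'stored' is what the database currently holds for a page,
# 'parsed' is what the new page version contains (key -> properties dict).

def diff_requirements(stored: dict, parsed: dict):
    """Return the inserts, updates and deletes needed to sync the DB."""
    inserts = {k: v for k, v in parsed.items() if k not in stored}
    deletes = [k for k in stored if k not in parsed]
    # Only rewrite rows whose properties actually changed:
    updates = {k: v for k, v in parsed.items()
               if k in stored and stored[k] != v}
    return inserts, updates, deletes

stored = {"REQ-1": {"priority": "high"}, "REQ-2": {"priority": "low"}}
parsed = {"REQ-1": {"priority": "high"},    # unchanged -> no write at all
          "REQ-2": {"priority": "medium"},  # changed   -> one UPDATE
          "REQ-3": {"priority": "low"}}     # new       -> one INSERT

inserts, updates, deletes = diff_requirements(stored, parsed)
print(inserts)  # {'REQ-3': {'priority': 'low'}}
print(updates)  # {'REQ-2': {'priority': 'medium'}}
print(deletes)  # []
```

When few requirements change between versions, this writes almost nothing, which is consistent with the observation above that the speed-up depends on how many requirements and properties were actually modified.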
Performance improvements in v1.11.5
- For pages with no requirements, we've improved the speed by skipping our indexing:
  - We skip the parsing if the storage format hasn't changed,
  - We skip the parsing if the rendered format hasn't changed, in case the page contains an "Include" or "Scaffolding" macro,
  - We skip the parsing if there is no requirement in either the old or the new version.
- For pages with requirements:
  - We've added indexes on database columns. On our instance, saving a page is 5x faster, but our circumstances may be special.
  - When we index a page (i.e. when a user saves it), we batch the lookups of requirements, so we no longer issue one database request per requirement on the page. On our instance this is another 4x speed-up, depending on database latency (most LANs have ~1 ms latency, but we measured with 5 ms).
- We'd be thrilled if you see 20x better response times than in 1.11.4, but we'll check with customers before asserting that.
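The batching mentioned above matters because network latency is paid per round-trip: 400 requirements looked up one by one at 5 ms each is roughly 2 seconds spent purely on round-trips, whereas a single `IN (...)` query pays the latency once. A minimal illustration (SQLite stands in for the real database; this is not Requirement Yogi's actual code):

```python
import sqlite3

# Illustrative N+1 vs batched lookup, using an in-memory SQLite database.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE requirement (key TEXT PRIMARY KEY, summary TEXT)")
conn.executemany("INSERT INTO requirement VALUES (?, ?)",
                 [(f"REQ-{i}", f"summary {i}") for i in range(400)])

keys = [f"REQ-{i}" for i in range(400)]

# N+1 pattern: one round-trip per requirement (400 x network latency).
one_by_one = {k: conn.execute(
    "SELECT summary FROM requirement WHERE key = ?", (k,)).fetchone()[0]
    for k in keys}

# Batched pattern: a single round-trip for the whole page.
placeholders = ",".join("?" * len(keys))
batched = dict(conn.execute(
    f"SELECT key, summary FROM requirement WHERE key IN ({placeholders})", keys))

assert one_by_one == batched  # same data, ~1 round-trip instead of 400
```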
Details in 3.0
Bottlenecks
We recommend:
- 150 requirements per page, with a maximum of 400 (Confluence doesn't support infinitely large pages anyway).
- The default Global Limit is 12,000, which should be optimal for most people. If administrators notice that the Conflue
- The size limit for baselines depends on the Global Limit; by default it is 12,000 requirements.
- For administrators, the process to monitor is baseline creation. If baselines take more than 30 s to create, they load very large amounts of data into memory (requirements and their associated pages) and hold the transaction open until the baseline is created. In RY-965 we will communicate this to users in the UI, and in RY-966 we will turn baseline creation into a background task split into smaller transactions.
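The "smaller transactions" idea can be sketched as chunked processing: instead of one long transaction over the whole baseline, requirements are copied in fixed-size batches, each committed independently, so memory stays bounded and locks are released between batches. This is a hypothetical sketch of the technique (function names and the chunk size are invented; RY-966's actual implementation may differ):

```python
from itertools import islice

def chunks(iterable, size):
    """Yield successive lists of at most `size` items."""
    it = iter(iterable)
    while batch := list(islice(it, size)):
        yield batch

def create_baseline(requirement_keys, copy_chunk, chunk_size=500):
    """Copy requirements into a baseline in small, separately committed steps.

    `copy_chunk` stands in for 'open a transaction, copy these rows, commit'.
    """
    copied = 0
    for batch in chunks(requirement_keys, chunk_size):
        copy_chunk(batch)  # each call is its own short transaction
        copied += len(batch)
    return copied

committed_batches = []
total = create_baseline([f"REQ-{i}" for i in range(1200)],
                        copy_chunk=committed_batches.append,
                        chunk_size=500)
print(total)                   # 1200 requirements copied
print(len(committed_batches))  # 3 transactions: 500 + 500 + 200
```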
Performance in 3.0
Changes in 3.0:
- When saving a page, we've gone from 3500 requests to 1356 (Don't assume we just had to switch a flag, it instead required heavy optimizations for each request).
- We have reworked the data model around dependencies, it shouldn't change the performance very much.
We evaluated this on a personal machine with the following setup:
...
We have simply instrumented the code and created massive pages:
Times are in addition to Confluence's own processing, for ~400 requirements and ~525 KB of text per page, with no Jira connection.

Event | Time (2 ms network latency) | Time (1 ms network latency)
---|---|---
Page creation | |
Submission of excerpts (background operation; the user doesn't wait for it) | |

Tested with Requirement Yogi 3.0.0 / Confluence 7.4, with 2 ms and 1 ms network latency.
...
Performance in 2.6.9
Same conditions:
Times are in addition to Confluence's own processing, for ~400 requirements and ~525 KB of text per page, with no Jira connection.

Event | Time (2 ms network latency) | Time (1 ms network latency)
---|---|---
Page creation | |
Submission of excerpts (background operation; the user doesn't wait for it) | |

Tested with Requirement Yogi 2.6.9 / Confluence 7.4, with 2 ms and 1 ms network latency in addition to the database latency, with the database already loaded with 80,000 requirements.
...
Performance in 2.0
Same conditions:
Times are in addition to Confluence's own processing, for ~400 requirements and ~525 KB of text per page, with no Jira connection.

Event | Time (2 ms network latency) | Time (1 ms network latency)
---|---|---
Page creation | |
Page edit | |
Submission of excerpts (background operation; the user doesn't wait for it) | |

Tested with Requirement Yogi 2.0.0, with 2 ms and 1 ms network latency in addition to the database latency, with the database already loaded with 80,000 requirements.
Errors in the logs?
If your server encounters problems such as "OutOfMemoryException", "Java Heap Space" or "SiteMesh" exceptions, they could be related to building the Traceability matrix. One important thing to note is that the Requirement Yogi add-on may not be mentioned in those exceptions. If you are in this situation:
...