Backup file structure
Updated Jan 04, 2024


    Not supported

    Manually modifying backup files is not supported. However, we’re happy to provide details for interoperability.

    Backups exported through the UI are in CSV format. As described on the parent page, Import/export of Requirement Yogi data to a Confluence or Jira instance, backups can be used to export the entire Requirement Yogi data of a space and reimport it into another space or instance.

    Example

    BEING,com.playsql.requirementyogijira,3.7-SNAPSHOT,3.5.0
    DBBackupItem,METADATA,null,TIO,,2024-01-04 20:59:55
    DBBackupItem,DBRILAUDITTRAILITEM,2,,10100,TIO-1,admin,,,2024-01-04 19:20:18.015,"[{""relationship"":""implements"",""key"":""LKJ001"",""spaceKey"":""SF"",""title"":""LKJ001"",""url"":""/requirements/SF/LKJ001""},{""relationship"":""implements"",""key"":""LKJ003"",""spaceKey"":""SF"",""title"":""LKJ003"",""url"":""/requirements/SF/LKJ003""},{""relationship"":""implements"",""key"":""OOO-001"",""spaceKey"":""SF"",""title"":""OOO-001"",""url"":""/requirements/SF/OOO-001""}]",true
    DBBackupItem,DBRILAUDITTRAILITEM,3,,10100,TIO-1,admin,,,2024-01-04 19:20:19.199,"[{""relationship"":""implements"",""key"":""LKJ001"",""spaceKey"":""SF"",""title"":""LKJ001"",""url"":""/requirements/SF/LKJ001""},{""relationship"":""implements"",""key"":""LKJ003"",""spaceKey"":""SF"",""title"":""LKJ003"",""url"":""/requirements/SF/LKJ003""},{""relationship"":""implements"",""key"":""OOO-001"",""spaceKey"":""SF"",""title"":""OOO-001"",""url"":""/requirements/SF/OOO-001""},{""relationship"":""implements"",""key"":""RRR-001"",""spaceKey"":""SF"",""title"":""RRR-001"",""url"":""/requirements/SF/RRR-001""},{""relationship"":""implements"",""key"":""RRR-003"",""spaceKey"":""SF"",""title"":""RRR-003"",""url"":""/requirements/SF/RRR-003""}]",true
    DBBackupItem,DBREMOTEREQUIREMENT,1,,76888b66-b3e3-3eb2-bbdc-34d51bd6c884,SF,LKJ001,,false,2024-01-04 19:20:08.336,false,,
    DBBackupItem,DBREMOTEREQUIREMENT,5,,76888b66-b3e3-3eb2-bbdc-34d51bd6c884,SF,RRR-003,,false,2024-01-04 19:20:12.112,false,,
    DBBackupItem,DBREMOTEREQUIREMENT,6,,76888b66-b3e3-3eb2-bbdc-34d51bd6c884,SF,LKJ003,,false,2024-01-04 19:20:16.729,false,,
    DBBackupItem,DBREMOTEREQUIREMENT,7,,76888b66-b3e3-3eb2-bbdc-34d51bd6c884,SF,OOO-001,,false,,false,,
    DBBackupItem,DBREMOTEREQUIREMENT,8,,76888b66-b3e3-3eb2-bbdc-34d51bd6c884,SF,RRR-001,,false,2024-01-04 19:20:18.519,false,,
    DBBackupItem,DBISSUELINK,6,,10100,TIO-1,implements,1
    DBBackupItem,DBISSUELINK,7,,10100,TIO-1,implements,6
    DBBackupItem,DBISSUELINK,8,,10100,TIO-1,implements,7
    DBBackupItem,DBISSUELINK,9,,10100,TIO-1,implements,8
    DBBackupItem,DBISSUELINK,10,,10100,TIO-1,implements,5
    DBBackupMapping,ISSUE,10100,,,,,,,,Task 1,,,,TIO-1
    DBBackupMapping,USER,admin,,,,,,,,,,admin,,
    DBBackupMapping,APPLINK,76888b66-b3e3-3eb2-bbdc-34d51bd6c884,,,,,,,,Confluence,http://confluence.local:1991/confluence,,,
    DBBackupMapping,SPACE,SF,76888b66-b3e3-3eb2-bbdc-34d51bd6c884,,,,,,,,,,,
    END,End of the file

    Structure of the files

    It’s a CSV export built with Apache’s commons-csv, with the following configuration:

    FileOutputStream fileOutputStream = new FileOutputStream(backupFilePath.toFile());
    OutputStreamWriter fileWriter = new OutputStreamWriter(fileOutputStream, StandardCharsets.UTF_8);
    CSVPrinter printer = new CSVPrinter(fileWriter, CSVFormat.DEFAULT);

    CSVFormat.DEFAULT has this Javadoc comment:

    Standard Comma Separated Value format, as for RFC 4180 but allowing empty lines. The CSVFormat.Builder settings are:

    • setDelimiter(',')

    • setQuote('"')

    • setRecordSeparator("\r\n")

    • setIgnoreEmptyLines(true)

    • setDuplicateHeaderMode(DuplicateHeaderMode.ALLOW_ALL)
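    For interoperability tooling, the quoting rules of this format are the part that matters: fields are separated by commas, a field containing commas or quotes is wrapped in double quotes, and a literal quote inside a quoted field is doubled (`""`), as in the audit-trail records above. The plugin itself uses commons-csv; the following is only a minimal, stdlib-only sketch of that splitting logic, for illustration:

    ```java
    import java.util.ArrayList;
    import java.util.List;

    public class BackupCsv {
        /** Splits one RFC 4180 record: ',' delimiter, '"' quote, "" escapes a quote. */
        static List<String> splitRecord(String line) {
            List<String> fields = new ArrayList<>();
            StringBuilder cur = new StringBuilder();
            boolean inQuotes = false;
            for (int i = 0; i < line.length(); i++) {
                char c = line.charAt(i);
                if (inQuotes) {
                    if (c == '"') {
                        if (i + 1 < line.length() && line.charAt(i + 1) == '"') {
                            cur.append('"');      // doubled quote -> literal quote
                            i++;
                        } else {
                            inQuotes = false;     // closing quote
                        }
                    } else {
                        cur.append(c);
                    }
                } else if (c == '"') {
                    inQuotes = true;              // opening quote
                } else if (c == ',') {
                    fields.add(cur.toString());   // end of field
                    cur.setLength(0);
                } else {
                    cur.append(c);
                }
            }
            fields.add(cur.toString());           // last field
            return fields;
        }

        public static void main(String[] args) {
            List<String> f = splitRecord("DBBackupItem,METADATA,null,TIO,,2024-01-04 20:59:55");
            System.out.println(f.size() + " fields, scope=" + f.get(3));
        }
    }
    ```

    Note that empty fields (consecutive commas) are significant and must be preserved, since columns are identified by position.
    
    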

    Beginning of files

    BEING,com.playsql.requirementyogijira,3.7-SNAPSHOT,3.5.0
    • BEING is hard-coded (Don’t ask me, I must have been tired that day),

    • The plugin key,

    • The plugin version,

    • The oldest compatible plugin version
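    A consumer of the file would presumably check the last header field against its own version before importing. This sketch assumes a plain numeric dot-separated comparison with the `-SNAPSHOT` suffix stripped; the plugin's actual comparison logic is not documented here:

    ```java
    public class HeaderCheck {
        // Compares dotted version strings numerically; "-SNAPSHOT" suffixes are
        // stripped. This comparison scheme is an assumption, not the plugin's code.
        static int compareVersions(String a, String b) {
            String[] x = a.replace("-SNAPSHOT", "").split("\\.");
            String[] y = b.replace("-SNAPSHOT", "").split("\\.");
            for (int i = 0; i < Math.max(x.length, y.length); i++) {
                int xi = i < x.length ? Integer.parseInt(x[i]) : 0;
                int yi = i < y.length ? Integer.parseInt(y[i]) : 0;
                if (xi != yi) return Integer.compare(xi, yi);
            }
            return 0;
        }

        public static void main(String[] args) {
            String[] header = "BEING,com.playsql.requirementyogijira,3.7-SNAPSHOT,3.5.0".split(",");
            String currentVersion = "3.6.0";     // hypothetical installed version
            String oldestCompatible = header[3]; // "3.5.0" in the example
            boolean importable = compareVersions(currentVersion, oldestCompatible) >= 0;
            System.out.println("importable=" + importable);
        }
    }
    ```
    
    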

    DBBackupItem,METADATA,null,TIO,,2024-01-04 20:59:55
    • Name of the table (those records are kept in the table DBBackupItem),

    • METADATA is hard-coded

    • null is hard-coded,

    • The scope key.

      • In case of a Confluence export, it’s the space key (before exporting);

      • In Jira it’s the project key (before exporting).

      • If all data was exported, the scope key is null,

      • This field has the effect that, on import, all data within this scope in Confluence or Jira will be deleted.

      • This field is mapped on the target instance. If you export the space ABC, and you map ABC → DEF on the target instance, then DEF will be deleted and reimported.

      • If you need to ignore the scope and not delete any data, then one idea is to set this value to a non-existing space/project. However, in Jira, links will be imported in duplicate if they already exist, so you must clean up the data yourself before an import.

    • The author key,

    • The date of the export.
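    The metadata record contains no quoted fields, so in code it can be split directly, keeping empty trailing fields. A small sketch of reading the fields listed above (the interpretation of a literal "null" scope key as a whole-instance export is an assumption based on the description):

    ```java
    import java.time.LocalDateTime;
    import java.time.format.DateTimeFormatter;

    public class MetadataRecord {
        public static void main(String[] args) {
            // The METADATA record from the example; split(..., -1) keeps empty fields.
            String[] f = "DBBackupItem,METADATA,null,TIO,,2024-01-04 20:59:55".split(",", -1);
            String scopeKey = f[3];   // space key (Confluence) or project key (Jira)
            String authorKey = f[4];  // empty in this export
            LocalDateTime exported = LocalDateTime.parse(
                    f[5], DateTimeFormatter.ofPattern("yyyy-MM-dd HH:mm:ss"));
            // Assumed: a literal "null" scope key means the whole instance was exported.
            boolean wholeInstance = "null".equals(scopeKey);
            System.out.println(scopeKey + " wholeInstance=" + wholeInstance + " at " + exported);
        }
    }
    ```
    
    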

    First half of the files

    Note that this CSV data is loaded directly, without transformation, into the backup tables (DBBackupItem and DBBackupMapping). The sysadmin then performs the mappings, which update data in DBBackupMapping; only then is the import performed into tables such as DBREMOTEREQUIREMENT.

    DBBackupItem,DBREMOTEREQUIREMENT,8,,76888b66-b3e3-3eb2-bbdc-34d51bd6c884,SF,RRR-001,,false,2024-01-04 19:20:18.519,false,,
    • Name of the table (plain data is kept in DBBackupItem, mappings are in DBBackupMapping),

    • Name of the target table,

    • Old ID: the ID in the old system. In this example, it’s the ID of DBRemoteRequirement. During reimport, the creation of the record generates a fresh ID, which is written to DBBackupItem.NEWID, just to keep an archive of what has been migrated (and possibly perform mappings).

    • Then all columns of the table. The order is specified by the annotation @ExportMapping(order = 2) on each column in the code.
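    Only the annotation name and its order attribute appear on this page; the definition below is a hypothetical reconstruction (with made-up field names) to illustrate how a column order can be derived from such annotations via reflection:

    ```java
    import java.lang.annotation.*;
    import java.lang.reflect.Field;
    import java.util.*;
    import java.util.stream.Collectors;

    public class ColumnOrder {
        // Hypothetical reconstruction: only the name @ExportMapping and its
        // "order" attribute come from the documentation.
        @Retention(RetentionPolicy.RUNTIME)
        @Target(ElementType.FIELD)
        @interface ExportMapping { int order(); }

        // Illustrative entity; field names are guesses, not the real schema.
        static class DBRemoteRequirement {
            @ExportMapping(order = 1) String applinkId;
            @ExportMapping(order = 2) String spaceKey;
            @ExportMapping(order = 3) String requirementKey;
        }

        /** CSV column order = annotated fields, sorted by their order attribute. */
        static List<String> columnOrder(Class<?> entity) {
            return Arrays.stream(entity.getDeclaredFields())
                    .filter(f -> f.isAnnotationPresent(ExportMapping.class))
                    .sorted(Comparator.comparingInt(
                            f -> f.getAnnotation(ExportMapping.class).order()))
                    .map(Field::getName)
                    .collect(Collectors.toList());
        }

        public static void main(String[] args) {
            System.out.println(columnOrder(DBRemoteRequirement.class));
        }
    }
    ```
    
    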

    The detail of the structure of each table is visible on those two pages:

    • Database schema for Confluence backups

    • Database schema for Jira backups

    Second half of the files

    It contains the “mappings”, i.e. the list of Confluence or Jira entities that we rely on. For example, it could contain the details of a space. During reimport, mappings are loaded into the table DBBackupMapping, and the system administrator can remap a space key to another; during the reimport, every time we reference this space, we’ll take the new key instead of the old one.
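    The remapping semantics described above amount to a lookup table applied to every reference during reimport, with unmapped keys kept as-is. A trivial sketch with made-up keys:

    ```java
    import java.util.HashMap;
    import java.util.Map;

    public class SpaceRemap {
        public static void main(String[] args) {
            // Mappings chosen by the sysadmin on the target instance (illustrative).
            Map<String, String> spaceKeyMapping = new HashMap<>();
            spaceKeyMapping.put("ABC", "DEF");

            // Every space-key reference goes through the mapping during reimport;
            // keys without a mapping are imported unchanged.
            String oldKey = "ABC";
            String newKey = spaceKeyMapping.getOrDefault(oldKey, oldKey);
            System.out.println(oldKey + " -> " + newKey);
        }
    }
    ```
    
    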

    DBBackupMapping,APPLINK,76888b66-b3e3-3eb2-bbdc-34d51bd6c884,,,,,,,,Confluence,http://confluence.local:1991/confluence,,,
    • Name of the table,

    • Name of the target table,

    • Key of the record. It is not exactly the primary key, since the full primary key is the key + dependencies. For example, if we save a page version, then the key is the versionId, and the dependencies are type=PAGE, app=page, documentId=271532.

    • Then various “FYI” fields which can help the sysadmin infer or guess the new ID of the record:

      • The space key,

      • The page type,

      • The page id,

      • The user key,

      • The applink,

      • The page version,

      • The title (of the page, applink, user, etc.),

      • The URL (e.g. of the applink),

      • The username,

      • The extension document id,

      • The issue key

    Footer of the files

    END,End of the file

    It’s hard-coded.
