Archived Data (SARA)

It is possible to ingest data that has been archived with SAP Archive Administration (SARA). You extract the archived data with the SNP Glue™ transparentization tool and ingest it into Datafridge as part of an archive run. The archive run value is added to the transparentized data so that it can be easily identified later in the target storage table.

 

Prerequisites for the Extraction

The following prerequisites exist for the extraction of archived data:

  1. Complete the archiving sessions for the objects that you want to include in the extraction.

  2. Set up transparent binary storage in Storage Management.

 

Proceed as follows to set up the transparent binary storage:

  1. Start transaction /DVD/SM_SETUP.

  2. In the Storage ID field, enter ZT_TRFILE.

  3. In the Storage type field, enter SM_TRS_BIN.

  4. In the Description field, enter a suitable description, e.g. Transparent bin to CSV files.

  5. In the Binary storage ID field, enter ZT_FILE.

  6. In the File parameters area, select the following options (see the parsing sketch after this list):

  • Use CSV files

  • Include header

  • Put values into quotes

  7. In the Delimiter type field, enter a semicolon.

  8. Click Save.
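
The file parameters chosen above (CSV files, header row, quoted values, semicolon delimiter) determine how the extracted files must be parsed later on. As a minimal sketch, the following Python snippet reads one such file; the file name and character encoding are assumptions, only the CSV dialect follows from the setup above.

```python
import csv

# Hypothetical file produced by the ZT_TRFILE storage; the real file
# names depend on the archived object and the run settings.
sample_file = "BKPF_0001.csv"

with open(sample_file, newline="", encoding="utf-8") as handle:
    # Dialect mirrors the storage setup: semicolon delimiter,
    # quoted values, and a header row in the first line.
    reader = csv.DictReader(handle, delimiter=";", quotechar='"')
    for row in reader:
        print(row)  # each row is a dict keyed by the header columns
```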

 

Continue the setup by defining the file storage that is used as the binary storage:

  1. In the Storage ID field, enter ZT_FILE.

  2. In the Storage type field, enter BINFILE.

  3. In the Description field, enter a suitable description, e.g. File.

  4. In the File path field, enter a suitable file path (see the path check after this procedure).

  5. Click Save.

The setup is complete.
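
Because the extraction writes the CSV files to the directory entered in the File path field, it can be worth verifying that this directory exists and is writable before starting a run. The sketch below performs this check on the operating-system level; the path is a placeholder, and the check is only meaningful when executed on the SAP application server that writes the files.

```python
import os
from pathlib import Path

# Placeholder for the directory entered in the "File path" field of the
# ZT_FILE storage; replace it with the path used on your system.
export_dir = Path("/sapmnt/glue_export")

# The work processes of the SAP application server must be able to
# create files here, otherwise the extraction fails when writing CSVs.
print("directory exists:     ", export_dir.is_dir())
print("writable from this host:", os.access(export_dir, os.W_OK))
```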

 

Extraction

Proceed as follows to perform the extraction:

  1. Start transaction /DVD/SARA.

  2. Create a new run ID.

  3. In the Storage ID field, enter ZT_TRFILE.

  4. In the Package (for generated tables) field, enter $TMP.

  5. In the Write package size (rows) field, enter 10000000.

  6. In the Decommissioning with Datafridge area, select the following options:

  • Exclude run ID field from data

  • Translate values and currencies

  • Calculate hashes

  7. Click Save.

  8. Select the archiving object and click Save.

  9. In the Archiving session number field(s), select the archiving documents or archiving sessions and click Save.

  10. Create the table mapping and click Save.

  11. Click Set prefix to define a prefix for all the target tables.

  12. Execute the following steps:

  • Generate table check tasks

  • Check consistency of tables

  • Generate migration tasks

  • Create tables

  • Import data

  • Generate table pool file

  • Export calculated hashes

  • Export settings for object list generation

  • Reorganize CSV files. This task renames and moves the extracted files to a specified directory (a quick check of these files is sketched below).

The extraction is complete.
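
Before moving on to the ingestion, a quick plausibility check of the reorganized CSV files can catch empty or truncated extracts early. The sketch below counts the data rows per file; the target directory, file naming, and encoding are assumptions, while the semicolon delimiter, quoting, and header row follow from the storage setup above.

```python
import csv
from pathlib import Path

# Placeholder for the directory that the "Reorganize CSV files" step
# moved the extracted files into; adjust it to your run settings.
target_dir = Path("/sapmnt/glue_export/run_0001")

# Count data rows per file (the header row is skipped).
for csv_file in sorted(target_dir.glob("*.csv")):
    with open(csv_file, newline="", encoding="utf-8") as handle:
        reader = csv.reader(handle, delimiter=";", quotechar='"')
        next(reader, None)  # skip the header row
        row_count = sum(1 for _ in reader)
    print(f"{csv_file.name}: {row_count} rows")
```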

 

Prerequisites for the Ingestion

The following prerequisite exists for the ingestion of archived data:

  • The target tables must already exist in the logical system.

 

Ingestion

Proceed as follows to execute the ingestion:

  1. Start transaction /DVD/RMX_AR and create a new archive run.

  2. Execute the step Archived data ingestion > RMX setup > Archive Ingestion Setup Wizard.

  3. Select a logical system and click Continue.

  4. Click Import files and select a table pool file that was generated during the extraction process. This will automatically create a mapping between the extracted files and the already existing tables in Datafridge.

  5. Check the entries and click Save.

  6. Execute all the steps from the Load of metadata subfolder:

  • Load settings for object list generation

  • Generate object list load tasks

  • Run object list load tasks

  • Load hashes for validation

  7. Execute the step Generate extractors.

  8. Execute the step Generate data load tasks.

  9. Execute the step Run data load tasks.

The ingestion is complete.
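
The Calculate hashes, Export calculated hashes, and Load hashes for validation steps compare checksums of the extracted data with the data that arrives in the target tables. The algorithm that SNP Glue™ uses for this is not described here; purely to illustrate the idea of row-level validation, the sketch below derives an MD5 checksum per CSV row, which is not the product's own hashing scheme.

```python
import csv
import hashlib

# Hypothetical extracted file; see the extraction section above.
sample_file = "BKPF_0001.csv"

with open(sample_file, newline="", encoding="utf-8") as handle:
    reader = csv.reader(handle, delimiter=";", quotechar='"')
    next(reader, None)  # skip the header row
    for row in reader:
        # Join the field values and hash them; a real validation would
        # have to use the exact algorithm and field order of the tool.
        digest = hashlib.md5(";".join(row).encode("utf-8")).hexdigest()
        print(digest)
```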