22 September, 2020
This replication method is only recommended for customers who have been invited to use this technology during the RTC phase of the SAP HANA product delivery.
If you are not part of this RTC group, SAP recommends using Trigger-Based Data Replication with the SAP LT (Landscape Transformation) Replication Server because of the rich feature set offered by that replication technology.
Replication Process in Detail
The initial load can be executed while the source system is active, as described briefly below.
The Load Controller initiates the initial load by calling the R3load component in the source system.
This is a special version of R3load, available as a patch from the SAP Service Marketplace.
The R3load on the source system exports the data for the selected tables from the source system database and transfers this data directly via sockets to the R3load component on the SAP HANA side, without any intermediate files.
The R3load on the target system imports the data into the SAP HANA database.
The login authentication between the source system and the target system is handled by the SAP Host Agent, which is usually part of the source system.
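The flow above, where one R3load exports rows and streams them over a socket to a second R3load that imports them, can be sketched as follows. This is a minimal Python illustration, not actual R3load code; the function names and the JSON-lines wire format are invented for the example.

```python
# Sketch of the R3load-style direct transfer: the exporter streams table rows
# over a TCP socket to the importer, with no intermediate file on disk.
import json
import socket
import threading

def export_table(rows, host, port):
    """Source side: serialize each row and stream it over a TCP socket."""
    with socket.create_connection((host, port)) as conn:
        for row in rows:
            conn.sendall((json.dumps(row) + "\n").encode())

def import_rows(server_sock):
    """Target side: receive the streamed rows and collect them for insertion."""
    conn, _ = server_sock.accept()
    buf = b""
    with conn:
        while chunk := conn.recv(4096):
            buf += chunk
    return [json.loads(line) for line in buf.splitlines()]

# Demo: stream two rows between two threads on localhost.
server = socket.socket()
server.bind(("127.0.0.1", 0))
server.listen(1)
port = server.getsockname()[1]

result = []
t = threading.Thread(target=lambda: result.extend(import_rows(server)))
t.start()
export_table([{"id": 1, "name": "A"}, {"id": 2, "name": "B"}], "127.0.0.1", port)
t.join()
print(result)  # both rows arrive on the target without any intermediate file
```

The same pattern (producer streams, consumer applies) is what avoids staging files in the real initial load.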
In parallel to the initial load, the Sybase Replication Agent in the source system is started; it detects any data changes that occur while the initial load is running, so that every single change is covered.
This detection is performed by reading the logs for committed transactions of the source system database.
The Replication Agent uses the table metadata from the database to connect the raw log information with the existing table names.
In addition, the Replication Agent transfers all relevant raw log information via a TCP/IP connection to the Sybase Replication Server on the SAP HANA side.
The Replication Server creates SQL statements from the raw log information received and sends these statements to the Sybase Enterprise Connect Data Access (ECDA).
The ECDA connects to the SAP HANA database via an Open Database Connectivity (ODBC) driver and replicates the data changes from the source database by executing the SQL statements in the SAP HANA database.
The multi-version concurrency control (MVCC) of the SAP HANA database prevents locks.
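The delta path described above, where raw log records for committed transactions are joined with table metadata, turned into SQL statements, and executed on the target via ODBC, can be sketched like this. The log record layout and the metadata map are invented for illustration, and SQLite stands in for the SAP HANA database reached over ODBC:

```python
# Hedged sketch of the delta replication path: committed-log records are
# combined with table metadata and translated into SQL statements, which a
# cursor then executes on the target database.
import sqlite3

TABLE_METADATA = {101: ("CUSTOMERS", ["ID", "NAME"])}  # table-id -> (name, columns)

def log_record_to_sql(record):
    """Translate one committed-log record into an SQL statement plus parameters."""
    name, cols = TABLE_METADATA[record["table_id"]]
    if record["op"] == "INSERT":
        placeholders = ", ".join("?" for _ in cols)
        return f"INSERT INTO {name} ({', '.join(cols)}) VALUES ({placeholders})", record["values"]
    if record["op"] == "DELETE":
        return f"DELETE FROM {name} WHERE ID = ?", [record["values"][0]]
    raise ValueError(record["op"])

# In-memory target database standing in for SAP HANA.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE CUSTOMERS (ID INTEGER, NAME TEXT)")

committed_log = [
    {"op": "INSERT", "table_id": 101, "values": [1, "ACME"]},
    {"op": "INSERT", "table_id": 101, "values": [2, "Globex"]},
    {"op": "DELETE", "table_id": 101, "values": [1]},
]
for rec in committed_log:
    sql, params = log_record_to_sql(rec)
    con.execute(sql, params)

print(con.execute("SELECT ID, NAME FROM CUSTOMERS").fetchall())  # [(2, 'Globex')]
```

Only committed changes appear in the log, which is why replaying them in order reproduces the source state on the target.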
The continuous delta replication captures the ongoing data changes in the source systems in real time, once the initial load and the simultaneous delta replication have been completed.
All further data changes are captured and continuously replicated from the source system to SAP HANA using the same process as the simultaneous delta replication described above.
This replication method requires the following components:
Load Controller
Controls the entire replication process by triggering the initial load and coordinating the delta replication.
Sybase Replication Agent
Performs the log mining on the source database and relays all relevant information to the replication server.
Sybase Enterprise Connect Data Access (ECDA)
Connects to the target SAP HANA database via ODBC.
Sybase Replication Server
The main component; it accepts data from the Replication Agent and distributes and applies this data to the target database, using ECDA/ODBC for connectivity.
SAP Host Agent
Handles the login authentication between the source system and the target system.
The SAP HANA-optimized DataStore object is a standard DataStore object that is optimized for use with the SAP HANA database.
By using SAP HANA-optimized DataStore objects, you can achieve significant performance gains when activating requests.
The changelog of the SAP HANA-optimized DataStore object is displayed as a table in the BW system.
However, this table does not store any data, which saves memory space.
When the changelog is accessed, its data content is calculated using a calculation view.
The data is read from the history table of the temporal table for active data in the SAP HANA database.
The table for active data is a temporal table that consists of three components.
Data activation is started in the BW system and executed in SAP HANA.
No data is transferred to the application server during activation.
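The idea of a changelog that stores no data of its own, but is computed on access from the history of row versions (the way a calculation view would compute it), can be sketched as follows. The data model here is invented for illustration:

```python
# Sketch: derive changelog entries (before/after images) on demand from a
# history of row versions, instead of persisting a changelog table.

def derive_changelog(history):
    """history: list of (key, version, value) tuples; emit one change per version."""
    changes = []
    latest = {}  # key -> last seen value
    for key, version, value in sorted(history, key=lambda r: (r[0], r[1])):
        changes.append({"key": key, "before": latest.get(key), "after": value})
        latest[key] = value
    return changes

history_table = [
    ("MAT-1", 1, 100),  # initial load
    ("MAT-1", 2, 120),  # later update
    ("MAT-2", 1, 50),
]
print(derive_changelog(history_table))
```

Because the change images are derived at read time, nothing beyond the history itself needs to be stored, which is the memory saving the text describes.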
Differences to a Normal Standard DataStore Object
The SAP HANA-optimized DataStore object contains the additional field IMO-INT-KEY in the active data table.
This field is required for the SAP HANA optimization and is hidden in queries.
It cannot be used in a 3.x data flow.
The complete history of a request is not saved. Only the start status and end status (relating to an activation) are saved.
The SAP HANA-optimized InfoCube is a standard InfoCube that is optimized for use with SAP HANA.
When you create SAP HANA-optimized InfoCubes, you can assign characteristics and key figures to dimensions.
However, the system does not create any dimension tables, apart from the package dimension.
The SIDs (master data IDs) are written directly to the fact table.
This increases system performance when loading data.
Since the dimensions are omitted, it is not necessary to create any DIM IDs (dimension keys).
The dimensions are simply used as a sort criterion and provide a clearer overview when you create a query in the BEx Query Designer.
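The difference between the classic star schema, where the fact table holds DIM IDs that are resolved through dimension tables, and the optimized layout, where SIDs are written straight into the fact table, can be sketched as follows. All table layouts and names here are invented for illustration:

```python
# Sketch: classic InfoCube layout vs. the HANA-optimized layout without
# generated dimension tables.

# Classic: fact row carries a DIM ID, resolved via a dimension table to the SID.
dim_table = {1: {"SID_MATERIAL": 501}}          # DIM ID -> SIDs of the dimension
classic_fact = [{"DIMID": 1, "AMOUNT": 10}]

# Optimized: the SID is stored directly in the fact row; no dimension table exists.
optimized_fact = [{"SID_MATERIAL": 501, "AMOUNT": 10}]

def material_sid_classic(row):
    return dim_table[row["DIMID"]]["SID_MATERIAL"]   # extra lookup at load/query time

def material_sid_optimized(row):
    return row["SID_MATERIAL"]                       # direct access, no join

print(material_sid_classic(classic_fact[0]), material_sid_optimized(optimized_fact[0]))
```

Both layouts resolve to the same SID, but the optimized one skips the dimension lookup, which is where the load-time performance gain comes from.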
Consists of a DataStore object and an InfoCube, with an automatically generated data flow in between.
Combines mass data with the latest delta information at query runtime.
The DataStore object can be connected to a real-time data acquisition DTP.
If the DataStore object can provide appropriate delta information, a VirtualProvider in direct access mode can be used instead of the DSO.
Facilitates replication of DSO/VirtualProvider data to the SAP NetWeaver BW Accelerator by switching off the database persistence of the InfoCube.