Are you planning to attend an interview for an SAP BW on HANA Developer role, but unsure how to crack it and what the most probable SAP BW on HANA interview questions might be? Well, you have reached the right place. Tekslate has collected the SAP BW on HANA interview questions that are most frequently asked in interviews.
Get SAP BW on HANA Training from Tekslate to grab the best jobs.
SAP BW on HANA Interview Questions
Q1. What is the time distribution option in the update rule?
Ans: This option distributes data over time. For example, if the source contains the calendar week and the target contains the calendar day, the data is split across each calendar day. You can select either the normal calendar or the factory calendar.
Q2. What is the language SAP HANA is developed in?
Ans: The SAP HANA database is developed in C++.
Q3. What is ad hoc analysis?
Ans: In traditional data warehouses, such as SAP BW, a lot of pre-aggregation is done for quick results. That is, the administrator (IT department) decides which information might be needed for analysis and prepares the results for the end-users. This gives fast performance, but the end-user has no flexibility.
Performance drops dramatically if the user wants to analyze data that has not already been pre-aggregated. With SAP HANA and its speedy engine, no pre-aggregation is required. The user can perform any kind of operation in their reports and does not have to wait for hours for the data to be prepared for analysis.
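The contrast can be sketched in a few lines of Python. The sales records, regions, and products below are hypothetical; the point is only that a classic warehouse serves pre-computed totals, while an in-memory engine can aggregate along any dimension on the fly:

```python
from collections import defaultdict

# Hypothetical sales records: (region, product, amount)
sales = [
    ("EMEA", "A", 100), ("EMEA", "B", 250),
    ("APAC", "A", 300), ("APAC", "B", 50),
]

# Classic warehouse: IT pre-aggregates only the totals users are expected to need.
pre_aggregated = defaultdict(int)
for region, _product, amount in sales:
    pre_aggregated[region] += amount      # only region totals are prepared

# Ad hoc analysis: the user suddenly wants totals per product instead.
# In a pre-aggregated warehouse this would mean a new aggregation job;
# conceptually, an in-memory engine just scans and aggregates on the fly.
ad_hoc = defaultdict(int)
for _region, product, amount in sales:
    ad_hoc[product] += amount

print(dict(pre_aggregated))  # {'EMEA': 350, 'APAC': 350}
print(dict(ad_hoc))          # {'A': 400, 'B': 300}
```

Any other slice (per region *and* product, per customer, and so on) is just another pass over the raw data, which is what makes ad hoc analysis practical when scans are fast.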
Q4. What are the types of attributes?
Ans: Display-only and navigational. Display-only attributes are only for display, and no analysis can be done on them. Navigational attributes behave like regular characteristics. For example, assume we have a customer characteristic with country as a navigational attribute; you can then analyze the data using both customer and country.
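A minimal sketch of the customer/country example, with hypothetical master data: the transaction data carries only the customer, and the navigational attribute lets the query follow the customer-to-country link at report time:

```python
# Hypothetical master data: country is an attribute of the customer characteristic.
customer_master = {"C1": "DE", "C2": "DE", "C3": "US"}  # customer -> country

# Transaction data holds only the customer, not the country.
transactions = [("C1", 100), ("C2", 200), ("C3", 50)]

# A navigational attribute lets you slice the data by country at query time,
# by following the customer -> country link in the master data.
revenue_by_country = {}
for customer, amount in transactions:
    country = customer_master[customer]
    revenue_by_country[country] = revenue_by_country.get(country, 0) + amount

print(revenue_by_country)  # {'DE': 300, 'US': 50}
```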
Q5. What are the row-based and column-based approaches?
Ans:
Row-based tables:
- This is the traditional relational database approach.
- It stores a table as a sequence of rows.
Column-based tables:
- It stores a table as a sequence of columns, i.e. the entries of a column are stored in contiguous memory locations.
- SAP HANA is particularly optimized for column-based storage.
- SAP HANA supports both the row-based and column-based approaches.
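The two layouts can be illustrated with a toy table in Python (a sketch of the idea, not of HANA internals): in the column store, all values of one column sit together, so a single-column aggregate touches only that column's data.

```python
# The same table stored two ways (a toy sketch, not HANA internals).
table = [
    {"id": 1, "name": "ab", "qty": 10},
    {"id": 2, "name": "cd", "qty": 20},
    {"id": 3, "name": "ef", "qty": 30},
]

# Row store: each row's fields sit together -> cheap to insert/update a whole row.
row_store = [(r["id"], r["name"], r["qty"]) for r in table]

# Column store: each column's values sit together in contiguous memory ->
# an aggregate over one column scans only that column's data.
column_store = {
    "id":   [r["id"] for r in table],
    "name": [r["name"] for r in table],
    "qty":  [r["qty"] for r in table],
}

# SUM(qty) in the column store scans a single list:
print(sum(column_store["qty"]))  # 60
```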
Want to perform complex queries efficiently? Enroll in our SAP BW on HANA Training.
SAP BW Scenario-Based Interview Questions
Q6. Can you partition a cube that has data already?
Ans: No, the cube must be empty to do this. One workaround: make a copy of cube A as cube B; export the data from A to B using an export DataSource; empty cube A; create the partitions on A; re-import the data from B; then delete cube B.
Q7. How does SAP HANA support Massively Parallel Processing?
- With the availability of Multi-Core CPUs, higher CPU execution speeds can be achieved.
- Also, HANA Column-based storage makes it easy to execute operations in parallel using multiple processor cores.
- In a column store data is already vertically partitioned. This means that operations on different columns can easily be processed in parallel. If multiple columns need to be searched or aggregated, each of these operations can be assigned to a different processor core.
- In addition, operations on one column can be parallelized by partitioning the column into multiple sections that can be processed by different processor cores. With the SAP HANA database, queries can be executed rapidly and in parallel.
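The last point, partitioning one column into sections and aggregating them on different cores, can be sketched as follows. The data and section size are hypothetical; the sketch uses Python's `concurrent.futures` thread pool purely to show the split-aggregate-combine pattern, whereas the HANA engine distributes the sections across real processor cores:

```python
from concurrent.futures import ThreadPoolExecutor

# One column of values, partitioned into sections (hypothetical data).
qty_column = list(range(1_000))
sections = [qty_column[i:i + 250] for i in range(0, 1_000, 250)]

def section_sum(section):
    """Aggregate one section; in HANA each section would go to its own core."""
    return sum(section)

# Aggregate the sections in parallel, then combine the partial results.
with ThreadPoolExecutor(max_workers=4) as pool:
    total = sum(pool.map(section_sum, sections))

print(total)  # 499500
```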
Q8. What is the DIM ID?
Ans: DIM IDs are used to connect the fact tables and the dimension tables.
Q9. What is an InfoSource?
Ans: A structure consisting of InfoObjects, without persistence, used for connecting two transformations.
Q10. What are the steps to load a non-cumulative cube?
Ans: The steps to load a non-cumulative cube are:
- Initialize the opening balance in R/3 (S278)
- Activate the extract structure MC03BF0 for DataSource 2LIS_03_BF
- Set up historical material documents in R/3
- Load the opening balance using DataSource 2LIS_40_S278
- Load the historical movements and compress without marker update
- Set up the V3 update
- Load the deltas using 2LIS_03_BF
Q11. What are the major differences between the row store and column store?
| Property | Row Store | Column Store | Reason |
| --- | --- | --- | --- |
| Transactions | Faster | Slower | Modifications require updates to multiple columnar tables |
| Analytics | Slower even if indexed | Faster | Smaller dataset to scan, inherent indexing |
SAP HANA Interview Questions for 5 Years of Experience
Q12. What is a transactional InfoCube?
Ans: These cubes are used for both reading and writing, whereas standard cubes are optimized for reading. Transactional cubes are used in SEM.
Q13. Give example data sources supporting this?
Ans: 2LIS_03_BF and 2LIS_03_UM
Q14. What is the source system?
Ans: Any system that sends data to BW, such as R/3, a flat file, an Oracle database, or an external system.
Q15. Can you disable the cache?
Ans: Yes, either globally or per query using the query debug tool RSRT.
Q16. What is the global transfer routine?
Ans: This is a transfer routine (ABAP) defined at the InfoObject level; it is common for all source systems.
Q17. What is a multi-provider in SAP BI? What are the features of Multi-providers?
Ans: A MultiProvider is a type of InfoProvider that combines data from a number of InfoProviders and makes it available for reporting purposes.
A MultiProvider does not contain any data; its data comes entirely from the InfoProviders on which it is based, and these InfoProviders are connected to one another by union operations. InfoProviders and MultiProviders are the objects or views relevant for reporting. In other words, a MultiProvider allows you to run reports spanning several InfoProviders, i.e. to create reports on one or more InfoProviders at a time.
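The union behavior can be sketched in Python. The provider names and records below are hypothetical; the point is that the MultiProvider stores nothing itself and simply streams the union of its underlying providers' rows at query time:

```python
# Two hypothetical InfoProviders with the same reporting columns.
actuals = [("2024", "A", 100), ("2024", "B", 200)]   # (year, product, amount)
plan    = [("2024", "A", 120), ("2024", "B", 180)]

def multiprovider_union(providers):
    """Yield the union of all providers' rows, tagging each row's origin.

    The 'MultiProvider' holds no data of its own; it only combines the
    rows of the providers it is based on.
    """
    for name, rows in providers.items():
        for row in rows:
            yield (name, *row)

result = list(multiprovider_union({"ACTUALS": actuals, "PLAN": plan}))
print(len(result))  # 4 rows: a union of both providers, not a join
```

Note that a union stacks the rows of all providers; it does not match rows against each other the way a join would.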
Q18. How can you convert an info package group into the process chain?
Ans: You can convert an info package group into a process chain by double-clicking on the info package group and then clicking the 'Process Chain Maint' button, where you enter a name and description; this inserts the individual info packages automatically.