9th October, 2020
Ans: There are two types of loading: normal loading and bulk loading. In normal loading, records are loaded one at a time and a database log is written for each, so loading data to the target takes comparatively longer. In bulk loading, many records are loaded into the target database at a time, bypassing the database log, so loading takes less time.
Ans: The aggregator stores data in the aggregate cache until it completes aggregate calculations. When you run a session that uses an aggregator transformation, the Informatica server creates index and data caches in memory to process the transformation. If the Informatica server requires more space, it stores overflow values in cache files.
Ans: A transformation is a repository object that generates, modifies, or passes data. The Designer provides a set of transformations that perform specific functions. For example, an Aggregator transformation performs calculations on groups of data. Below are the various transformations available in Informatica:
Ans: The surrogate key is a substitution for the natural primary key. It is just a unique identifier or number for each row that can be used as the primary key of the table.
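As a minimal Python sketch (not Informatica itself), surrogate keys are typically assigned from a sequence, independent of the natural key; the field names here are hypothetical:

```python
from itertools import count

def assign_surrogate_keys(rows, key_gen=None):
    """Attach a sequence-generated surrogate key to each dimension row."""
    key_gen = key_gen or count(start=1)
    return [{"customer_sk": next(key_gen), **row} for row in rows]

# "customer_code" is the natural key; "customer_sk" is the surrogate key.
dim = assign_surrogate_keys([{"customer_code": "C-100"},
                             {"customer_code": "C-200"}])
# dim[0]["customer_sk"] == 1, dim[1]["customer_sk"] == 2
```

In Informatica itself this role is usually played by a Sequence Generator transformation feeding the key port of the target.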
Ans: Informatica Repository: The Informatica repository is at the center of the Informatica suite. You create a set of metadata tables within the repository database that the Informatica application and tools access. The Informatica client and server access the repository to save and retrieve metadata.
Ans: Join data originating from the same source database. Filter records when the Informatica server reads source data. Specify an outer join rather than the default inner join. Specify sorted ports. Select only distinct values from the source. Create a custom query to issue a special SELECT statement for the Informatica server to read source data.
Ans: Specify the target load order based on the source qualifiers in a mapping. If you have multiple source qualifiers connected to multiple targets, you can designate the order in which the Informatica server loads data into the targets.
Ans: The union transformation is a multiple-input group transformation that can be used to merge data from various sources (or pipelines). This transformation works just like UNION ALL statement in SQL, which is used to combine the result set of two SELECT statements.
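A small Python sketch of the UNION ALL semantics described above (sample data is made up): the Union transformation merges pipelines without removing duplicates, unlike a plain SQL UNION.

```python
# Two "pipelines" of (order_id, amount) rows; one row appears in both.
us_orders = [("O1", 100), ("O2", 250)]
eu_orders = [("O3", 80), ("O2", 250)]

merged = us_orders + eu_orders           # UNION ALL: all 4 rows survive
deduped = list(dict.fromkeys(merged))    # plain UNION would keep only 3
```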
Ans: Yes, joiner transformation can be used to join data from two flat file sources.
Ans: This transformation is used to look up data in a flat file or a relational table, view, or synonym. It compares lookup transformation ports (input ports) to the source column values based on the lookup condition. The returned values can then be passed to other transformations.
Ans: The Normalizer transformation, which is used to normalize the data, since COBOL sources often consist of denormalized data.
Ans: The connected lookup takes input values directly from other transformations in the pipeline.
An unconnected lookup doesn't take inputs directly from any other transformation, but it can be used in any transformation (such as an Expression) and can be invoked as a function using a :LKP expression. So, an unconnected lookup can be called multiple times in a mapping.
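The function-call behavior of an unconnected lookup can be sketched in Python (the lookup data and field names here are hypothetical): one call returns one value, and the same lookup can be invoked repeatedly from a single expression.

```python
COUNTRY_LOOKUP = {"US": "United States", "DE": "Germany"}  # stand-in lookup table

def lkp_country(code, default="Unknown"):
    """One value per call, like an unconnected lookup's single return port."""
    return COUNTRY_LOOKUP.get(code, default)

row = {"ship_to": "US", "bill_to": "FR"}
# The same lookup invoked twice from one "expression":
enriched = {field: lkp_country(code) for field, code in row.items()}
# enriched == {"ship_to": "United States", "bill_to": "Unknown"}
```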
Ans: Three types of data pass between the Informatica server and the stored procedure: input/output parameters, return values, and the status code.
Ans: The status code provides error handling for the Informatica server during the session. The stored procedure issues a status code that notifies whether or not it completed successfully. This value cannot be seen by the user; it is used only by the Informatica server to determine whether to continue running the session or stop.
Ans: When you add a relational or a flat file source definition to a mapping, you need to connect it to a source qualifier transformation. The source qualifier transformation represents the records that the Informatica Server reads when it runs a session.
Ans: There are three types of dimensions available:
Ans: A mapplet is a set of transformations that you build in the Mapplet Designer and can reuse in multiple mappings.
A session is a set of instructions that tells the Informatica server how to move data to the target.
A batch is a set of tasks that may include one or more tasks (sessions, event waits, emails, commands, etc.).
Ans: Dimensions that change over time are called Slowly Changing Dimensions (SCD).
Slowly Changing Dimension Type 1: keeps only current records.
Slowly Changing Dimension Type 2: keeps current records plus full historical records.
Slowly Changing Dimension Type 3: keeps current records plus one previous record.
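The three behaviors can be sketched in plain Python for one changed attribute (a customer's city); the table layouts and values are illustrative only, not Informatica output:

```python
import copy

base = {"C1": {"city": "Austin"}}

# Type 1: overwrite in place -- only the current value survives.
t1 = copy.deepcopy(base)
t1["C1"]["city"] = "Boston"

# Type 2: expire the current row and append a new current row -- full history.
t2 = [{"key": "C1", "city": "Austin", "current": True}]
t2[-1]["current"] = False
t2.append({"key": "C1", "city": "Boston", "current": True})

# Type 3: keep the current value plus one previous value in extra columns.
t3 = copy.deepcopy(base)
t3["C1"]["prev_city"] = t3["C1"]["city"]
t3["C1"]["city"] = "Boston"
```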
Ans: Active Transformation: An active transformation can change the number of rows that pass through it from source to target i.e it eliminates rows that do not meet the condition in transformation.
Passive Transformation: A passive transformation does not change the number of rows that pass through it i.e it passes all rows through the transformation.
Ans: Aggregator transformation is an Active and Connected transformation. This transformation is useful to perform calculations such as averages and sums (mainly to perform calculations on multiple rows or groups).
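The group-wise calculation an Aggregator performs is analogous to a SQL GROUP BY; here is a minimal Python sketch with made-up sales data:

```python
from collections import defaultdict

sales = [("east", 100), ("west", 50), ("east", 25)]

totals = defaultdict(int)
for region, amount in sales:   # one pass, rows grouped by region
    totals[region] += amount
# dict(totals) == {"east": 125, "west": 50}
```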
Ans: Expression transformation is a Passive and Connected transformation. This can be used to calculate values in a single row before writing to the target.
Ans: Filter transformation is an Active and Connected transformation. This can be used to filter out rows in a mapping that do not meet the condition.
Ans: Joiner Transformation is an Active and Connected transformation. This can be used to join two sources coming from two different locations or from the same location.
Ans: Lookup transformations can access data from relational tables that are not sources in the mapping.
Ans: Normalizer Transformation is an Active and Connected transformation. It is used mainly with COBOL sources where most of the time data is stored in a denormalized format. Also, Normalizer transformation can be used to create multiple rows from a single row of data.
Ans: Rank transformation is an Active and Connected transformation. It is used to select the top or bottom rank of data.
Ans: Router transformation is an Active and Connected transformation. It is similar to filter transformation. The only difference is, filter transformation drops the data that do not meet the condition whereas the router has an option to capture the data that do not meet the condition. It is useful to test multiple conditions.
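The Filter-versus-Router difference described above can be sketched in Python (conditions and data are made up): the filter drops non-matching rows, while the router sends every row to a matching group or to a default group.

```python
rows = [10, -3, 0, 42]

# Filter: non-matching rows are simply dropped.
filtered = [r for r in rows if r > 0]              # [10, 42]

# Router: every row lands in some group; non-matches go to "default".
groups = {"positive": [], "zero": [], "default": []}
for r in rows:
    if r > 0:
        groups["positive"].append(r)
    elif r == 0:
        groups["zero"].append(r)
    else:
        groups["default"].append(r)                # captured, not dropped
```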
Ans: Sorter transformation is a Connected and an Active transformation. It allows you to sort data in either ascending or descending order according to a specified field.
Ans: Session log, workflow log, errors log, and bad file (reject file).
Ans: A stored procedure transformation is an important tool for populating and maintaining databases.
Ans: A dynamic cache decreases performance in comparison to a static cache, because it must check and update the cache for every incoming row. A static cache performs no such checks; it simply inserts the data as many times as it comes.
A mapping is a set of source and target definitions linked by transformation objects that define the rules for data transformation.
A session is a set of instructions that describes how and when to move data from source to target.
Ans: pmcmd is used to start a batch.
Ans: The Informatica server follows instructions coded into Update Strategy transformations within the session mapping to determine how to flag records for insert, update, delete, or reject.
Ans: The PowerCenter repository allows you to share metadata across repositories to create a data mart domain.
Ans: A parameter file is a text file created with an editor such as WordPad or Notepad. You can define the following values in a parameter file:
Ans: Static cache, dynamic cache, persistent cache, shared cache, and recache (recache from database).
| Connected Lookup | Unconnected Lookup |
| --- | --- |
| Participates in the dataflow and receives input directly from the pipeline. | Receives input values from the result of a :LKP expression in another transformation. |
| Can use both dynamic and static cache. | Cache cannot be dynamic. |
| Can return more than one column value (output ports). | Can return only one column value, i.e. the return port. |
| Caches all lookup columns. | Caches only the lookup output ports in the lookup conditions and the return port. |
| Supports user-defined default values (i.e. a value to return when the lookup condition is not satisfied). | Does not support user-defined default values. |
Ans: The centralized table in a star schema is called a fact table. Fact tables are of three types: additive, non-additive, and semi-additive.
Ans: According to Bill Inmon, known as the father of data warehousing, “A data warehouse is a subject-oriented, integrated, time-variant, nonvolatile collection of data in support of management’s decision-making process.”
Ans: After the load manager performs validations for the session, it creates the DTM process. The DTM process is the second process associated with the session run.
Ans: A transformation is a repository object that generates, modifies, or passes data. The Designer provides a set of transformations that perform specific functions.
Ans: Lookup transformation is Passive and it can be both Connected and Unconnected. It is used to look up data in a relational table, view, or synonym. The lookup definition can be imported either from source or from target tables.
Ans: A mapplet consists of a set of transformations that are reusable.
A reusable transformation is a single transformation that can be reused in multiple mappings.
Ans: Update Strategy transformation is an Active and Connected transformation. It is used to update data in the target table, either to maintain a history of data or to keep only recent changes. You can specify how to treat source rows: insert, update, delete, or data driven.
| Router Transformation | Filter Transformation |
| --- | --- |
| Divides the incoming records into multiple groups based on conditions. Such groups can be mutually inclusive (different groups may contain the same record). | Restricts or blocks the incoming recordset based on one given condition. |
| Has a default group: records that satisfy no condition are captured there rather than dropped. | Does not have a default group. If a record does not match the filter condition, the record is blocked. |
| Acts like a CASE..WHEN statement in SQL (or a switch()..case statement in C). | Acts like a WHERE clause in SQL (or an if statement in C). |
Ans: Informatica processes the source data row-by-row. By default, every row is marked to be inserted in the target table. If the row has to be updated/inserted based on some logic Update Strategy transformation is used. The condition can be specified in Update Strategy to mark the processed row for an update or insert.
The following options are available for update strategy :
| Option | Meaning |
| --- | --- |
| DD_INSERT | Flags the row for insertion. The equivalent numeric value of DD_INSERT is 0. |
| DD_UPDATE | Flags the row for update. The equivalent numeric value of DD_UPDATE is 1. |
| DD_DELETE | Flags the row for deletion. The equivalent numeric value of DD_DELETE is 2. |
| DD_REJECT | Flags the row for rejection. The equivalent numeric value of DD_REJECT is 3. |
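In essence, the DD_* flags are just numeric codes attached to each row; a data-driven session reads the flag to decide the DML operation. A small Python sketch (the row fields and the routing rule are hypothetical, chosen only to illustrate flagging):

```python
# Numeric values of the Update Strategy constants.
DD_INSERT, DD_UPDATE, DD_DELETE, DD_REJECT = 0, 1, 2, 3

def flag_row(row):
    """Hypothetical strategy: reject bad amounts, update known keys, else insert."""
    if row.get("amount", 0) < 0:
        return DD_REJECT
    return DD_UPDATE if row.get("exists") else DD_INSERT

# flag_row({"amount": -1}) -> 3 (reject), flag_row({}) -> 0 (insert)
```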
Ans: Aggregator performance improves dramatically if records are sorted before passing to the aggregator and the "sorted input" option under aggregator properties is checked. The recordset should be sorted on those columns that are used in Group By operation.
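A Python sketch of why sorted input helps: `itertools.groupby` only groups consecutive equal keys, so with pre-sorted data each group can be aggregated and emitted as soon as the key changes, instead of caching every group until the end (data here is made up):

```python
from itertools import groupby

# Sort on the Group By column first -- the same precondition as "sorted input".
sales = sorted([("west", 50), ("east", 100), ("east", 25)])

totals = {region: sum(amount for _, amount in grp)
          for region, grp in groupby(sales, key=lambda r: r[0])}
# totals == {"east": 125, "west": 50}
```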
Ans: Informatica Lookups can be cached or uncached (No cache). And Cached lookup can be either static or dynamic. A static cache is one that does not modify the cache once it is built and it remains the same during the session run. On the other hand, A dynamic cache is refreshed during the session run by inserting or updating the records in the cache based on the incoming source data. By default, the Informatica cache is a static cache.
Lookup caches can also be divided into persistent or non-persistent, based on whether Informatica retains the cache even after the completion of the session run or deletes it.
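The static-versus-dynamic distinction can be sketched in Python (the data and the insert-or-update rule are illustrative only): a static cache is a snapshot that is only read, while a dynamic cache is updated as source rows arrive, so later rows see earlier inserts.

```python
target = {"C1": "Alice"}                  # pretend target table at session start

static_cache = dict(target)               # snapshot: never modified during the run

dynamic_cache = dict(target)
def process(row, cache):
    """Insert unseen keys into the cache as rows flow through."""
    if row["id"] not in cache:
        cache[row["id"]] = row["name"]
        return "insert"
    return "update"

ops = [process(r, dynamic_cache) for r in
       [{"id": "C2", "name": "Bob"}, {"id": "C2", "name": "Bob"}]]
# ops == ["insert", "update"]: the second C2 row saw the first one's insert
```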
Ans: A target table can be updated without using 'Update Strategy'. For this, we need to define the key in the target table at the Informatica level and then we need to connect the key and the field we want to update in the mapping Target. In the session level, we should set the target property as "Update as Update" and check the "Update" check-box.
Ans: We can configure a Lookup transformation to cache the underlying lookup table. In the case of static or read-only lookup cache, the Integration Service caches the lookup table at the beginning of the session and does not update the lookup cache while it processes the Lookup transformation.
In the case of a dynamic lookup cache, the Integration Service dynamically inserts or updates data in the lookup cache and passes the data to the target. The dynamic cache is synchronized with the target.
Let's assume we have a target table "Customer" with fields as "Customer ID", "Customer Name" and "Customer Address". Suppose we want to update "Customer Address" without an Update Strategy. Then we have to define "Customer ID" as the primary key in the Informatica level and we will have to connect Customer ID and Customer Address fields in the mapping. If the session properties are set correctly as described above, then the mapping will only update the customer address field for all matching customer IDs.
Ans: This is because we can select the "distinct" option in the sorter property.
When the Sorter transformation is configured to treat output rows as distinct, it assigns all ports as part of the sort key. The Integration Service discards duplicate rows compared during the sort operation. The number of Input Rows will vary as compared with the Output rows and hence it is an Active transformation.
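A one-line Python sketch of this behavior (sample rows are made up): sorting on all columns while discarding duplicates can shrink the row count, which is what makes the distinct sorter active.

```python
rows = [("b", 2), ("a", 1), ("b", 2)]

# Sort on all "ports" and discard duplicate rows: 3 rows in, 2 rows out.
distinct_sorted = sorted(set(rows))
```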
Ans: From Informatica 9.x, the Lookup transformation can be configured as an "Active" transformation.
Ans: When we issue the STOP command on the executing session task, the Integration Service stops reading data from the source. It continues processing, writing, and committing the data to targets. If the Integration Service cannot finish processing and committing data, we can issue the abort command.
In contrast, the ABORT command has a timeout period of 60 seconds. If the Integration Service cannot finish processing and committing data within the timeout period, it kills the DTM process and terminates the session.