Informatica Scenario Based Interview Problems with Answers - Part 1
Q 1. How to generate sequence / incremental numbers in Informatica?
Solution 1: In the expression transformation, create a variable port and increment it by 1. Then assign the variable port to an output port.
In the expression transformation, the ports are:
V_COUNT=V_COUNT+1
O_COUNT=V_COUNT
Solution 2: Insert a sequence generator transformation and drag the NEXTVAL port from the sequence generator to the expression transformation. This port will give you the incremental numbers. Set the Start Value property to 1 and the Increment By property to 1 for a series like 1, 2, 3, 4, 5, 6, 7, 8, ...
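For intuition only, the per-row counter behind Solution 1 can be sketched in plain Python; this mirrors what the variable port does on each row and is not Informatica syntax (the column name COL1 and the sample data are assumed):
# A running counter plays the role of the V_COUNT variable port.
rows = [{"COL1": "A"}, {"COL1": "B"}, {"COL1": "C"}]  # assumed sample data

v_count = 0                   # variable port V_COUNT
for row in rows:
    v_count += 1              # V_COUNT = V_COUNT + 1
    row["O_COUNT"] = v_count  # O_COUNT = V_COUNT

print(rows)  # [{'COL1': 'A', 'O_COUNT': 1}, {'COL1': 'B', 'O_COUNT': 2}, {'COL1': 'C', 'O_COUNT': 3}]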
Q 2. Design an Informatica mapping to load the first half of the rows from a relational table into a target.
Solution: For a relational source, open the source qualifier transformation, go to its properties, and override the SQL query, for example:
SELECT * FROM EMP
WHERE ROWNUM <= (SELECT COUNT(*)/2 FROM EMP)
No additional filter is required; connect all ports to the downstream transformations and finally to the target, then run the mapping.
Q 3. Design an Informatica mapping to load the second half of the rows from a relational table into a target.
Solution: For a relational source, open the source qualifier transformation, go to its properties, and override the SQL query, for example:
SELECT * FROM EMP
MINUS
SELECT * FROM EMP
WHERE ROWNUM <= (SELECT COUNT(*)/2 FROM EMP)
No additional filter is required; connect all ports to the downstream transformations and finally to the target, then run the mapping.
Q 4. Design an Informatica mapping to load the first 4 rows from a flat file into a target.
Solution: Since the above query won't work for a flat file source, you have to assign row numbers to each record. Generate the row numbers either with the expression transformation as mentioned above or with a sequence generator transformation. Then pass the output to a filter transformation and specify the filter condition as ROW_NUMBER <= 4.
ID | COL1 | COL2 | ROW_NUMBER |
1 | A | 10 | 1 |
2 | B | 20 | 2 |
3 | C | 30 | 3 |
4 | D | 10 | 4 |
5 | E | 30 | 5 |
6 | F | 10 | 6 |
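As a side note, the assign-row-numbers-then-filter idea can be sketched in plain Python using the sample rows shown above (illustrative only, not Informatica syntax):
# Number each flat-file record, then keep only the rows whose number is <= 4.
rows = [("1", "A", "10"), ("2", "B", "20"), ("3", "C", "30"),
        ("4", "D", "10"), ("5", "E", "30"), ("6", "F", "10")]

numbered = list(enumerate(rows, start=1))      # (ROW_NUMBER, record)
first_four = [record for row_number, record in numbered
              if row_number <= 4]              # filter condition: ROW_NUMBER <= 4

print(first_four)  # the first 4 records go to the target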
Q 5. Design an Informatica mapping to load the last 3 rows from a flat file into a target.
Solution: Consider that the source has the following data.
COLUMN
A
B
C
D
E
Step 1: You have to assign row numbers to each record. Generate the row numbers using the expression transformation as mentioned above and call the generated row-number port O_COUNT. Create a DUMMY output port in the same expression transformation and assign 1 to it, so that the DUMMY output port always returns 1 for each row.
In the expression transformation, the ports are:
V_COUNT=V_COUNT+1
O_COUNT=V_COUNT
O_DUMMY=1
The output of the expression transformation will be:
COL | O_COUNT | O_DUMMY |
A | 1 | 1 |
B | 2 | 1 |
C | 3 | 1 |
D | 4 | 1 |
E | 5 | 1 |
Step 2: Pass the output of the expression transformation to an aggregator and do not specify any group by condition. Create an output port O_TOTAL_RECORDS in the aggregator and assign the O_COUNT port to it. The aggregator returns the last row by default, so its output contains the DUMMY port, which has the value 1, and the O_TOTAL_RECORDS port, which holds the total number of records in the source.
In the aggregator transformation, the ports are:
O_DUMMY
O_COUNT
O_TOTAL_RECORDS=O_COUNT
The output of the aggregator transformation will be:
O_TOTAL_RECORDS | O_DUMMY |
5 | 1 |
Step 3: Pass the outputs of the expression transformation and the aggregator transformation to a joiner transformation and join on the DUMMY port. In the joiner transformation, enable the Sorted Input property; only then can you connect both the expression and the aggregator to the joiner transformation.
In the joiner transformation, the join condition will be:
O_DUMMY (port from the aggregator transformation) = O_DUMMY (port from the expression transformation)
The output of the joiner transformation will be:
COL | O_COUNT | O_TOTAL_RECORDS |
A | 1 | 5 |
B | 2 | 5 |
C | 3 | 5 |
D | 4 | 5 |
E | 5 | 5 |
Step 4: Now pass the output of the joiner transformation to a filter transformation and specify the filter condition as O_TOTAL_RECORDS (port from the aggregator) - O_COUNT (port from the expression) <= 2.
In the filter transformation, the filter condition will be:
O_TOTAL_RECORDS - O_COUNT <= 2
The output of the filter transformation will be:
COL | O_COUNT | O_TOTAL_RECORDS |
C | 3 | 5 |
D | 4 | 5 |
E | 5 | 5 |
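To see how the four steps fit together, here is a small Python sketch that simulates the expression, aggregator, joiner, and filter stages on the sample data above (illustrative only; the port names follow the walkthrough):
# Step 1: expression - running row number plus a constant dummy port.
source = ["A", "B", "C", "D", "E"]
expr_out = [{"COL": col, "O_COUNT": i, "O_DUMMY": 1}
            for i, col in enumerate(source, start=1)]

# Step 2: aggregator with no group-by keeps the last row, so its O_COUNT is the total record count.
agg_out = {"O_DUMMY": 1, "O_TOTAL_RECORDS": expr_out[-1]["O_COUNT"]}

# Step 3: joiner on the dummy port attaches the total count to every detail row.
joined = [dict(row, O_TOTAL_RECORDS=agg_out["O_TOTAL_RECORDS"])
          for row in expr_out if row["O_DUMMY"] == agg_out["O_DUMMY"]]

# Step 4: filter keeps rows where O_TOTAL_RECORDS - O_COUNT <= 2, i.e. the last 3 rows.
last_three = [row for row in joined
              if row["O_TOTAL_RECORDS"] - row["O_COUNT"] <= 2]

print([row["COL"] for row in last_three])  # ['C', 'D', 'E']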
Q 6. Design an Informatica mapping that loads the first record of a flat file into table A, the last record into table B, and the remaining records into table C.
Solution: This is similar to the above problem; the first three steps are the same. In the last step, instead of using a filter transformation, use a router transformation and create two output groups in it.
In the first group, the condition should be O_COUNT=1; connect the corresponding output group to table A. In the second group, the condition should be O_COUNT=O_TOTAL_RECORDS; connect the corresponding output group to table B. Connect the output of the default group to table C.
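Assuming each row carries O_COUNT and O_TOTAL_RECORDS after the joiner, exactly as in the Q 5 sketch, the three router groups can be sketched like this (illustrative only):
# Router logic: first record -> table A, last record -> table B, the rest -> table C.
joined = [{"COL": col, "O_COUNT": i, "O_TOTAL_RECORDS": 5}
          for i, col in enumerate(["A", "B", "C", "D", "E"], start=1)]

table_a = [r for r in joined if r["O_COUNT"] == 1]                        # first group
table_b = [r for r in joined if r["O_COUNT"] == r["O_TOTAL_RECORDS"]]     # second group
table_c = [r for r in joined
           if r["O_COUNT"] != 1 and r["O_COUNT"] != r["O_TOTAL_RECORDS"]] # default group

print([r["COL"] for r in table_a])  # ['A']
print([r["COL"] for r in table_b])  # ['E']
print([r["COL"] for r in table_c])  # ['B', 'C', 'D']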
Q 7. Consider the following source data.
A
B
C
C
B
D
B
Q 7.1 Design a mapping to load all unique products into one table and the duplicate rows into another table.
The first table should contain the following output:
A
D
The second target should contain the following output:
B
B
B
C
C
Solution: Use a sorter transformation and sort the products data. Pass the output to an expression transformation and create a dummy output port O_dummy and assign 1 to it, so that the dummy output port always returns 1 for each row.
The output of the expression transformation will be:
Product | O_dummy |
A | 1 |
B | 1 |
B | 1 |
B | 1 |
C | 1 |
C | 1 |
D | 1 |
Pass the output of the expression transformation to an aggregator transformation and check group by on the product port. In the aggregator, create an output port O_count_of_each_product and write the expression COUNT(product).
The output of the aggregator will be:
Product | O_count_of_each_product |
A | 1 |
B | 3 |
C | 2 |
D | 1 |
Now pass the outputs of the expression transformation and the aggregator transformation to a joiner transformation and join on the product port. In the joiner transformation, enable the Sorted Input property; only then can you connect both the expression and the aggregator to the joiner transformation.
The output of the joiner will be:
Product | O_dummy | O_count_of_each_product |
A | 1 | 1 |
B | 1 | 3 |
B | 1 | 3 |
B | 1 | 3 |
C | 1 | 2 |
C | 1 | 2 |
D | 1 | 1 |
Now pass the output of the joiner to a router transformation, create one group, and specify the group condition as O_dummy=O_count_of_each_product. Connect this group to one table and connect the output of the default group to the other table.
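A compact Python sketch of the same sort / count / join / route flow, using the product list from Q 7 (illustrative only):
# Count occurrences per product, then route: count of 1 -> unique table, otherwise -> duplicates table.
from collections import Counter

products = sorted(["A", "B", "C", "C", "B", "D", "B"])   # sorter transformation
counts = Counter(products)                               # aggregator: count per product

unique_table    = [p for p in products if counts[p] == 1]  # group O_dummy = O_count_of_each_product
duplicate_table = [p for p in products if counts[p] > 1]   # default group

print(unique_table)     # ['A', 'D']
print(duplicate_table)  # ['B', 'B', 'B', 'C', 'C']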
Q 7.2 Design an Informatica mapping to load original and duplicate records into two different tables (separating duplicate and non-duplicate rows).
The first table should contain the following output:
A
B
C
D
The second table should contain the following output:
B
B
C
Solution 1: Use a sorter transformation and sort the products data. Pass the output to an expression transformation and create a variable port, V_Current_product, and assign the product port to it. Then create a V_Count port and in the expression editor write V_Count=IIF(V_Current_product=V_Previous_product,V_Count+1,1). Create one more variable port, V_Previous_product, and assign the product port to it. Finally, create an output port O_Count and assign the V_Count port to it.
In the expression transformation, the ports are:
Product
V_Current_product=product
V_Count=IIF(V_Current_product=V_Previous_product,V_Count+1,1)
V_Previous_product=product
O_Count=V_Count
The output of the expression transformation will be:
Product | O_Count |
A | 1 |
B | 1 |
B | 2 |
B | 3 |
C | 1 |
C | 2 |
D | 1 |
Now pass the output of the expression transformation to a router transformation, create one group, and specify the condition as O_Count=1. Connect this group to one table and connect the output of the default group to the other table.
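The previous-row comparison in Solution 1 can be sketched in Python as follows; the counter restarts whenever the product changes, mirroring the V_Count expression above (illustrative only):
# Walk the sorted products; restart the counter whenever the product changes.
products = sorted(["A", "B", "C", "C", "B", "D", "B"])

original_table, duplicate_table = [], []
v_previous_product, v_count = None, 0
for product in products:
    v_count = v_count + 1 if product == v_previous_product else 1  # V_Count logic
    v_previous_product = product                                   # updated only after the comparison
    (original_table if v_count == 1 else duplicate_table).append(product)

print(original_table)   # ['A', 'B', 'C', 'D']
print(duplicate_table)  # ['B', 'B', 'C']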
Solution 2: Another approach is to use an aggregator. First sort the data with a sorter, then connect it to an aggregator transformation, group by product, and add an output port COUNT_RECORD with the expression COUNT(Product).
Then, with a router transformation, create two groups named DUPLICATE and ORIGINAL and give them the group conditions COUNT_RECORD>1 and COUNT_RECORD=1 respectively.
In the aggregator transformation, the ports are:
Product
COUNT_RECORD=COUNT(Product)
In the router transformation, the group conditions are:
DUPLICATE group (COUNT_RECORD>1)
ORIGINAL group (COUNT_RECORD=1)
Connect table 1 to the DUPLICATE group and table 2 to the ORIGINAL group. There you go: duplicate and original data separated.
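For comparison, a Python sketch of the aggregator-plus-router flow exactly as described in Solution 2; note that because the aggregator groups by product, each router group receives one row per product (illustrative only):
# Aggregator output: one row per product with its count; the router splits on that count.
from collections import Counter

products = sorted(["A", "B", "C", "C", "B", "D", "B"])
agg_rows = [{"Product": p, "COUNT_RECORD": c}
            for p, c in sorted(Counter(products).items())]

duplicate_group = [r for r in agg_rows if r["COUNT_RECORD"] > 1]   # DUPLICATE group
original_group  = [r for r in agg_rows if r["COUNT_RECORD"] == 1]  # ORIGINAL group

print(duplicate_group)  # B (count 3) and C (count 2)
print(original_group)   # A and D (count 1 each)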
For more questions, continue to Part 2.