Handling known and unknown errors at runtime and performing a user-defined action in response is called error handling.

→ We need to set an error level (severity) for each error we want to handle:

.SET ERRORLEVEL <error code> SEVERITY <integer>

.SET ERRORLEVEL 6784 SEVERITY 4

.SET ERRORLEVEL 807 SEVERITY 8

.SET ERRORLEVEL 2121 SEVERITY 12

.SET ERRORLEVEL 5678 (or) UNKNOWN SEVERITY 16



HIGHEST RETURN CODE = 0 means the script succeeded.

/* .RUN FILE = c:\BTEQ\conn.txt; */


.SET SESSIONS 4;

DATABASE vinayaka;

.SET ERRORLEVEL 3807 SEVERITY 4

.SET ERRORLEVEL 6066 SEVERITY 8

SELECT * FROM party22;

INSERT INTO party VALUES (1, 'vinay', 34567865);

.IF ERRORLEVEL = 4 THEN INSERT INTO Translog1 VALUES ('vubat1', 'Table is not created');

INSERT INTO party22 VALUES (12345678912345678, 'vinay');

.IF ERRORLEVEL = 8 THEN .REMARK 'Range invalid, do not load the table';
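The error-level logic above can be sketched in Python: each error code is mapped to a severity, and the script's final status is the highest severity seen, mimicking BTEQ's HIGHEST RETURN CODE (the step names and the default-to-16 for unknown codes follow the notes; the function and dictionary names are illustrative).

```python
# Sketch of BTEQ-style error handling: each Teradata error code is
# assigned a severity; the script's status is the highest severity
# (return code) seen. 0 means the script succeeded.

ERROR_SEVERITY = {
    3807: 4,   # .SET ERRORLEVEL 3807 SEVERITY 4
    6066: 8,   # .SET ERRORLEVEL 6066 SEVERITY 8
}

def run_script(statements, severity_map):
    """Run (name, error_code_or_None) steps and return the highest
    return code, mimicking BTEQ's HIGHEST RETURN CODE."""
    highest = 0
    for name, error_code in statements:
        if error_code is not None:
            # UNKNOWN errors get severity 16, as in the notes
            severity = severity_map.get(error_code, 16)
            highest = max(highest, severity)
    return highest

steps = [
    ("SELECT * FROM party22", 3807),  # object missing -> severity 4
    ("INSERT INTO party", None),      # succeeds
]
print(run_script(steps, ERROR_SEVERITY))  # → 4
```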





  1. What is the main usage of BTEQ?
  2. There is a file with 100 records; I want to load 60 records without loading the first 20 records. How?



Ans: SKIP = 20, REPEAT = 60
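The SKIP/REPEAT behaviour can be illustrated as a slice: skip the first 20 records, then take the next 60 (record names here are made up for the example).

```python
# SKIP=20, REPEAT=60 on a 100-record file: drop the first 20
# records, then load the next 60.

def select_records(records, skip, repeat):
    return records[skip:skip + repeat]

records = [f"rec{i}" for i in range(1, 101)]  # 100 records
loaded = select_records(records, skip=20, repeat=60)
print(len(loaded), loaded[0], loaded[-1])  # → 60 rec21 rec80
```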

  3. There is a table with 10,000 records, but I require only the top 10 income persons' details in the file. How?

Ans: Export the result of a TOP query:

SELECT TOP 10 *
FROM <table>
ORDER BY income DESC;
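The same top-10 selection sketched in Python (the row layout and the `income` column name are illustrative assumptions):

```python
# Equivalent of SELECT TOP 10 ... ORDER BY income DESC:
# sort descending by income, keep the first 10 rows.

def top_n_by_income(rows, n=10):
    return sorted(rows, key=lambda r: r["income"], reverse=True)[:n]

# 10,000 illustrative records
rows = [{"name": f"p{i}", "income": i * 1000} for i in range(1, 10001)]
top10 = top_n_by_income(rows)
print(len(top10), top10[0]["income"])  # → 10 10000000
```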



4) How do you import multiple files and multiple tables?

Ans: By writing multiple .IMPORT statements in one script we perform the operation:

.LOGON

.IMPORT INFILE <file1>

INSERT/UPDATE <statement1>

.IMPORT INFILE <file2>

INSERT/UPDATE <statement2>
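The pattern above, one import-plus-insert block per (file, table) pair, can be sketched in Python; the file contents and table names are invented for the example.

```python
# Sketch of importing multiple files into multiple tables: each
# (file, table) pair gets its own import + insert step, mirroring
# multiple .IMPORT INFILE / INSERT blocks in one BTEQ script.

tables = {"party": [], "orders": []}  # illustrative target tables

def import_file(lines, table):
    """Parse comma-separated records and insert them into `table`."""
    for line in lines:
        tables[table].append(line.strip().split(","))

file1 = ["1,vinay", "2,ravi"]      # stands in for <file1>
file2 = ["101,book", "102,pen"]    # stands in for <file2>

import_file(file1, "party")   # .IMPORT INFILE file1 ... INSERT INTO party
import_file(file2, "orders")  # .IMPORT INFILE file2 ... INSERT INTO orders
print(len(tables["party"]), len(tables["orders"]))  # → 2 2
```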


5) General errors in the lab?

1. The file specified does not exist

Ans: If the input file path or script path is wrong, we get this type of error.

2. Record length does not match the structure (data parcel does not match)

Ans: We get this in the below scenarios:

a) If multiple end-of-file marks are present in the file

b) If the USING clause structure (number of columns and data types) does not match the record

c) If the input record length does not match


6. the values specified




  1. It is a bulk-load facility which loads large volumes of data
  2. Compared to all other utilities, it runs the FASTEST
  3. It runs in 2 phases
  4. It captures the error records into error tables
  5. It is fully automatic, restartable, and checkpoint-configurable
  6. It loads only one table at a time, and the table should be empty
  7. It does not support duplicate records [NUPI or MULTISET duplicates]
  8. It processes the data block by block
  9. It performs the INSERT operation between BEGIN LOADING and END LOADING [before BEGIN LOADING we can write DROP, DELETE, CREATE, etc. commands]
  10. It supports input module programming [INMOD]
  11. A maximum of 15 FastLoad scripts can execute at a time



  1. Start → Run → fastload
  2. Start → Programs → Teradata Client → Teradata FastLoad



It uses two error tables and one log table.


a) Error Table One

Generally the below errors move into error table one:

  1. Data conversion errors
  2. Constraint violations
  3. Unavailable-AMP errors

Structure: ErrorCode, ErrorFieldName, DataParcel




b) Error Table Two

If the table has a UPI and we try to store duplicate records, those records move into error table two.

Structure: similar to the target table



→ FastLoad internally takes a log table (fastlog) for implementing restartability.

→ Whenever a script fails, the system stores the failure point in the fastlog table.

→ After rectifying the error, if we restart the FastLoad script, it starts from the last point stored in the fastlog table.

→ Once the script executes successfully, the content in the fastlog table related to the target table is deleted.

→ This table is available in the sysadmin database.
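The fastlog bookkeeping above can be sketched as a small state machine: a failure records the last good position per target table, a restart resumes from it, and a successful run deletes the entry (table names and record numbers here are illustrative).

```python
# Sketch of fastlog restartability: failure point stored per target
# table; restart resumes after it; success clears the entry.

fastlog = {}  # target table -> last good record number

def fail_at(table, record_no):
    fastlog[table] = record_no  # system stores the failure point

def restart_point(table):
    return fastlog.get(table, 0) + 1  # no entry -> start at record 1

def finish(table):
    fastlog.pop(table, None)  # success deletes the fastlog content

fail_at("vinayaka.party", 9_000_000)
print(restart_point("vinayaka.party"))  # → 9000001 (resumes after failure)
finish("vinayaka.party")
print(restart_point("vinayaka.party"))  # → 1 (fresh run starts at record 1)
```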




Number of records in the source file = Number of records loaded into the target table

+ Number of records moved to error table 1

+ Number of records moved to error table 2

+ Number of duplicate records counted in the log file
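This reconciliation can be written as a simple check: the source count must equal target rows plus both error-table counts plus dropped duplicates (the counts below are made-up numbers for illustration).

```python
# FastLoad record reconciliation: every source record must be
# accounted for by the target table, an error table, or the
# duplicate count in the log.

def reconciles(source, target, et1, et2, duplicates):
    return source == target + et1 + et2 + duplicates

print(reconciles(source=100, target=90, et1=4, et2=3, duplicates=3))  # → True
print(reconciles(source=100, target=90, et1=4, et2=3, duplicates=2))  # → False
```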






.SET RECORD <record format>

<CREATE/DROP/DELETE commands>

BEGIN LOADING <database name>.<table name>   → phase 1 starts

ERRORFILES <error table1>, <error table2>

CHECKPOINT <integer>;

<INSERT statement>

END LOADING;   → phase 2 starts




Real-time usages of FASTLOAD

  1. To load data into an empty table at the time of the initial load
  2. To load data into stage tables [truncate-and-load tables]


Check point

  1. A checkpoint is either record-based or time-based
  2. We need to take checkpoints such that one happens every 10 to 15 minutes
  3. It provides better restartability
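Turning the 10-to-15-minute guideline into a record-based checkpoint is simple arithmetic; the load rate below (rows per second) is an assumed figure, not from the notes.

```python
# Pick a record-based checkpoint interval so one checkpoint lands
# roughly every `minutes` minutes, given an assumed load rate.

def checkpoint_interval(rows_per_second, minutes=10):
    return rows_per_second * 60 * minutes

print(checkpoint_interval(rows_per_second=5000))              # → 3000000
print(checkpoint_interval(rows_per_second=5000, minutes=15))  # → 4500000
```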


No check point

{1 - 1 crore} → read: 30 minutes

{1 - 1 crore} → write: 25 minutes

→ While writing, the load failed at the 99,99,998th record.

→ After rectifying the error, if you restart, it starts "reading" again from the 1st record.







Check points – Available

Checkpoint for every 10 lakh (10,00,000) records:

{1 - 10L (read)} {1 - 10L (write)} → checkpoint 1

{10L+1 - 20L (read)} {10L+1 - 20L (write)} → checkpoint 2

...

{90L+1 - 1 crore (read)} {90L+1 - 1 crore (write - fail)} → checkpoint 9

→ After rectifying the error, if you restart, it starts reading from record 90L+1 [because of the checkpoint in the log table].
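The restart arithmetic for the scenario above: with a checkpoint every 10 lakh (1,000,000) records, a failure at record 99,99,998 restarts from the record after the last completed checkpoint instead of record 1 (the function name is illustrative).

```python
# Restart arithmetic: without a checkpoint, restart from record 1;
# with one, restart just after the last completed checkpoint.

def restart_record(failed_at, checkpoint=None):
    if not checkpoint:
        return 1  # no checkpoint: re-read from the first record
    return (failed_at // checkpoint) * checkpoint + 1

print(restart_record(9_999_998))                        # → 1
print(restart_record(9_999_998, checkpoint=1_000_000))  # → 9000001
```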



If records < 41c, then checkpoint = 1,00,000

If records > 41c, then checkpoint = 50,000
If records>41c then check point 50000