Monday, February 04, 2013
14 Good design tips in DataStage
1) When you need to run the same set of jobs again and again, it is better to create a Sequence job containing all of those jobs. Running that one sequence runs all the jobs, and you can arrange the order in which they run to suit your requirement.
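If you need to drive the same kind of sequence from outside the Designer, the idea can also be scripted. Below is a minimal Python sketch that runs a list of jobs one after another through the dsjob command-line client and stops at the first failure; the project and job names are placeholders, and the exit-code convention noted in the comments should be checked against your DataStage release.

```python
# Minimal sketch: run DataStage jobs one after another via the dsjob CLI.
# Project and job names are placeholders; assumes dsjob is on the PATH.
import subprocess
import sys

PROJECT = "MY_PROJECT"                                # hypothetical project name
JOBS = ["LoadCustomers", "LoadOrders", "BuildMart"]   # hypothetical job names

for job in JOBS:
    # -jobstatus makes dsjob wait for the job and return an exit code derived
    # from the job status (commonly 1 = finished OK, 2 = finished with warnings).
    result = subprocess.run(["dsjob", "-run", "-jobstatus", PROJECT, job])
    if result.returncode not in (1, 2):
        print(f"Job {job} failed (exit code {result.returncode}); stopping the sequence.")
        sys.exit(1)
    print(f"Job {job} finished OK.")
```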
2) If you are using a Copy or a Filter stage immediately before or immediately after a Transformer stage, you are reducing efficiency by adding an unnecessary stage: a Transformer can do the work of both a Copy stage and a Filter stage.
3) Use Sort stages instead of Remove Duplicates stages where possible. The Sort stage has more grouping options and key-change indicator options.
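To see what this sort-based de-duplication amounts to, here is an illustrative Python sketch (not DataStage code): sort the rows on the key, then keep only the first row of each key group. The column names and sample values are made up.

```python
# Illustrative only: sort on the key, then keep the first row of each
# key group - the record a "remove duplicates" operation would retain.
from itertools import groupby
from operator import itemgetter

rows = [
    {"cust_id": 2, "amount": 75},   # made-up sample data
    {"cust_id": 1, "amount": 50},
    {"cust_id": 2, "amount": 30},
]

# Sort on the grouping key (what the Sort stage does on its key columns)
rows.sort(key=itemgetter("cust_id"))

# Keep the first record of each key group (the "retain first" behaviour)
deduped = [next(group) for _, group in groupby(rows, key=itemgetter("cust_id"))]

print(deduped)   # one row per cust_id
```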
4) Turn off Runtime Column propagation wherever it’s not required.
5) Use Modify, Filter, Aggregator, Column Generator, etc. stages instead of a Transformer only when the anticipated volumes are high and performance becomes a problem; otherwise use the Transformer. It is much easier to code a Transformer than a Modify stage.
6) Avoid propagating unnecessary metadata between stages. Use the Modify stage to drop it; note that Modify drops metadata only when the columns are explicitly listed in a DROP clause.
7) Add reject links wherever rejected records need reprocessing or where you think considerable data loss may happen. At a minimum, keep reject handling on Sequential File stages and on stages that write to databases.
8) Use an ORDER BY clause when a database stage feeds a Join. The intention is to use the database's power for sorting instead of DataStage resources. Keep the Join partitioning as Auto, and when you rely on the database ORDER BY, place a Sort stage between the database stage and the Join with its key mode set to "Don't Sort (Previously Sorted)" so DataStage knows the data is already ordered.
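The reasoning behind pushing the sort to the database is that a join over inputs already ordered on the key is a simple merge. The Python sketch below illustrates that idea; the table and column names are invented, and to keep it short it assumes at most one row per key on each side.

```python
# Illustrative sketch: an inner merge join over two inputs that are already
# sorted on the join key, so no extra sort is needed here - the same reason
# a "previously sorted" hint helps a Join stage fed by an ORDER BY.

def merge_join(left, right, key):
    """Inner-join two lists of dicts pre-sorted on `key` (one row per key)."""
    out, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        lk, rk = left[i][key], right[j][key]
        if lk < rk:
            i += 1
        elif lk > rk:
            j += 1
        else:
            out.append({**left[i], **right[j]})
            i += 1
            j += 1
    return out

# Both inputs assumed sorted on cust_id (e.g. via ORDER BY in the source SQL)
orders    = [{"cust_id": 1, "amt": 10}, {"cust_id": 2, "amt": 20}]
customers = [{"cust_id": 1, "name": "A"}, {"cust_id": 3, "name": "C"}]
print(merge_join(orders, customers, "cust_id"))
```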
9) When doing outer joins, you can use a dummy (constant) column for the null check instead of fetching a real column from the table just to test whether the match succeeded.
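As a small illustration of the dummy-column idea: instead of carrying a real (possibly wide) reference column only to test it for NULL after the outer join, attach a constant flag on the reference side and test that. The Python sketch below mimics this with a left outer lookup; all names and values are made up.

```python
# Illustrative sketch: left outer join where the reference side carries only
# a constant "dummy" flag; a missing flag after the join means "no match",
# so no real reference column has to be fetched just for the NULL check.

source_rows = [{"cust_id": 1}, {"cust_id": 2}]    # made-up driving data
reference_keys = {1, 3}                           # made-up reference keys

# Reference side reduced to key -> constant dummy (instead of full columns)
reference = {k: {"dummy": 1} for k in reference_keys}

for row in source_rows:
    match = reference.get(row["cust_id"])         # None = outer-join miss
    row["dummy"] = match["dummy"] if match else None
    if row["dummy"] is None:
        print(f"cust_id {row['cust_id']}: no reference match")
    else:
        print(f"cust_id {row['cust_id']}: matched")
```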
10) Data partitioning is a very important part of parallel job design. It is advisable to leave partitioning as 'Auto' unless you are comfortable choosing partitioning methods yourself, since DataStage stages are designed to behave correctly with Auto partitioning.
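For readers who want to see what a key-based partitioner does when you move away from Auto, here is a small illustrative Python sketch of hash partitioning: every row with the same key value lands on the same partition, which is what key-sensitive operations (join, aggregate, remove duplicates) rely on. The partition count and key name are arbitrary choices for the example.

```python
# Illustrative sketch of hash partitioning: rows with the same key value
# always land in the same partition, which key-sensitive stages rely on.
from collections import defaultdict
import zlib

NUM_PARTITIONS = 4                                    # arbitrary for the example

def partition_of(key, n=NUM_PARTITIONS):
    # Stable hash (zlib.crc32) so the mapping is repeatable across runs
    return zlib.crc32(str(key).encode()) % n

rows = [{"cust_id": c} for c in (1, 2, 3, 1, 2, 42)]  # made-up data

partitions = defaultdict(list)
for row in rows:
    partitions[partition_of(row["cust_id"])].append(row)

for p, rows_in_p in sorted(partitions.items()):
    print(f"partition {p}: {rows_in_p}")
```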
11) Do remember that Modify drops metadata only when it is explicitly asked to do so using the KEEP/DROP clauses.
12) Range look-up: a range look-up is the equivalent of the SQL BETWEEN operator. Looking up against a range of values was difficult to implement in earlier DataStage versions. With this functionality in the Lookup stage, comparing a source column against a range bounded by two lookup columns, or a lookup column against a range bounded by two source columns, is easy to implement.
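To make the BETWEEN semantics concrete, the illustrative Python sketch below implements a range look-up with the standard bisect module: each reference row carries a low and high bound, and a source value matches the row whose band contains it. The band values and column names are invented.

```python
# Illustrative sketch of a range look-up: find the reference row whose
# [low, high] band contains the source value (i.e. BETWEEN semantics).
import bisect

# Made-up, non-overlapping bands sorted by their lower bound
bands = [
    {"low": 0,   "high": 99,  "tier": "bronze"},
    {"low": 100, "high": 499, "tier": "silver"},
    {"low": 500, "high": 999, "tier": "gold"},
]
lows = [b["low"] for b in bands]

def range_lookup(value):
    # Right-most band whose lower bound is <= value, then check the upper bound
    i = bisect.bisect_right(lows, value) - 1
    if i >= 0 and bands[i]["low"] <= value <= bands[i]["high"]:
        return bands[i]["tier"]
    return None                      # no band contains the value

for v in (42, 250, 1200):
    print(v, "->", range_lookup(v))
```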
13) Use a Copy stage to branch data out to intermediate Peek stages or sequential debug files. Copy stages are removed at compile time, so they do not add overhead.
14) Where you are using a Copy stage with a single input and a single output, ensure that you set the Force property to True in the stage editor. This prevents DataStage from deciding that the Copy operation is superfluous and optimizing it out of the job.