
Thursday, November 14, 2013

DataStage Server Hang Issues & Resolution




A server hang issue can occur when:

1) The metadata repository database detects a deadlock condition and chooses the failing job as the deadlock victim.
2) Log maintenance is ignored.
3) Temp folders are not maintained periodically.

I will try to explain the above three points in detail below:
1) Deadlocks in the metadata repository database –
I have seen this scenario in DataStage 8.1 with no fix packs installed. If you have the later fix packs installed, you may not hit this problem at all. IBM Information Server throws an exception like "[IBM][SQLServer JDBC Driver][SQLServer]Transaction (Process ID 59) was deadlocked on lock resources with another process and has been chosen as the deadlock victim. Rerun the transaction." This exception makes the whole process unresponsive, which in turn hangs the server and requires a physical reboot. Deadlocks can never be ruled out completely; we can only reduce how often they occur. The following two actions reduce the chance of hitting one:

a) Minimize user interaction during transactions. This applies mostly to SQL queries where user input is required.
b) Keep fewer records in each batch. For example, if you are using the Oracle Enterprise stage to load data, defining a smaller array size reduces the chance of a deadlock during the job run.

2) Log maintenance is ignored - In DataStage 8.1, job logs are stored in the metadata repository database. A table called "logging_xmetagen_LoggingEvent1466cb5f" holds all the logs in the XMETA database. If we ignore purging the logs periodically, this table grows very large and DML statements against it take longer to execute. Hence we should schedule log purging/clearing periodically; it can be scheduled from the Director client.
3) Folder maintenance - A few folders need maintenance in a DataStage Windows environment. The first is the tmp folder, which stores virtual datasets and other temporary data while jobs are executing. Another is UVtemp; make sure this folder does not fill up, as that causes jobs to hang or slow down, and clear it periodically. Last but not least is the &PH& folder. Keep this folder clean as well.
To remove the entries from the &PH& folder, the "CLEAR.FILE &PH&" command can be executed from the Administrator client: open the Administrator client --> click the Projects tab --> select the appropriate project --> click Command --> paste CLEAR.FILE &PH& into the command text box and click Execute. Once cleared, you will see a popup window with the message "File &PH& has been cleared".
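If the Administrator client is not handy, the old files in &PH& can also be cleared at the operating-system level, which is what the "Commands to delete files in &PH& directories" technote linked below covers. The line below is only a minimal sketch: it assumes a default install under C:, the project path mentioned later in this post, a 7-day retention, and that no jobs are running while it executes.

forfiles /P "C:\IBM\InformationServer\Server\Projects\YOUR_PROJECT\&PH&" /D -7 /C "cmd /c del @file"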

To schedule auto log purge in the Director client, select a job, then choose Job menu --> Clear Log and pick an option from the pop-up window. You can purge the logs immediately after the job run, or set auto purge up to a given number of previous runs or up to a particular date. It is better to set auto purge with the "Older than (days)" setting.

More Details ->
What can you delete to free up disk space in IBM InfoSphere Information Server
Deleting temporary lookup table files in IBM InfoSphere DataStage
Managing and Deleting Persistent Data Sets within IBM InfoSphere DataStage
Commands to delete files in &PH& directories in IBM InfoSphere DataStage
DataStage Jobs hang or have slow performance when temp directories have many files


One more thing to note here: the table "logging_xmetagen_LoggingEvent1466cb5f" keeps logs for all the projects on the server. Hence we should not just fire DML directly against this table, as that could impact other projects as well. To remove entries from this table, we can use the following command, which creates a schedule on the server using the LoggingAdmin utility to delete the logs.

This command deletes logs from the "logging_xmetagen_LoggingEvent1466cb5f" table irrespective of project, so be careful while executing it.

LoggingAdmin -user [DataStage Admin User] -password [DataStage Admin Password] -create -schedule -name "DS job log purge" -frequency -minutes 30 -threshold 10000 -percentage 100 -includeCategories IIS-DSTAGE-RUN

The above schedule can be deleted with the following command:

LoggingAdmin -user [DataStage Admin User] -password [DataStage Admin Password] -delete -schedule -name "DS job log purge"

To check the number of records in the "logging_xmetagen_LoggingEvent1466cb5f" table, the following query can be fired against the XMETA database:

select count(*) from xmeta.logging_xmetagen_LoggingEvent1466cb5f
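If XMETA is hosted on SQL Server (as the deadlock message earlier suggests), the space used by the table can also be checked with the standard sp_spaceused procedure. Treat this as an optional check rather than part of the purge procedure:

EXEC sp_spaceused 'xmeta.logging_xmetagen_LoggingEvent1466cb5f'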

Having a lot of logs in the repository database slows down the log purge activity, which in turn slows down job performance.


In DataStage 8.1, job log storage changed from the file system to the database. If your job logs have grown very large and you need a quick performance gain, you can switch back to file-system logging by changing the following two project-level parameters in the DSParams file on Windows:

RTLogging=1
ORLogging=0

After changing the above settings, new logs are stored in the file system, and the old logs can still be viewed in the Director client. Once we have cleared all the old logs, we should revert to the original settings. The DSParams file can be found at \IBM\InformationServer\Server\Projects\YOUR_PROJECT\
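To quickly confirm which logging mode a project is currently using, the two parameters can be checked from a Windows command prompt. This is just a convenience check and assumes a default install under C: with the project path shown above.

findstr /C:"RTLogging" /C:"ORLogging" "C:\IBM\InformationServer\Server\Projects\YOUR_PROJECT\DSParams"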

If we follow the maintenance activities listed above, we can improve job performance and keep the server from going into a hung state.



