Solution: Data Pump to the rescue :-)
While the regular exp/imp utilities do not handle compressed tables well, expdp will export the table while keeping it compressed. Here is a sample of the parfile used for the export:
userid=mydbuser/passwd
dumpfile=mytab_extract.dmp
logfile=mytab_extract.exp.log
parallel=16
tables=mytab
directory=big_dmp
compression=ALL
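With the parameters above saved to a parfile, the export itself is a one-liner. A minimal sketch, assuming the parfile was saved as mytab_extract.par (an assumed filename; adjust to wherever you saved it):

```shell
# Run the Data Pump export using the parfile shown above.
# Since userid is already in the parfile, no credentials are
# needed on the command line.
expdp parfile=mytab_extract.par
```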
Conclusion: expdp handles compressed tables very well, generating a dump file similar in size to the compressed table itself.
As a follow-up to this post: what if we'd like to import the data we've just exported while keeping it compressed, to save space and also time during the import?
In this case, if the table already exists at the target, we'll make sure it is compressed:
SQL> alter table mytab compress;
If the table doesn't exist yet, we'll need to create it first using impdp with the option content=metadata_only, and then run the alter table above.
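A minimal sketch of that metadata-only import, reusing the dump file, directory, and table names from the export parfile above (adjust as needed for your environment):

```shell
# Create the table definition only, without loading any rows.
impdp mydbuser/passwd dumpfile=mytab_extract.dmp directory=big_dmp \
      tables=mytab content=metadata_only logfile=mytab_create.imp.log
```

Once this finishes, the alter table shown above can be run to mark the (still empty) table as compressed before any data is loaded.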
The last step is to run impdp as we'd normally do; note that there is no compression parameter for impdp, and the default behavior is to "uncompress" the table, which is why we made sure the table was marked compressed beforehand.
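Putting it together, the final data load might look like the sketch below. It assumes the table already exists at the target (created and altered as above), so impdp is told to append into the existing object instead of failing on it:

```shell
# Load the rows into the existing, already-compressed table.
impdp mydbuser/passwd dumpfile=mytab_extract.dmp directory=big_dmp \
      tables=mytab table_exists_action=append parallel=16 \
      logfile=mytab_extract.imp.log
```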