Tuesday, 7 March 2017

Using "sar" to check the CPU utilization, current and historical data


For example, to get the CPU utilization for day 6 of the month, showing only the top 30 entries:

>sar -f /var/log/sa/sa06 | awk '{print $4}' | sort -n | tail -30



15.84
19.12
19.64
22.70
23.79
24.06
31.20
32.69
35.55
36.79
37.41
39.59
40.39
43.41
44.62
45.28
46.34
47.20
47.63
60.93
63.28
64.77
64.80
65.31
67.07
67.64
68.44
69.44
69.67
71.79


How to check the CPU right now? The command below runs "live", sampling every 2 seconds, 10 times.

>sar -u 2 10
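
For a whole-day average in one line, the summary row that sar appends at the end can be filtered out (a minimal sketch; it assumes an English locale, where that row is labelled "Average:"):

>sar -u -f /var/log/sa/sa06 | grep -i '^average'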

Tuesday, 28 February 2017

Linux: how to change the default file name generated by the "gzip" command?

Sometimes you want to control the name of the output file produced by the gzip command.
This can be done as follows:

1) >gzip -S .suf My_File.dmp            (compress with a temporary custom suffix, producing My_File.dmp.suf)
2) >mv My_File.dmp.suf My_File.dmp_2.gz (rename the result to the name you actually want)
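
A simpler alternative, if you don't mind the original file staying in place, is to write the compressed stream straight to the name you want (a sketch using the same example file names as above):

>gzip -c My_File.dmp > My_File.dmp_2.gz

Unlike the two-step version, -c leaves My_File.dmp uncompressed on disk, so remove it yourself once it is no longer needed.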

Tuesday, 17 January 2017

How to find out if the statistics are locked for a specific table?

The following query lists all tables whose statistics are locked; add an owner/table_name predicate to narrow it down to one table:

select owner, table_name, stattype_locked
from dba_tab_statistics
where stattype_locked is not null;
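
If the statistics do turn out to be locked and need to be regathered, they can be unlocked with DBMS_STATS (a sketch; SCOTT and EMP are placeholder owner and table names):

exec dbms_stats.unlock_table_stats('SCOTT', 'EMP');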

Thursday, 10 November 2016

How much memory is free on my Linux box?

 [109]$ free -g -t -o
             total       used       free     shared    buffers     cached
Mem:            47         44          2          0          0         40
Swap:            1          0          1
Total:          49         45          3

In this case, 40 GB are held by the page cache; that memory is reclaimed automatically when applications ask for it, so it is effectively available.
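
To turn this output into a single "effectively free" figure, the free, buffers and cached columns can be added up (a sketch that assumes the older free layout shown above, with columns total/used/free/shared/buffers/cached; newer procps versions print an "available" column instead):

>free -g | awk '/^Mem:/ {print "reclaimable (free+buffers+cached): " ($4+$6+$7) " GB"}'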

Thursday, 6 October 2016

Linux: I've deleted my file, but it is still in use? lsof to the rescue

Example:

[steaua@MYDB]/u01/app/oracle/users/Florin >lsof |grep oradata |grep deleted
oracle     4028    oracle  300u      REG             253,69     2105344     49160 /oradata/ora_data05/refusg_MYDB_02.dbf (deleted)
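
To estimate how much space such deleted-but-still-open files are holding, the size column (the seventh field in the output above) can be summed up (a rough sketch; it will also match any path that merely contains the word "deleted"):

>lsof | grep deleted | awk '{sum += $7} END {print sum/1024/1024 " MB held by deleted files"}'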

How to find the objects/extents residing on a specific datafile?

This can be useful, for example, when we want to drop a datafile, since it has to be empty before it can be dropped.

How can we check which objects reside in a given datafile?

select distinct a.owner, a.segment_name
from dba_extents a, dba_data_files b
where a.file_id = b.file_id
  and b.file_name = '<your datafile name with path>';

Note: the query above is usually very slow, since it has to full-scan a few large underlying fixed tables.
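
Before actually dropping the file, the same join can be used as a quick emptiness check (a sketch; the datafile path is only a placeholder):

select count(*)
from dba_extents a, dba_data_files b
where a.file_id = b.file_id
  and b.file_name = '<your datafile name with path>';

If the count comes back as 0, the datafile holds no extents and an "alter tablespace ... drop datafile" should succeed.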