
Routine cleanup is scripted with /usr/local/sbin/clear-old-disk-copies.sh, which deletes backup copies more than about a week old. /usr/local/sbin/clear-old-disk-copies-aggressively.sh (or similar) can be used to clear more space.
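The contents of those scripts are not reproduced here. Purely as a hypothetical sketch of the shape such a script might take, assuming it drives the same mminfo/nsrmm/nsrim commands walked through below (the volume name, age query, and loop are illustrative assumptions, not the real script's logic):

      #!/bin/sh
      # HYPOTHETICAL sketch only -- see /usr/local/sbin/clear-old-disk-copies.sh
      # for the real logic.  Delete disk copies of save sets on the db volume
      # that are over a week old and still have another (e.g. tape) copy.
      mminfo -v -r 'ssid,cloneid' -q 'volume=db,copies>1,savetime<"last week"' |
      sed '1d' |                        # drop mminfo's column-header line
      while read ssid cloneid; do
          nsrmm -y -d -S "${ssid}/${cloneid}"
      done
      nsrim -X                          # ask Networker to cross-check indexes and reclaim space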

We try to maintain both disk and tape copies to ensure physical separation: the most recent copy stays on disk for fast restores, while older copies are kept on tape only.
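To check where the copies of a particular client's save sets currently live (the db disk volume versus tape volumes), a query along these lines works; the client name here is just an example taken from the listing further down:

      [root@boston ~]# mminfo -v -q 'client=lisa.physics.carleton.edu' -r 'volume,ssid,cloneid,savetime'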

 

There may be times, such as users creating backups in preparation for a server migration, when more space than usual gets consumed and manual intervention is required to recover it. Since this deals with backups, and there may be projects going on (e.g. a server being rebuilt), there may be some knowledge or reason to preserve older copies of backups. Regardless, this page is simply a "work by example" method for cleaning up the file system disk space used by Networker on hostname=boston if any of the file systems gets near its limit of being full. Currently there are three file systems used by Networker on hostname=boston: /e0, /e1, and /db. The following is an example of how to clean up the /db file system should the need arise. It is an example only; some thinking is required when choosing SSIDs and clone IDs.

 

NOTE: You must have user=root (sudo) access to perform these steps.
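For example, from your own account on hostname=boston (replace the username with your own):

      [you@boston ~]$ sudo -i
      [root@boston ~]#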

 

      [root@boston ~]# df /e0 /e1 /db
      Filesystem            1K-blocks        Used    Available  Use%  Mounted on
      /dev/mapper/e0      13672395072  5455070632   7533617304   42%  /e0
      /dev/mapper/e1      13672395072  2754339996  10234347940   22%  /e1
      /dev/mapper/vgDB-db 31254167552 21781384716   9472782836   70%  /db

 

In particular, we will look at the /db file system and clean it up, even though it still has ample space.

 

      [root@boston ~]# df -H /db
      Filesystem           Size  Used  Avail  Use%  Mounted on
      /dev/mapper/vgDB-db   33T   23T   9.7T   70%  /db

 

To query Networker for duplicate backups on /db, run the following command on hostname=boston. It asks Networker to look for backups stashed under the /db datastore with more than one copy, then post-processes the output to keep entries whose size is reported in gigabytes (GB). It sorts the result by size so that the end of the list shows the largest copies currently being stored.

    

       [root@boston ~]# mminfo db -v -r client,ssid,cloneid,volume,sumsize -q 'copies>1' | grep GB | sort -k5 -n
       lisa.physics.carleton.edu        717869000  1439289288  db    10 GB
       boston                          1774491667  1438947347  db    11 GB
       gunma.ads.carleton.edu          3586413848  1438930200  db    13 GB
       boston                          1388553099  1438884747  db    15 GB
       - - - - - - - - - -  content deleted for brevity's sake  - - - - - - - - - -
       sqlserver1.ads.carleton.edu     4106506577  1438929233  db    49 GB
       sqlserver1.ads.carleton.edu     2764333308  1438933244  db    52 GB
       chicago.its.carleton.edu        2277705259  1438844459  db    53 GB
       ventnor.its.carleton.edu        4039397841  1438929361  db    77 GB
       sophia.its.carleton.edu         2193731178  1438756454  db    81 GB
       feynman.physics.carleton.edu    1187019193  1438677433  db    88 GB
       antigone.physics.carleton.edu   4022269255  1438577991  db   102 GB
       lisa.physics.carleton.edu        684315412  1439290132  db   128 GB
       - - - - - - - - - -  content deleted for brevity's sake  - - - - - - - - - -
       storageserver1.ads.carleton.edu  750857472  1438723328  db   748 GB
       sol2.physics.carleton.edu        768200255  1439288895  db  1326 GB
       storageserver1.ads.carleton.edu  348138537  1438657577  db  1359 GB
       fileshare2.ads.carleton.edu     4089346308  1438546180  db  1436 GB
       storageserver1.ads.carleton.edu 1019157439  1438587838  db  1484 GB
       storageserver1.ads.carleton.edu  381679396  1438644003  db  1825 GB
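Note that the grep GB filter only matches entries whose size mminfo reports in gigabytes. As a variation on the same pipeline, you can also total how much space the duplicate copies are holding; the awk field positions assume the five-column output shown above:

       [root@boston ~]# mminfo db -v -r client,ssid,cloneid,volume,sumsize -q 'copies>1' | grep GB | awk '{sum += $(NF-1)} END {print sum " GB held by duplicate copies"}'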

 

Looking at the output from the above command, work your way from the bottom of the list (largest copies) up.

For each save set, issue a remove command (nsrmm) with its SSID/CloneID pair. Here we pruned the larger copies and stopped at hostname=feynman, since Bruce is currently rebuilding that system.

 

      [root@boston ~]# nsrmm -y -d -S 381679396/1438644003
      [root@boston ~]# nsrmm -y -d -S 1019157439/1438587838
      [root@boston ~]# nsrmm -y -d -S 4089346308/1438546180
      [root@boston ~]# nsrmm -y -d -S 348138537/1438657577
      [root@boston ~]# nsrmm -y -d -S 768200255/1439288895
      [root@boston ~]# nsrmm -y -d -S 750857472/1438723328
      - - - - - - - - - -  content deleted for brevity's sake  - - - - - - - - - -
      [root@boston ~]# nsrmm -y -d -S 684315412/1439290132
      [root@boston ~]# nsrmm -y -d -S 4022269255/1438577991
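Typing many of these by hand is tedious. As a sketch, the nsrmm commands can be generated from the same mminfo listing ($2 is the SSID and $3 the CloneID in the five-column output above); review the printed commands carefully, then append | sh to actually run them:

      [root@boston ~]# mminfo db -v -r client,ssid,cloneid,volume,sumsize -q 'copies>1' | grep GB | awk '{print "nsrmm -y -d -S " $2 "/" $3}'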

 

Now that you have issued the commands to mark those copies for deletion, you must tell Networker to purge ("nsrim -X") those save sets and reclaim the space.

 

       [root@boston ~]# nsrim -X

 

This command will spit out a lot of information and then return the command prompt. All this means is that Networker has accepted and scheduled your request to purge these duplicate copies. If Networker is busy performing backups, it will keep taking backups until it decides to clean up and process your request. There are ways around this, but it is best to let Networker perform its duties. If possible, pick a day and time when Networker is not busy and clean up then; currently Wednesday mornings are generally light, but you can look at SAR data or Zabbix to see when the system is not busy. Otherwise, check back later to see whether the disk space has been reclaimed and the older copies of backups have been purged.
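For example, to skim today's CPU utilization history from SAR and judge how busy the server has been (exact output columns vary slightly between sysstat versions):

      [root@boston ~]# sar -u | tail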

 

      [root@boston ~]# df -H /db
      Filesystem           Size  Used  Avail  Use%  Mounted on
      /dev/mapper/vgDB-db   33T  3.1T    29T   10%  /db

 

Repeat for the other Networker file systems (/e0 and /e1) as required.