Category Archives: Storage

Burn-in testing new hard drives

I just bought a new 14 TB Seagate Exos hard drive. I’m using it to replace a 4 TB Seagate Desktop drive that I purchased during the Black Friday sales back in 2015, and which has been chugging along faithfully ever since. As of last check, it had recorded 61,747 hours powered on — that’s over 7 years actively spinning!

I’ve occasionally had problems with newly purchased drives, ranging from drives that were dead on arrival or failed quickly to drives that the vendor claimed were new but clearly were refurbished. As a result, I’ve learned to always run new drives through a series of checks and tests before I actually entrust data to them.

Here’s a rundown of what I do:

Check the Warranty

The very first thing to do is to check the drive’s serial number against the manufacturer’s warranty site and make sure that it reports that the drive is still in warranty and has a warranty expiration date that’s in the range you expect. It’s important to note that the warranty on most drives these days is based on the manufacture date and not the sale date, so this is particularly important if you’re buying stock that might have been sitting on a warehouse shelf for a while.

You can check warranty status on each manufacturer’s support site: Seagate, Western Digital, and Toshiba all provide pages where you can look up a drive by its serial number.

Check the drive with SMART

Virtually all modern storage devices, whether old-style spinning hard drives, SATA SSDs, or NVMe drives, support SMART, or Self-Monitoring, Analysis and Reporting Technology. SMART lets you pull statistics from and run tests against the drive, and SMART data can often give warning that a drive is in the process of failing.

One of the best tools for accessing SMART information is the smartctl tool, which is part of the smartmontools package. This package may already be installed on your Linux system, but if it’s not, simply install smartmontools. It’s even available for Windows systems.

Next, run a series of smartctl commands against the drive in question. Note that smartctl needs to run as root (either directly or via sudo) to be able to access the drive properly. In the sections below, I show the results of running different smartctl commands to pull specific information on the drive, but you can also use “smartctl -a” to dump all the SMART information at once. For example, assuming that the new device is /dev/sdc, you might run this:

[user@server ~]$ sudo smartctl -a /dev/sdc

This will dump a full report of all the SMART information available for the drive.

Check the SMART Information

You can use “smartctl -i” to check the SMART Information Section. Here’s an example (note that I’ve obscured the serial number of my drive):

[user@server ~]$ sudo smartctl -i /dev/sdc
smartctl 7.3 2022-02-28 r5338 [x86_64-linux-6.0.15-300.fc37.x86_64] (local build)
Copyright (C) 2002-22, Bruce Allen, Christian Franke, www.smartmontools.org

=== START OF INFORMATION SECTION ===
Model Family:     Seagate Exos X16
Device Model:     ST14000NM001G-2KJ103
Serial Number:    XXXXXXXX
LU WWN Device Id: 5 000c50 0e48324fd
Firmware Version: SN03
User Capacity:    14,000,519,643,136 bytes [14.0 TB]
Sector Sizes:     512 bytes logical, 4096 bytes physical
Rotation Rate:    7200 rpm
Form Factor:      3.5 inches
Device is:        In smartctl database 7.3/5319
ATA Version is:   ACS-4 (minor revision not indicated)
SATA Version is:  SATA 3.3, 6.0 Gb/s (current: 6.0 Gb/s)
Local Time is:    Sun Jan 1 03:26:30 2023 GMT
SMART support is: Available - device has SMART capability.
SMART support is: Enabled

You’ll want to look at the model, size, and serial number of the drive and check that these match against what’s written on the drive and that they’re correct for what you purchased. I have encountered a few manufacturers where the internally reported serial number doesn’t match what’s written on the drive (TEAM Group SSDs come to mind), but most drives will accurately report the serial number, and a mismatch indicates shenanigans.

Check the SMART Health

Next, verify that SMART reports the drive is healthy, using the “smartctl -H” command:

[user@server ~]$ sudo smartctl -H /dev/sdc
smartctl 7.3 2022-02-28 r5338 [x86_64-linux-6.0.15-300.fc37.x86_64] (local build)
Copyright (C) 2002-22, Bruce Allen, Christian Franke, www.smartmontools.org

=== START OF READ SMART DATA SECTION ===
SMART overall-health self-assessment test result: PASSED

If the SMART overall-health self-assessment reports anything other than PASSED, something’s wrong with the drive and you should consider returning it for a replacement.

Check the SMART Attributes

I also check the attributes of the storage device as reported by SMART, as these can identify potential shenanigans with refurbished drives being re-sold as new. This is done via “smartctl -A” (note that that’s a capital A):

[user@server ~]$ sudo smartctl -A /dev/sdc
smartctl 7.3 2022-02-28 r5338 [x86_64-linux-6.0.15-300.fc37.x86_64] (local build)
Copyright (C) 2002-22, Bruce Allen, Christian Franke, www.smartmontools.org

=== START OF READ SMART DATA SECTION ===
SMART Attributes Data Structure revision number: 10
Vendor Specific SMART Attributes with Thresholds:
ID# ATTRIBUTE_NAME          FLAG     VALUE WORST THRESH TYPE      UPDATED  WHEN_FAILED RAW_VALUE
  1 Raw_Read_Error_Rate     0x000f   081   065   044    Pre-fail  Always       -       112398000
  3 Spin_Up_Time            0x0003   096   096   000    Pre-fail  Always       -       0
  4 Start_Stop_Count        0x0032   100   100   020    Old_age   Always       -       4
  5 Reallocated_Sector_Ct   0x0033   100   100   010    Pre-fail  Always       -       0
  7 Seek_Error_Rate         0x000f   074   060   045    Pre-fail  Always       -       24504019
  9 Power_On_Hours          0x0032   100   100   000    Old_age   Always       -       72
 10 Spin_Retry_Count        0x0013   100   100   097    Pre-fail  Always       -       0
 12 Power_Cycle_Count       0x0032   100   100   020    Old_age   Always       -       4
 18 Head_Health             0x000b   100   100   050    Pre-fail  Always       -       0
187 Reported_Uncorrect      0x0032   100   100   000    Old_age   Always       -       0
188 Command_Timeout         0x0032   100   100   000    Old_age   Always       -       0
190 Airflow_Temperature_Cel 0x0022   071   050   040    Old_age   Always       -       29 (Min/Max 25/36)
192 Power-Off_Retract_Count 0x0032   100   100   000    Old_age   Always       -       1
193 Load_Cycle_Count        0x0032   100   100   000    Old_age   Always       -       21
194 Temperature_Celsius     0x0022   029   040   000    Old_age   Always       -       29 (0 22 0 0 0)
197 Current_Pending_Sector  0x0012   100   100   000    Old_age   Always       -       0
198 Offline_Uncorrectable   0x0010   100   100   000    Old_age   Offline      -       0
199 UDMA_CRC_Error_Count    0x003e   200   200   000    Old_age   Always       -       0
200 Pressure_Limit          0x0023   100   100   001    Pre-fail  Always       -       0
240 Head_Flying_Hours       0x0000   100   253   000    Old_age   Offline      -       65h+45m+02.089s
241 Total_LBAs_Written      0x0000   100   253   000    Old_age   Offline      -       35912985792
242 Total_LBAs_Read         0x0000   100   253   000    Old_age   Offline      -       27344671168

There are several attributes that you should pay close attention to here: Power_On_Hours, Head_Flying_Hours, Total_LBAs_Written, and Total_LBAs_Read should all be at or near zero on a new drive. They were zero when the drive left the factory, so they should only reflect the activity that has happened since you installed the drive.

If these attributes show anything else, it probably indicates that you’ve been sold a refurbished drive, and you should consider returning it. This is especially true if these numbers are high.
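If you check a lot of drives, it’s handy to filter out just these age-related attributes rather than scanning the whole table. Here’s a small sketch using grep; the sample text is a captured line from the report above so the example runs anywhere, but on a live system you’d pipe the real smartctl output into the same grep:

```shell
# Filter out just the age-related attributes. On real hardware you
# would feed this from smartctl instead of the sample text:
#   sudo smartctl -A /dev/sdc | grep -E 'Power_On_Hours|Head_Flying_Hours|Total_LBAs'
sample='  9 Power_On_Hours          0x0032   100   100   000    Old_age   Always       -       72
190 Airflow_Temperature_Cel 0x0022   071   050   040    Old_age   Always       -       29'
echo "$sample" | grep -E 'Power_On_Hours|Head_Flying_Hours|Total_LBAs'
```

Only the matching attribute lines come through, so a non-trivial raw value jumps right out.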

Check SMART Selftest Logs

I also check the logs of any self tests that have been run against the drive by SMART using the “smartctl -l selftest” command:

[user@server ~]$ sudo smartctl -l selftest /dev/sdc
smartctl 7.3 2022-02-28 r5338 [x86_64-linux-6.0.15-300.fc37.x86_64] (local build)
Copyright (C) 2002-22, Bruce Allen, Christian Franke, www.smartmontools.org

=== START OF READ SMART DATA SECTION ===
SMART Self-test log structure revision number 1
Num Test_Description    Status                  Remaining  LifeTime(hours)  LBA_of_first_error
# 1 Extended offline    Completed without error       00%        18         -
# 2 Short offline       Completed without error       00%         0         -

A new drive should have no self-test records. If this command shows self-tests on the device, something’s funky. (The tests that you see above are from after I ran my own tests on a new drive.)

This particular command came in really handy at one point when a shady vendor sold me some refurbished drives as new. The SMART attributes showed the expected near-zero values for Power_On_Hours and Head_Flying_Hours, but the self-test log showed that a short offline test had been run — and failed — at a lifetime-hours value indicating that the drive had been actively in use for more than two years. Clearly that was a failed drive where the vendor had somehow cleared out the SMART attributes but neglected to clear out the self-test log. Needless to say, I wasn’t surprised when the drive immediately started showing bad sectors, and I returned my entire order.

Run SMART Self Tests

The smartctl command can also initiate drive self tests and return the results. I always run both a short and a long self test before I put the drive into active use. A short self test will typically complete in just 1-3 minutes and does some minimal functionality testing of the drive. A long self test exercises the entire drive and runs for many hours; depending on the type and size of the drive, it could easily take 12-24 hours. These tests are started with “smartctl -t short” and “smartctl -t long”:

[user@server ~]$ sudo smartctl -t short /dev/sdc
smartctl 7.3 2022-02-28 r5338 [x86_64-linux-6.0.15-300.fc37.x86_64] (local build)
Copyright (C) 2002-22, Bruce Allen, Christian Franke, www.smartmontools.org

=== START OF OFFLINE IMMEDIATE AND SELF-TEST SECTION ===
Sending command: "Execute SMART Short self-test routine immediately in off-line mode".
Drive command "Execute SMART Short self-test routine immediately in off-line mode" successful.
Testing has begun.
Please wait 1 minutes for test to complete.
Test will complete after Sun Jan 1 04:13:52 2023 GMT
Use smartctl -X to abort test.

As you can see, smartctl will tell you approximately how long the test will take to complete, and will tell you when you can expect to check back to see the results. You can use the “smartctl -l selftest” command as shown above to see the results of the test. If the test has not yet completed, that command will tell you how much of the test is remaining to be run.
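If you script these burn-ins, the pass/fail check is easy to automate: just grep the self-test log for the completion status. A hedged sketch, shown here against the captured log text from above so it runs without hardware; on a live drive you’d capture the real smartctl output instead:

```shell
# On a live drive, capture the real log instead of this sample:
#   log=$(sudo smartctl -l selftest /dev/sdc)
log='# 1 Extended offline    Completed without error       00%        18         -'
if echo "$log" | grep -q 'Completed without error'; then
    echo "last self-test passed"
else
    echo "last self-test failed or still running"
fi
```

This is deliberately simplistic (it matches any completed test in the log), but it’s enough to flag a drive that needs a closer look.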

Run full drive write and read tests

The final set of tests that I do is to write data to the entire drive, and then read data from the entire drive, to ensure that there are no obvious issues with any section of the drive. Depending on your drive speed and size, this can again be a very lengthy process (for the Exos 14 TB drive that prompted this post, each pass took about 18 hours).

The “dd” command on Unix/Linux systems is used to convert and copy files. Its name most likely comes from the DD (“Data Definition”) statement in IBM’s JCL, whose syntax dd’s unusual option style mimics (although some people will tell you that “dd” stands for “data destruction” or “data duplicator” instead).

Write to the entire drive

I use the following dd command to write data to the entire drive:

[user@server ~]$ sudo dd if=/dev/zero of=/dev/sdc bs=1M &
[1] 1562764

The if= option specifies that the input file is /dev/zero. This is a special device on Unix/Linux systems that will just repeatedly spit out null characters (ASCII character zero). The of= option specifies that the output file is the disk that I’m testing. Make sure to get the right disk, as dd will happily overwrite all data on the disk with no verification and no way to undo what you’re doing! The bs= option specifies that I’ll be using a block size of 1 mebibyte (1,048,576 bytes). So basically, this will just sequentially write 1MiB blocks of null characters to the disk until all the space on the disk has been written to.

The ampersand at the end of the command tells the OS to run it in the background, and the “[1] 1562764” response that you see above indicates that this is job 1, with PID (process ID) 1562764. Your own job number and PID may vary.

If you want to monitor the progress of the command, you can do so by sending a USR1 signal to the command using the “kill” command. E.g.:

[user@server ~]$ kill -USR1 %1
[user@server ~]$ 43066+0 records in
43066+0 records out
45157974016 bytes (45 GB, 42 GiB) copied, 301.514 s, 150 MB/s

In the example above, I’m sending a USR1 signal to job one (designated as %1). I could have provided the PID instead, but the job number is usually easier to remember. You’ll also note that the formatting looks a little weird, because you will usually get your next prompt before the process receives the USR1 signal and spits out the current status information.
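As an aside, if you have a reasonably recent GNU dd (coreutils 8.24 or later), you can skip the signal dance entirely: the status=progress option makes dd print ongoing transfer statistics on its own. Demonstrated here on a small scratch file rather than a real disk:

```shell
# status=progress makes dd print periodic transfer statistics itself.
# Against the real drive this would be:
#   sudo dd if=/dev/zero of=/dev/sdc bs=1M status=progress
# Demonstrated on a small scratch file:
dd if=/dev/zero of=/tmp/dd-progress-demo bs=1M count=8 status=progress
```

For a long-running full-disk write, this gives you a continuously updating byte count and throughput figure without any extra commands.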

This command will eventually complete with an error indicating that there’s no space left on the device, and providing statistics for the entire run. It’ll look something like this (although you may note that to capture the info below I used a different device):

dd: error writing '/dev/sdi2': No space left on device
2049+0 records in
2048+0 records out
2147483648 bytes (2.1 GB, 2.0 GiB) copied, 4.52105 s, 475 MB/s

Read from the entire drive

Once the full disk write is done, I perform a full disk read. This is also done using the dd command, as follows:

[user@server ~]$ sudo dd if=/dev/sdc of=/dev/null bs=1M &
[1] 1562764

In this case, the input file (if) is the disk you’re reading from, and the output file (of) is /dev/null, which is a special device that just discards anything that’s written to it. Once again, you can use “kill -USR1” to get interim progress statistics, and at the end it will spit out a message with final completion info (this time without an error message).
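If you want to rehearse the whole write-then-read cycle (or sanity-check your command lines) before pointing them at a real disk, you can run the same pattern against a scratch file. This is just a sketch on an arbitrary 16 MiB temp file; only the of=/if= targets change for real hardware:

```shell
# Safe rehearsal on a 16 MiB scratch file. Against real hardware the
# write target would be of=/dev/sdX and the read source if=/dev/sdX
# (which destroys all data on that disk!).
dd if=/dev/zero of=/tmp/burnin-demo bs=1M count=16 2>/dev/null
dd if=/tmp/burnin-demo of=/dev/null bs=1M 2>/dev/null
# Spot-check that what we wrote really is all zeroes; cmp -n limits
# the comparison to the first 16 MiB of the infinite /dev/zero stream.
cmp -n $((16 * 1024 * 1024)) /tmp/burnin-demo /dev/zero && echo "read-back verified"
```

On a real drive the read pass is the important part: it forces the drive to touch every sector it just wrote, which is exactly when marginal sectors tend to show up.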

A final check with smartctl

Once I’ve finished the tests above, I run one final “smartctl -H” and one final “smartctl -A” and make sure that everything looks good. In particular, I’m looking for that “PASSED” and verifying that nothing looks wonky with any of the attributes (e.g., bad sector counts, etc.).

Start using the drive

Once a drive finishes all of the tests above, it’s ready to go. This isn’t a fast process, so it requires a fair bit of patience to get through it rather than just using the drive immediately. It only takes one experience losing a bunch of data that you’ve just put onto a new drive, though, before you see the value of this process.

Do you have a similar process that you follow? Any useful additional checks you like to run? Let me know!


Systemd Sucks… Up Your Disk Space

Over the last several years, the advent of systemd has been somewhat controversial in the Linux world. While it undeniably has a lot of features that Linux has been lacking for some time, many people think that it has gone too far: They think it insinuates itself into places it shouldn’t, unnecessarily replaces functionality that didn’t need to be replaced, introduces bloat and performance issues, and more.

I can see both sides of the argument and I’m personally somewhat ambivalent about the whole thing. What I can tell you, though, is that the default configuration in Fedora has a tendency to suck up a lot of disk space.

Huge… tracts of land

The /var/log/journal directory is where systemd’s journal daemon stores log files. On my Fedora systems, I’ve found that this directory has a tendency to grow quite large over time. If left to its own devices, it will often end up using many gigabytes of storage.

Now that may not sound like the end of the world. After all, what are a few gigabytes of log files on a modern system that has multiple terabytes of storage?

Like the whole systemd argument, you can take two different perspectives on this:

Perspective 1: Disk is cheap, and if I’m not using that disk space for anything else, why not go ahead and fill it up with journal daemon logs?

Perspective 2: Why would I want to keep lots of journal daemon logs on my system that I probably won’t ever use?

I tend to take the second perspective. In my case, this is compounded by several other factors:

  1. I keep my /var/log directory in my root filesystem and deliberately keep that small (20GB), so I really don’t want it to fill up with unnecessary log files.
  2. I back up my entire root filesystem nightly to local storage and replicate that to remote storage. Backing up these log files takes unnecessary time, bandwidth, and storage space.
  3. I have a dozen or so KVM virtual machines and LXC containers on my main server. If I let the journal daemon logs on all of these run amok, that space really starts to add up.

Quick and Dirty Cleanup

If you’re just looking to do some quick disk space reclamation on your system, you can do it with the ‘journalctl’ command:

journalctl --vacuum-size=[size]

Quick note: Everything in this post requires root privileges. For simplicity, I show all the commands being run from a root shell. If you’re not running in a root shell, you’ll need to preface each command with ‘sudo’ or an equivalent to run the command with root privileges.

When using the journalctl command above, you specify how much space you want the systemd journal log files to take up, and it will try to reduce the journal log files to that size. (There are also --vacuum-time and --vacuum-files options if you’d rather prune by age or by number of archived files.)

Note that I say try. This command can’t do anything to log files that are currently open on the system, and various other factors may reduce its ability to actually reduce the total amount of space used.

Here’s an example I ran within an LXC container:

[root@server ~]# du -sh /var/log/journal
168M /var/log/journal
[root@server ~]# journalctl --vacuum-size=10M
Deleted archived journal /var/log/journal/ac9ff276839a4b429790191f8abb21c1/system@f24253741e8c412a9fe94a48257c2b35-0000000000000001-00055dcc288c8a73.journal (16.0M).
Deleted archived journal /var/log/journal/ac9ff276839a4b429790191f8abb21c1/user-2000@b54e732b7ea1430c95020d6a6553dccb-0000000000000f7b-00055dcef80287ee.journal (8.0M).
Deleted archived journal /var/log/journal/ac9ff276839a4b429790191f8abb21c1/system@f24253741e8c412a9fe94a48257c2b35-0000000000002c74-00056030edf54f82.journal (8.0M).
Deleted archived journal /var/log/journal/ac9ff276839a4b429790191f8abb21c1/user-2000@b54e732b7ea1430c95020d6a6553dccb-0000000000002d0e-00056056271d449c.journal (8.0M).
Deleted archived journal /var/log/journal/ac9ff276839a4b429790191f8abb21c1/system@f24253741e8c412a9fe94a48257c2b35-0000000000003d92-00056295d1dfc0cb.journal (8.0M).
Deleted archived journal /var/log/journal/ac9ff276839a4b429790191f8abb21c1/user-2000@b54e732b7ea1430c95020d6a6553dccb-0000000000003e4d-000562bca405ac7c.journal (8.0M).
Deleted archived journal /var/log/journal/ac9ff276839a4b429790191f8abb21c1/system@000562f8e6bc4730-4bc5e6409eab3024.journal~ (8.0M).
Deleted archived journal /var/log/journal/ac9ff276839a4b429790191f8abb21c1/system@866bd5425da84c0387e801f0d9f0dbe0-0000000000000001-0005630ace84e3a6.journal (8.0M).
Deleted archived journal /var/log/journal/ac9ff276839a4b429790191f8abb21c1/system@0005630ace895afa-db4ba70439580a20.journal~ (8.0M).
Deleted archived journal /var/log/journal/ac9ff276839a4b429790191f8abb21c1/user-2000@b54e732b7ea1430c95020d6a6553dccb-0000000000004f97-00056526bc67d197.journal (8.0M).
Deleted archived journal /var/log/journal/ac9ff276839a4b429790191f8abb21c1/system@000567364f741221-fef3cfcfe59c68bc.journal~ (8.0M).
Deleted archived journal /var/log/journal/ac9ff276839a4b429790191f8abb21c1/system@00056779411bc792-2224320b49ef5929.journal~ (8.0M).
Deleted archived journal /var/log/journal/ac9ff276839a4b429790191f8abb21c1/system@7e3dc17225834c50ab9cbec8c0551dc4-0000000000000001-000567794116176d.journal (8.0M).
Deleted archived journal /var/log/journal/ac9ff276839a4b429790191f8abb21c1/user-2000@b54e732b7ea1430c95020d6a6553dccb-0000000000006c42-000567adbbc43bff.journal (8.0M).
Deleted archived journal /var/log/journal/ac9ff276839a4b429790191f8abb21c1/system@000567ef1af7e427-3c61c0089c605c91.journal~ (8.0M).
Deleted archived journal /var/log/journal/ac9ff276839a4b429790191f8abb21c1/system@ae06f5eac535470a823d126d23143e57-0000000000000001-000569a28b47d93a.journal (8.0M).
Vacuuming done, freed 136.0M of archived journals from /var/log/journal/ac9ff276839a4b429790191f8abb21c1.
[root@server ~]# du -sh /var/log/journal
32M /var/log/journal

As you can see, while this did reduce the logs significantly (from 168M to 32M), it was unable to reduce them down to the 10M that I requested.

It’s also really important to remember that cleaning up log files with journalctl is not a permanent solution. Once you clean them up they’ll just start growing again.

The Permanent Fix

The way to permanently fix the problem is to update the journal daemon configuration to specify a maximum retention size. The configuration file to edit is /etc/systemd/journald.conf. On a Fedora system the default configuration file looks something like this:

# This file is part of systemd.
#
# systemd is free software; you can redistribute it and/or modify it
# under the terms of the GNU Lesser General Public License as published by
# the Free Software Foundation; either version 2.1 of the License, or
# (at your option) any later version.
#
# Entries in this file show the compile time defaults.
# You can change settings by editing this file.
# Defaults can be restored by simply deleting this file.
#
# See journald.conf(5) for details.

[Journal]
#Storage=auto
#Compress=yes
#Seal=yes
#SplitMode=uid
#SyncIntervalSec=5m
#RateLimitIntervalSec=30s
#RateLimitBurst=1000
#SystemMaxUse=
#SystemKeepFree=
#SystemMaxFileSize=
#SystemMaxFiles=100
#RuntimeMaxUse=
#RuntimeKeepFree=
#RuntimeMaxFileSize=
#RuntimeMaxFiles=100
#MaxRetentionSec=
#MaxFileSec=1month
#ForwardToSyslog=no
#ForwardToKMsg=no
#ForwardToConsole=no
#ForwardToWall=yes
#TTYPath=/dev/console
#MaxLevelStore=debug
#MaxLevelSyslog=debug
#MaxLevelKMsg=notice
#MaxLevelConsole=info
#MaxLevelWall=emerg

The key line is “#SystemMaxUse=”. To specify the maximum amount of space you want the journal daemon log files to use, uncomment that line by removing the hash mark (‘#’) at the start of the line and specify the amount of space after the equals (‘=’) at the end of the line. For example:

SystemMaxUse=10M

You can use standard unit designators like M for megabytes or G for gigabytes.
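As an alternative to editing the main file (which belongs to the systemd package), journald also reads drop-in files from /etc/systemd/journald.conf.d/, so you can keep just your override in its own file. The filename below is arbitrary; a sketch:

```ini
# /etc/systemd/journald.conf.d/90-maxuse.conf
[Journal]
SystemMaxUse=100M
```

This survives package updates more gracefully, since nothing ever needs to merge your changes back into the stock journald.conf.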

Once you’ve updated this configuration file, it will take effect the next time the journal daemon restarts (typically upon system reboot). To make it take effect immediately, simply tell systemd to restart the journal daemon using the following command:

systemctl restart systemd-journald

Note that if you’ve specified a very small size, like the example above, this still might not shrink the logs down to the size specified. For example:

[root@server ~]# systemctl restart systemd-journald
[root@server ~]# du -sh /var/log/journal
32M /var/log/journal

As you can see, we still haven’t reduced the log files down below the maximum size we specified. To do so, you have to stop the journal daemon, completely remove the existing log files, and then restart the journal daemon:

[root@server ~]# systemctl stop systemd-journald
Warning: Stopping systemd-journald.service, but it can still be activated by:
systemd-journald.socket
systemd-journald-audit.socket
systemd-journald-dev-log.socket
[root@server ~]# rm -rf /var/log/journal/*
[root@server ~]# systemctl start systemd-journald
[root@server ~]# du -sh /var/log/journal
1.3M /var/log/journal

Ta-da! Utilization is now down to a minimal amount, and as the log grows, the journal daemon should keep it below the maximum size you’ve specified.

Pigs In Space

Today I noticed that my root filesystem has a little less free space than I would really like it to have, so I decided to do a bit of cleanup…

Finding The Space Hogs

I’m a bit old-fashioned, so I still tend to do this sort of thing from the command line rather than using fancy GUI tools. Over the years, this has served me well, because I can get things done even if I only have terminal access to a system (e.g., via ssh or a console) and only have access to simple standard commands on a system.

One quick trick is that you can easily find the top ten space hogs within any given directory using the following command:

du -k -s -x * | sort -n -r | head

Let me break down what this does.

The ‘du’ command provides you with information on disk usage. Its default behavior is to give you a recursive listing of how much space is used beneath all directories from the current directory on down.

The ‘-k’ option tells du to report all utilization numbers in kilobytes. This will be important in a moment when we sort the list: it’s much easier for a generic sort program to order 1200 and 326 than it is to sort 1.2G and 326M (the kind of mixed units you’d get from du’s human-readable mode).

The ‘-s’ option tells du to only report a sum for each file or directory specified, rather than also recursively reporting on all of the subdirectories underneath.

The ‘-x’ option tells du to stay within the same filesystem. This is important if you’re exploring a filesystem that might have other filesystems mounted beneath it, as it tells du not to include information from those other filesystems. For instance, if you’re trying to clean up /var and /var/lib/lxc is a different filesystem mounted from its own separate storage, you don’t want to include the stuff under /var/lib/lxc in your numbers.

Finally, we specify an asterisk wildcard (‘*’) to tell du to spit out stats for each file and directory within the current directory. (Note that if you have hidden files or directories — files that begin with a ‘.’ in the Linux/Unix world — the asterisk will ignore those by default in most shells.)

Next, we pipe the output of the du command to the ‘sort’ command, which does pretty much exactly what it sounds like it should do.

The ‘-n’ option tells sort to do a numeric sort rather than a string sort. By default sort will use a string sort, which would yield results like “1, 10, 11, 2, 3” instead of “1, 2, 3, 10, 11”.

The ‘-r’ option tells sort that we want it to output results in reverse order (i.e., last to first, or biggest to smallest).

Finally, we pipe the sorted output to the ‘head’ command. The head command will spit out the “head,” or the first few lines of a file. By default, head will spit out the first ten lines of a file.

The net result is that this command gives the top ten space hogs in the current directory.
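If you’d like to see the pipeline in action without poking around a real filesystem, here’s a rehearsal on a throwaway directory (the /tmp path and file sizes are arbitrary):

```shell
# Rehearsal on a throwaway directory: create one obvious space hog,
# then run the pipeline against it.
mkdir -p /tmp/hog-demo/big /tmp/hog-demo/small
dd if=/dev/zero of=/tmp/hog-demo/big/blob bs=1k count=500 2>/dev/null
dd if=/dev/zero of=/tmp/hog-demo/small/blob bs=1k count=5 2>/dev/null
cd /tmp/hog-demo && du -k -s -x * | sort -n -r | head
# 'big' should come out at the top of the list
```

The same command works unchanged anywhere; only the directory you start in changes.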

On my system I know that it’s almost invariably the /var filesystem that sucks up space on my root filesystem, so I started there:

[root@server ~]# cd /var
[root@server var]# du -k -s -x * | sort -n -r | head
3193804 lib
2859856 spool
1386684 log
195932 cache
108 www
88 tmp
12 kerberos
12 db
8 empty
4 yp

This tells me that I need to check out /var/lib, /var/spool, and /var/log. The /var/cache directory might yield a little space if cleaned up, but probably not a lot, and everything else is pretty much beneath my notice.

From here, I basically just change directory to each of the top hitters and repeat the process, cleaning up unnecessary files as I go along.

Hard Disk Insanity

The other day I found myself looking at an 8GB micro-SD card and marveling at how much storage has shrunk over the years. That in turn got me thinking back to the first computer I owned that had a hard drive: It was a Tandon (not Tandy) clone of the original IBM XT, with an 8088 processor. It looked something like this (apologies for the image quality — it was the only one I could find, and I’m guessing it was scanned from an old newspaper advertisement):

Tandon Computer

Image courtesy of The Probert Encyclopaedia.

This particular computer had a hard drive that was, to me at the time, unfathomably huge: Ten whole megabytes! Megabytes?!? That was more space than thirty floppy disks, and I didn’t have anything for my PC at the time that needed more than a single floppy!

Needless to say, the feeling that ten megabytes was a lot of space didn’t last long. Today my home file server has eleven terabytes of disk space. It would take 1.1 million copies of my first hard drive to provide that much storage. That got me thinking… Exactly how much space would 1.1 million of those ten-megabyte drives take up?

Well, that first hard drive looked something like this:

ST-225

Image courtesy of Computer History Museum.

That drive is actually a 21 megabyte Seagate drive, whereas mine was a ten megabyte off-brand drive, but the size is about right — roughly 5.75″ wide, 1.63″ high, and 8″ deep, for a total of about 75 cubic inches. For comparison to modern equipment, it’s about the size of an older CD-ROM or DVD-ROM drive.

1.1 million of those drives would take up about 82,478,000 cubic inches of space (or 47,730 cubic feet, or 1,768 cubic yards, or 1,352 cubic meters). That’s a lot of space. But how do you put it in terms that are easy to visualize?

Well, how about cargo containers? You know… the type you might see on a train, or on the back of a semi-truck, or stacked up on a boat or at a port?

According to Wolfram Alpha, it would take 25 forty-foot cargo containers to hold those hard drives. So picture 25 of these (actually, I’m pretty sure that’s a twenty-foot container, so picture something twice as big):

TEU

Image courtesy of Wikimedia Commons.

Alternately, according to Wolfram Alpha, this is roughly the equivalent of 0.73 times the cargo capacity of a Boeing 747 large cargo freighter, or 0.54 times the volume of an Olympic sized swimming pool.
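The back-of-the-envelope numbers above are easy to double-check. A quick awk sketch (dimensions in inches, and treating megabytes and terabytes as decimal units):

```shell
# Reproduce the back-of-the-envelope math from the text above.
awk 'BEGIN {
  vol    = 5.75 * 1.63 * 8        # one drive: about 75 cubic inches
  drives = 11e12 / 10e6           # 11 TB worth of 10 MB drives
  total  = vol * drives
  printf "drives needed: %.0f\n", drives
  printf "total volume:  %.0f cubic inches\n", total
  printf "            =  %.0f cubic feet\n", total / 1728
}'
```

That reproduces the 1.1 million drives, roughly 82,478,000 cubic inches, and about 47,730 cubic feet quoted above.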

And my current file server stores that same amount of data on six 3.5″ hard drives.

Whoah.

