Commands by dopeman (11)

  • The command below removes duplicate rows from an un-sorted file, keyed on the FIRST column of data. In awk, '$1' refers to the first field of each line. You can change both instances of '$1' in the command to remove duplicates based on a different column, for instance, the third:
    awk '{ if ($3 in stored_lines) x=1; else print; stored_lines[$3]=1 }' infile.txt > outfile.txt
    Or you can change it to '$0' to base the removal on the whole row:
    awk '{ if ($0 in stored_lines) x=1; else print; stored_lines[$0]=1 }' infile.txt > outfile.txt
    ** Note: I wouldn't use this on a MASSIVE file unless you're RAM-rich, since every unique key is held in memory ;) **


    4
    awk '{ if ($1 in stored_lines) x=1; else print; stored_lines[$1]=1 }' infile.txt > outfile.txt
    dopeman · 2010-12-15 17:08:47 5
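    Note: a quick worked example of the dedup behaviour, using a hypothetical three-line input keyed on '$1':
    $ printf 'a 1\na 2\nb 3\n' > infile.txt
    $ awk '{ if ($1 in stored_lines) x=1; else print; stored_lines[$1]=1 }' infile.txt
    a 1
    b 3
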
  • Usage: buf myfile.txt
    This is useful when you are making small but frequent changes to a file. It keeps things organised and makes it clear to another administrator what changed and at what time. An overview of the changes can be had with a simple: ls -ltr


    1
    buf () { filename=$1; filetime=$(date +%Y%m%d_%H%M%S); cp ${filename} ${filename}_${filetime}; }
    dopeman · 2010-12-14 13:19:52 6
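    Note: a minimal hardened sketch of the same function, quoting its argument so filenames with spaces survive (the timestamp shown is hypothetical):
    buf () { cp -- "$1" "$1_$(date +%Y%m%d_%H%M%S)"; }
    $ buf "my file.txt"        # creates "my file.txt_20101214_131952"
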
  • Essentially the same as funky's alias, but will not traverse filesystems and has nicer formatting.


    -1
    alias dush="du -xsm * | sort -n | awk '{ printf(\"%4s MB ./\",\$1) ; for (i=1;i<=NF;i++) { if (i>1) printf(\"%s \",\$i) } ; printf(\"\n\") }' | tail"
    dopeman · 2010-07-15 10:38:27 4
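    Note: if your sort supports human-numeric sorting (GNU coreutils 7.5 or later), a rough shorter equivalent is:
    du -xsh * | sort -h | tail
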
  • I have come across situations in the past where someone has unlinked a file by running an 'rm' command against it while it was still being written to by a running process. The problem manifests itself when a 'df' command shows a filesystem at 100%, but this does not match the total from a 'du -sk *'. When this happens, the process continues to write to the file but you can no longer see the file on the filesystem. Stopping and starting the process will, more often than not, get rid of the unlinked file, however this is not always possible on a live server. When you are in this situation you can use the 'lsof' command below to get the PID of the process that holds the file open (in the sample output this is 23521). List that process's file descriptors to see a sym-link to the file (marked as deleted):
    cd /proc/23521/fd && ls -l
    Truncate the sym-link to regain your disk space:
    > /proc/23521/fd/3
    I should point out that this is pretty brutal and *could* potentially destabilise your system, depending on which process owns the file you are truncating.


    16
    lsof +L1
    dopeman · 2010-07-14 17:21:01 6
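    Note: putting the steps together (PID 23521 and fd 3 come from the sample output; substitute the values lsof reports on your system):
    lsof +L1                   # list open files with a link count below one, i.e. deleted
    ls -l /proc/23521/fd       # the deleted file appears as a symlink marked '(deleted)'
    : > /proc/23521/fd/3       # truncate through the descriptor to reclaim the space
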
  • This is a handy way to circumvent the "Maximum line length of 2048 exceeded" grep error. Once you have run the command below (or put it in your .bashrc), files can be searched using:
    lgrep search-string /file/to/search


    1
    lgrep() { string=$1; file=$2; awk -v String=${string} '$0 ~ String' ${file}; }
    dopeman · 2010-01-19 09:42:19 3
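    Note: a slightly hardened sketch of the same function, quoting both parameters so search strings and filenames containing spaces work:
    lgrep() { awk -v String="$1" '$0 ~ String' "$2"; }
    $ lgrep 'two words' /file/to/search
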
  • This does the same thing as many of the 'grep' based alternatives but allows finer control over the output. For example, if you only wanted the process ID you could change the command to:
    ps -ef | awk '/mingetty/ && !/awk/ {print $2}'
    If you wanted to kill the returned PIDs:
    ps -ef | awk '/mingetty/ && !/awk/ {print $2}' | xargs -i kill {}


    1
    ps -ef | awk '/process-name/ && !/awk/ {print}'
    dopeman · 2009-08-19 11:22:09 3
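    Note: the !/awk/ clause stops the pipeline from matching its own awk process. Where procps is available, pgrep/pkill cover the same ground without that workaround (mingetty as in the example above):
    pgrep mingetty             # print matching PIDs
    pkill mingetty             # signal them directly
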
  • This command will copy files and directories from a remote machine to the local one. Ensure you are in the local directory you want to populate with the remote files before running the command. To copy a single directory and its contents:
    ssh user@host "(cd /path/to/a/directory ; tar cvf - ./targetdir)" | tar xvf -
    This is especially useful on *nixes that don't have 'scp' installed by default.


    1
    ssh user@host "(cd /path/to/remote/top/dir ; tar cvf - ./*)" | tar xvf -
    dopeman · 2009-03-31 13:08:45 9
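    Note: on slow links it may be worth compressing the stream in transit; a sketch assuming tar's 'z' (gzip) flag is available on both ends:
    ssh user@host "(cd /path/to/remote/top/dir ; tar czf - ./*)" | tar xzf -
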
  • This command will show the 20 processes using the most CPU time (hungriest at the bottom). You can see the 20 most memory intensive processes (hungriest at the bottom) by running:
    ps aux | sort +3n | tail -20
    Or, run both:
    echo "CPU:" && ps aux | sort +2n | tail -20 && echo "Memory:" && ps aux | sort +3n | tail -20


    3
    ps aux | sort +2n | tail -20
    dopeman · 2009-03-31 12:03:34 10
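    Note: '+2n' is the obsolete sort field syntax, which recent GNU sort may reject; a modern equivalent uses -k (in 'ps aux' output, %CPU is column 3 and %MEM is column 4):
    ps aux | sort -k3,3n | tail -20      # by CPU
    ps aux | sort -k4,4n | tail -20      # by memory
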
  • This command will tell you the 20 biggest directories starting from your working directory and skips directories on other filesystems. Useful for resolving disk space issues.


    7
    du -xk | sort -n | tail -20
    dopeman · 2009-03-30 11:37:43 8
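    Note: on deep trees the output can be dominated by nested directories; GNU du can cap the depth (a hypothetical depth of 2 shown):
    du -xk --max-depth=2 | sort -n | tail -20
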
  • This command will replace all instances of 'foo' with 'bar' in all files in the current working directory and any sub-directories.


    -1
    perl -pi -e 's/foo/bar/g' $(grep -rl foo ./*)
    dopeman · 2009-03-27 17:21:35 10
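    Note: the $(grep -rl ...) substitution splits on whitespace, so filenames containing spaces will break it. A NUL-safe sketch, assuming GNU grep's -Z and xargs -0:
    grep -rlZ foo . | xargs -0 perl -pi -e 's/foo/bar/g'
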
  • This command will replace all instances of 'foo' with 'bar' in all files in the current working directory.


    -1
    perl -pi -e 's/foo/bar/g' $(grep -l foo ./*)
    dopeman · 2009-03-27 17:18:08 6
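    Note: if you want an undo path, perl's -i flag takes a backup suffix; this sketch keeps a .bak copy of each changed file (same space-in-filename caveat as above):
    perl -pi.bak -e 's/foo/bar/g' $(grep -l foo ./*)
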
