Commands by mechmind (7)

  • It's useful mostly for custom scripts that have to run on a specific host, when you're tired of ssh'ing in every time you need one simple command (I use it to update a remote apt repository whenever a new package has to be pulled in from another host). The trick is that the wrapper passes $0 to the remote side, so any symlink pointing at ssh-rpc becomes a local command that really runs on remote-host; see the sketch after this entry. Don't forget to set up key-based authentication, for maximum comfort.


    -3
    echo -e '#!/bin/bash\nssh remote-user@remote-host $0 "$@"' > /usr/local/bin/ssh-rpc; chmod +x /usr/local/bin/ssh-rpc; ln -s /usr/local/bin/ssh-rpc /usr/local/bin/hostname; hostname
    mechmind · 2011-12-28 17:43:34 8
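
    The trick generalizes: every symlink that points at ssh-rpc becomes a local command that is actually executed on remote-host, because the wrapper hands $0 to ssh. A quick illustration of my own (not part of the original entry; update-apt-repo is a hypothetical script that exists only on the remote machine and is found via its PATH):

        ln -s /usr/local/bin/ssh-rpc /usr/local/bin/update-apt-repo   # name the symlink after the remote command
        update-apt-repo --verbose                                     # runs the remote's update-apt-repo over ssh
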
  • There are a lot of commands that invoke your player at a specified time, but I prefer not to jump out of bed the moment the alarm starts playing. Instead, this one raises mpd's volume step by step, which is much more pleasant when you have just woken up :) (A more readable multi-line form is sketched after this entry.)


    7
    at 8:30 <<<'mpc volume 20; mpc play; for i in `seq 1 16`; do sleep 2; mpc volume +5; done'
    mechmind · 2011-11-30 17:51:27 4
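
    If you prefer to keep the alarm in a file, the same loop reads a bit better. A sketch of mine (wake.sh is a hypothetical file name, and the times and volume steps are just what suits me):

        # wake.sh -- gradual mpd alarm
        mpc volume 20
        mpc play
        for i in $(seq 1 16); do
            sleep 2
            mpc volume +5
        done

        # schedule it for the morning:
        at -f wake.sh 8:30
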
  • This one-liner uses make and its jobserver for parallel execution of your commands. The '-j' flag sets how many jobs make may run in parallel, and '-f -' tells make to read the makefile from stdin instead of a Makefile. make also has a neat '-l' flag, which "specifies that no new jobs (commands) should be started if there are other jobs running and the load average is at least load (a floating-point number)"; see the sketch after this entry. You can also use a plain Makefile, for better readability (note that the recipe line must start with a TAB):

        targets = $(subst .png,.jpg,$(wildcard *.png))
        $(targets):
            echo convert $(subst .jpg,.png,$@) $@
        all : $(targets)


    5
    echo -e 'targets = $(subst .png,.jpg,$(wildcard *.png))\n$(targets):\n\tconvert $(subst .jpg,.png,$@) $@\nall : $(targets)' | make -j 4 -f - all
    mechmind · 2010-07-15 07:19:17 7
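
    To put the '-l' flag into practice: the same pipeline, but make will additionally hold back new jobs while the load average is at or above 4.0 (the 4 jobs and the 4.0 cap are arbitrary numbers for illustration, not part of the original command):

        echo -e 'targets = $(subst .png,.jpg,$(wildcard *.png))\n$(targets):\n\tconvert $(subst .jpg,.png,$@) $@\nall : $(targets)' | make -j 4 -l 4.0 -f - all
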
  • USAGE: $ sudor your command. This relies on a dirty hack with history, so make sure you haven't turned history off. WARNING! This command behaves differently from normal commands: it is more like a text macro, so don't use it in subshells, non-interactive sessions, other functions/aliases and so on. You also shouldn't pipe into sudor (anything on the line before sudor is stripped), but if you really need that, use this extended version instead (a usage example follows this entry):

        proceed_sudo () { sudor_command="`HISTTIMEFORMAT=\"\" history 1 | sed -r -e 's/^.*?sudor//' -e 's/\"/\\\"/g'`" ; pre_sudor_command="`history 1 | cut -d ' ' -f 5- | sed -r -e 's/sudor.*$//' -e 's/\"/\\\"/g'`"; if [ -n "${pre_sudor_command/ */}" ] ; then eval "${pre_sudor_command%| *}" | sudo sh -c "$sudor_command"; else sudo sh -c "$sudor_command" ;fi ;}; alias sudor="proceed_sudo # "


    3
    proceed_sudo () { sudor_command="`HISTTIMEFORMAT=\"\" history 1 | sed -r -e 's/^.*?sudor//' -e 's/\"/\\\"/g'`" ; sudo sh -c "$sudor_command"; }; alias sudor="proceed_sudo # "
    mechmind · 2010-06-29 14:56:29 5
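
    A typical use, as my own illustration (not from the original post): because everything after sudor is re-run through `sudo sh -c`, redirections are performed as root too, which a plain `sudo echo 1 > file` never gives you.

        sudor echo 1 > /proc/sys/net/ipv4/ip_forward    # the redirection itself happens with root privileges
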
  • For this hack you need the following function:

        finit() { count=$#; current=1; for i in "$@" ; do echo $current $count; echo $i; current=$((current + 1)); done; }

    and this alias:

        alias fnext='read cur total && echo -n "[$cur/$total] " && read'

    Inspired by CMake's progress counters. A worked example follows this entry.


    2
    finit "1 2 3" 3 2 1 | while fnext i ; do echo $i; done;
    mechmind · 2010-06-17 10:20:49 3
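
    For example (my own illustration with made-up file names), looping over a set of files prints a CMake-style [n/total] prefix in front of each line of output:

        finit *.png | while fnext f ; do echo "processing $f" ; done
        # prints something like:
        # [1/3] processing a.png
        # [2/3] processing b.png
        # [3/3] processing c.png
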
  • When you start screen as `ssh-agent screen`, the agent dies as soon as you detach. If you don't want to bother with files storing the agent's pid/socket/etc., use this command instead. (A cleanup note follows this entry.)


    4
    eval `ssh-agent`; screen
    mechmind · 2010-03-07 14:58:54 3
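
    One follow-up note of my own: the agent started this way keeps running even after the screen session is gone. If stray agents bother you, kill it once you are really done (ssh-agent -k reads SSH_AGENT_PID from the environment, so run it in the same shell that did the eval):

        ssh-agent -k
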
  • With this form you don't need to cut the target directory out of the listing with grep/sed/etc., because `ls` runs before the directory is created and `echo subdir` supplies the destination as the last argument to mv. (A whitespace-safe alternative is sketched after this entry.)


    4
    (ls; mkdir subdir; echo subdir) | xargs mv
    mechmind · 2009-11-08 11:40:55 13
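
    One caveat, as a note of my own: xargs splits its input on whitespace, so this breaks on file names containing spaces or newlines. A safer equivalent, assuming GNU find and GNU mv with its -t option:

        mkdir subdir && find . -maxdepth 1 -mindepth 1 ! -name subdir -exec mv -t subdir {} +
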
