Perl Nam-shubs

This page is a repository of Perl one-liners (or < 10-liners) I've found useful. It is a tribute to the power of Perl and a reminder of its obscurity.

A lot of the one-liners assume that on Win32 something is doing the wild-card expansion for you. One way to achieve this is using the technique explained in the PerlWin32 documentation (look for Wild.pm).
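If Wild.pm isn't at hand, a minimal sketch of doing the expansion yourself looks like this (a BEGIN block runs before the implicit -n/-p loop ever sees @ARGV; the `[*?]` test is my own heuristic for "argument containing a wildcard"):

```perl
# Expand wildcards in @ARGV ourselves, since cmd.exe passes them through
# verbatim. Arguments without glob metacharacters are kept as-is.
BEGIN { @ARGV = map { /[*?]/ ? glob($_) : $_ } @ARGV; }
print "$_\n" for @ARGV;
```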


Go through all C++ files and change the name of 'OldClass' to 'NewClass'.
  perl -pi.bak -e "s/OldClass/NewClass/g" *.h *.cpp

Put something on each first line of a set of files.
  perl -pi.bak -e "printf qq(#include \"our_std.h\"\n), $n = $ARGV if $n ne $ARGV" *.cpp
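The trick in the one-liner above is that $ARGV holds the name of the file <> is currently reading, so remembering the previous value detects the first line of each new file. A longhand sketch of the same idiom (the sub wrapper and the @ARGV guard are mine, added so the logic is callable in isolation):

```perl
# Longhand version of the $ARGV trick: emit a header line at the first
# line of each input file named on the command line.
sub add_headers {
    my @out;
    my $seen = '';
    while (my $line = <>) {
        if ($seen ne $ARGV) {                  # name changed: new file starts here
            $seen = $ARGV;
            push @out, qq(#include "our_std.h"\n);
        }
        push @out, $line;
    }
    return @out;
}
print add_headers() if @ARGV;                  # only run when files are given
```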

Take a list of files and create a #include for each of them.
  perl -e "print map { qq(#include <$_>\n) } @ARGV;" *.h >> our_std.h

I wanted to compare two debug log files (log1.txt and log2.txt), but they differed on each line, because the format was something like:
<pid>: Debug text
I wanted to get rid of the <pid>: part.
  perl -pi.bak -e "s/^\d+://;" log?.txt

Download the PGN specification from Tim Mann's homepage. IE 4.0 choked on that file.
  perl -mLWP::Simple -e "LWP::Simple::getstore \"http://www.research.digital.com/SRC/personal/Tim_Mann/Standard\", \"standard.txt\";"

What time is it?
    Y:\>perl -e "print scalar localtime"
    Tue Feb 20 17:38:42 2001

Running external programs

Unfortunately the version of CVS (1.10) I use on NT doesn't do any wildcard expansion. It expects the shell to do the expansion, but I don't use Cygnus' Bash or anything like that, so I'm out of luck. I could achieve the same with the FOR command in the command shell, but doing it in Perl makes it easier to add any additional processing I might need.

  perl -e "for (<*.*>) { `cvs add $_`;}"
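One thing the backticks above silently discard is the command's exit status; system() returns it. A sketch of the same loop with the status checked — the command here is a portable placeholder (perl -e 1, which always succeeds), standing in for the real 'cvs add':

```perl
# Run a command once per matching file, checking the exit status.
# system() returns the command's status, which backticks throw away.
for my $file (glob '*.*') {
    my $status = system($^X, '-e', '1');   # placeholder for: system('cvs', 'add', $file)
    warn "command failed for $file: $status\n" if $status != 0;
}
```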

Manipulating text files

Installshield .iwz files follow a structure that closely resembles a .ini file. I needed to extract all the files listed in a .iwz file. Each file entry looks like this:

 Group2File1=C:\Program Files\Common Files\Borland Shared\BDE\IDAPI32.DLL

I opted for quick-and-dirty: just take every line that contains something like 2File1, split it using = as the separator, and output a comma-separated line iff the file exists (-e).

  perl -n -e "if (/\d+File\d+/) { chomp; @a = split /=/; print $a[0], q(,), $a[1], qq(\n) if -e $a[1];}" app.iwz >app.csv
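The same extraction, factored into a function so it can be exercised on sample lines (the sub name and the 2-field limit on split are my additions; the limit guards against values that happen to contain an '='):

```perl
# Keep key=value lines whose value names an existing file, emit them as CSV.
sub iwz_to_csv {
    my @csv;
    for my $line (@_) {
        next unless $line =~ /\d+File\d+/;
        chomp(my $l = $line);
        my ($key, $path) = split /=/, $l, 2;   # limit to 2 fields: paths may contain '='
        push @csv, "$key,$path" if -e $path;
    }
    return @csv;
}
```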

I started off with a TAB-separated file like this:

  # Origin TAB Destination
  path\file TAB destination-dir
  path\file TAB destination-dir
  path\file TAB destination-dir
  path\file TAB destination-dir

The nam-shub discards the first line, splits each line on the TAB, creates the destination directory, and copies the file to this directory.

    perl -e "$p = <>; while (<>) { @a = map { chomp; $_; } split/\t/; print qx(md app\\install\\$a[1]); print qx(copy $a[0] app\\install\\$a[1]\\*.*), qq(\n);}" files.txt
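A more portable take on the same nam-shub, using core modules instead of shelling out to 'md' and 'copy' (the sub name, the $root parameter, and reading from a passed-in filehandle are my own framing, chosen to keep the sketch testable):

```perl
use File::Copy     qw(copy);
use File::Path     qw(make_path);
use File::Basename qw(fileparse);
use File::Spec;

# Read the TAB-separated "origin TAB destination-dir" list, create each
# destination directory under $root, and copy the file into it.
sub install_files {
    my ($list_fh, $root) = @_;
    my $header = <$list_fh>;                 # discard the "# Origin TAB Destination" line
    while (my $line = <$list_fh>) {
        chomp $line;
        my ($src, $dest_dir) = split /\t/, $line;
        my $target = File::Spec->catdir($root, $dest_dir);
        make_path($target);                  # like 'md', but creates intermediate dirs too
        copy($src, File::Spec->catfile($target, (fileparse($src))[0]))
            or warn "copy $src failed: $!\n";
    }
}
```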

Several useful things here:

dir /s /b *.mak | perl -ne "open(IN, $_); print ((grep /__SECURE_API__/, <IN>) ? qq(YES $_) : qq(NO $_))" 
dir /s /b *.mmp | perl -ne "open(IN, $_); unless (grep /__SECURE_API__/, <IN>) { close IN; `echo MACRO __SECURE_API__>>$_`}" 

Almost there. Got lots of 'access denied' errors because I hadn't checked the files out from p4:

dir /s /b *.mmp | perl -ne "open(IN, $_); unless (grep /__SECURE_API__/, <IN>) { close IN; `p4 edit $_`; `echo MACRO __SECURE_API__>>$_`}" 
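The core of all three one-liners is "does this file mention the pattern?". A longhand sketch of that test (the sub name is mine; unlike the grep over <IN>, this stops reading at the first match instead of slurping the whole file):

```perl
# Return true iff any line of $path matches $pattern.
sub file_mentions {
    my ($path, $pattern) = @_;
    open my $in, '<', $path or return 0;     # unreadable file counts as "no"
    while (my $line = <$in>) {
        return 1 if $line =~ /$pattern/;
    }
    return 0;
}
```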

Creating directories based on timestamps

Creates a directory based on today's date. Today it created 'd:\temp\2001.02.21'. Using the year.month.day order has a nice property: if you sort directories alphabetically (like dir /s or Perl's sort function does), they'll also get sorted in chronological order. It's nice to store backups that way.

    perl -e "@t = localtime; $s = sprintf q(%d.%02d.%02d), $t[5]+1900, ++$t[4], $t[3]; `md $s`;"
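The same year.month.day formatting, factored into a function (the name and the optional time-list argument are mine, added so a fixed time can be passed in). localtime's month is 0-based and its year counts from 1900, hence the two adjustments:

```perl
# Format a localtime-style list (or the current time) as YYYY.MM.DD.
sub date_stamp {
    my @t = @_ ? @_ : localtime;     # accept an explicit time list for testing
    return sprintf '%d.%02d.%02d', $t[5] + 1900, $t[4] + 1, $t[3];
}
```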

Now we want the previous directory name to be unique. We achieve that by appending a suffix, which creates 'd:\temp\2001.02.21.0', 'd:\temp\2001.02.21.1', and so on. As you can see from the script, Perl treats the increment operator applied to a string in a very special way:

   D:\temp>perl -e "$s = 'aaz'; print ++$s";
   aba

Surprised?

    perl -e "@t = localtime; $s = sprintf q(%d.%02d.%02d), $t[5]+1900, ++$t[4], $t[3]; $ver = qq/0/; $ver++ while -e qq/$s.$ver/; `md $s.$ver`;"

If you want the directories to sort appropriately, you will have to use this instead:

    perl -e "@t = localtime; $s = sprintf q(%d.%02d.%02d), $t[5]+1900, ++$t[4], $t[3]; $ver = qq/aaa/; $ver++ while -e qq/$s.$ver/; `md $s.$ver`;"

This gives you 17,576 names (26*26*26) before the suffix rolls over to four letters and the sort order breaks.
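A quick way to check the arithmetic and watch the magic increment roll over:

```perl
# Count how many three-letter names the string increment yields before it
# rolls over to four letters ('zzz' -> 'aaaa').
my $name  = 'aaa';
my $count = 1;                            # count 'aaa' itself
$count++ while length(++$name) == 3;
print "$count three-letter names, rollover to $name\n";
# prints "17576 three-letter names, rollover to aaaa"
```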

More...

  perl -mFile::Find -e "File::Find::find( sub { print $File::Find::name . \"\n\"; } , '.')"
  perl -e "for (`dir /s /b`) { print }"

Latest...

I just messed up my html source files by running them through HTML-Tidy. That's not supposed to happen: I run Tidy on the output files, never on the "master" sources. The main problem is that I don't use the html or body tags, as my web-publishing script takes care of those details. Tidy added them, so I had to strip those tags off again:

for /r %i in (*.html) do ^
perl -i.bak -e "while (<>) { $t = 1 if /^<h1>/; $t = 0 if /<\/body>/; print if $t; }" %i
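The filter keeps everything from the first line starting with <h1> up to (but not including) </body>. The same logic as a function over a list of lines (the sub wrapper is mine, added so the flag-flipping can be exercised on sample input):

```perl
# Keep lines from the first /^<h1>/ line up to, but excluding, </body>.
sub strip_wrapper {
    my $keep = 0;
    my @out;
    for my $line (@_) {
        $keep = 1 if $line =~ /^<h1>/;
        $keep = 0 if $line =~ m(</body>);   # turn off before the print test below
        push @out, $line if $keep;
    }
    return @out;
}
```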