General description of the program:
This program is an experiment to evaluate using infix
boolean operations as a heuristic to determine the relevance
of text files in electronic literature searches. The
operators supported are "&" for logical "and," "|" for
logical "or," and "!" for logical "not." Parentheses are
used as grouping operators, and "partial key" searches are
fully supported, meaning that the words can be
abbreviated. For example, the command:
rel "(((these & those) | (them & us)) ! we)" file1 file2 ...
would print a list of the filenames, from file1, file2, ...,
that contain either the words "these" and "those", or "them"
and "us", but do not contain the word "we". The file names
are printed in order of relevance, where relevance is
determined by the number of incidences of the words "these",
"those", "them", and "us" in each file. The general concept
is to "narrow down" the number of files to be browsed when
doing electronic literature searches for specific words and
phrases in a group of files, using a command similar to:
more `rel "(((these & those) | (them & us)) ! we)" file1 file2`
Although regular expressions were supported in the
prototype versions of the program, the capability was
removed in the release versions for reasons of syntactical
formality, for example, the command:
rel "((john & conover) & (joh.*over))" files
contains a logical contradiction: the first group specifies
all files that contain "john" anyplace and "conover"
anyplace in the files, while the second group specifies all
files that contain "john" followed by "conover". If the last
group of operators takes precedence, the first is
redundant. Additionally, it is not clear whether wild card
expressions should span multiple records in a literature
search, (which the first group of operators in this example
does,) or exactly what a wild card expression that spans
multiple records means, ie., how many records are to be
spanned, without writing a string of EOL's in the infix
expression. Since the two groups of operators in this
example are very close, operationally, (at least for
practical purposes,) it was decided that support of regular
expressions should be abandoned, and such operations left to
the grep(1) suite.
Comparative benchmarks of the search algorithm:
The benchmarks were run on a System V, rel. 4.2 machine,
(20 MHz 386 with an 18 ms ESDI drive,) searching the
catman directory, (consisting of 782 catman files, totaling
6.8 MB,) for either one or two 9 character words that did
not exist in any file, ie., no matches could be found. The
comparison was between the standard egrep(1), agrep(1), and
rel(1). (Agrep is a very fast regular expression search
program, available by anonymous ftp from cs.arizona.edu,
IP 192.12.69.5.)
For complex search patterns (after cd'ing to the
cat1 directory:)
the command "egrep 'abcdefwxy|wxyabcdef' *" took
74.93 seconds
the command "agrep 'abcdefwxy,wwxyabcdef' *" took
72.93 seconds
the command "rel 'abcdefwxy|wxyabcdef' *" took
51.95 seconds
For simple search patterns (after cd'ing to the
cat1 directory:)
the command "egrep 'abcdefwxy' *" took 73.91
seconds
the command "agrep 'abcdefwxy' *" took 25.87
seconds
the command "rel 'abcdefwxy' *" took 43.68
seconds
For simple search patterns, agrep(1) is
significantly faster, and for complex search patterns,
rel(1) is slightly faster.
Applicability:
Applicability of rel varies with the complexity of the
search, the size of the database, the speed of the host
environment, etc.; however, as some general guidelines:
For text files with a total size of less than 5 MB,
rel and standard egrep(1) queries of the text files will
probably prove adequate.
For text files with a total size of 5 MB to 50 MB,
qt seems adequate for most queries. The significant issue
is that, although the retrieval execution times with qt
are probably adequate, the database write times are not
impressive. Qt is listed in "Related information
retrieval software," below.
For text files with a total size larger than 50 MB,
or where concurrency is an issue, it would be appropriate
to consider one of the other alternatives listed in
"Related information retrieval software," below.
Extensibility:
The source was written with extensibility in mind. To
alter character transliterations, see uppercase.c for
details. For enhancements to phrase searching and
hyphenation suggestions, see translit.c.
It is possible to "weight" the relevance determination of
documents that are composed in one of the standardized
general markup languages, like TeX/LaTeX, or SGML. The
"weight" of the relevance of search matches depends on where
the words are found in the structure of the document, for
example, if the search was for "numerical" and "methods,"
\chapter{Numerical Methods} would be weighted "stronger"
than if the words were found in \section{Numerical Methods},
which in turn would be weighted "stronger" than if the words
were found in a paragraph. This would permit the relevance
of a document to be determined by how the author structured
the document. See eval.c for suggestions.
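As an illustration of such a weighting scheme (the context names and multipliers below are hypothetical assumptions, not part of the released sources):

```c
/* Hypothetical structural weighting: a match found in a chapter title
   counts more than one in a section title, which in turn counts more
   than one in running text.  The multipliers are illustrative only. */
enum context { CTX_CHAPTER, CTX_SECTION, CTX_PARAGRAPH };

/* return the multiplier for the document context a match was found in */
static int context_weight(enum context ctx)
{
    switch (ctx)
    {
        case CTX_CHAPTER:
            return 4;
        case CTX_SECTION:
            return 2;
        default:
            return 1;
    }
}

/* weighted count: raw match count scaled by where the match occurred */
static long weighted_count(long raw, enum context ctx)
{
    return raw * (long) context_weight(ctx);
}
```

The relevance computation in eval.c would then consume the weighted counts instead of the raw counts.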
The list of identifiers in the search argument can be
printed to stdout, possibly preceded by a '+' character and
separated by '|' characters, to make an egrep(1) compatible
search argument, which could, conceivably, be used as the
search argument in a browser, so that something like:
browse `rel arg directory`
would automatically search the directory for arg, load
the files into the browser, and skip to the first instance
of an identifier, with one button scanning to the next
instance, and so on. See postfix.c for suggestions.
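A minimal sketch of building such an egrep(1) alternation from the identifier list (the function name and buffer handling are illustrative assumptions, not postfix.c's interface):

```c
#include <string.h>

/* Join the identifiers from the search argument into an egrep(1)
   compatible alternation, e.g. {"these", "those"} -> "these|those".
   Returns the number of characters written, or -1 if `out` (of size
   outlen) is too small. */
static int egrep_arg(char *out, size_t outlen, const char **words, int nwords)
{
    size_t used = 0;
    int i;

    out[0] = '\0';

    for (i = 0; i < nwords; i++)
    {
        size_t need = strlen(words[i]) + (i > 0 ? 1 : 0);

        if (used + need + 1 > outlen)
            return -1;          /* buffer too small */

        if (i > 0)
            out[used++] = '|';  /* alternation separator */

        strcpy(out + used, words[i]);
        used += strlen(words[i]);
    }

    return (int) used;
}
```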
The source architecture is highly modularized to facilitate
adapting the program to different environments and
applications. For example, a "mailbot" can be constructed by
eliminating searchpath.c and constructing a list of postfix
stacks, with perhaps an email address element added to each
postfix stack, in such a manner that the program could be
used to scan incoming mail and, if the mail was relevant to
any postfix criteria, forward it to the recipient.
The program is capable of running as a wide area,
distributed, full text information retrieval system. A
possible scenario would be to distribute a large database in
many systems that are internetworked together, presumably
via the Unix inet facility, with each system running a copy
of the program. Queries would be submitted to the systems,
and the systems would return individual records containing
the count of matches to the query, and the file name
containing the matches, perhaps with the machine name, in
such a manner that the records could be sorted on the "count
field," and a network wide "browser" could be used to view
the documents, or a script could be made to use the "r
suite" to transfer the documents into the local
machine. Obviously, the queries would be run in parallel on
the machines in the network, so concurrency would not be an
issue. See the function main(), below, for suggestions.
References:
1) "Information Retrieval, Data Structures &
Algorithms," William B. Frakes, Ricardo Baeza-Yates,
Editors, Prentice Hall, Englewood Cliffs, New Jersey
07632, 1992, ISBN 0-13-463837-9.
The sources for many of the algorithms presented in
1) are available by ftp, ftp://ftp.vt.edu:/pub/reuse/ircode.tar.Z
2) "Text Information Retrieval Systems," Charles
T. Meadow, Academic Press, Inc, San Diego, 1992, ISBN
0-12-487410-X.
3) "Full Text Databases," Carol Tenopir, Jung Soon Ro,
Greenwood Press, New York, 1990, ISBN
0-313-26303-5.
4) "Text and Context, Document Processing and
Storage," Susan Jones, Springer-Verlag, New York, 1991,
ISBN 0-387-19604-8.
5) ftp ftp://think.com:/wais/wais-corporate-paper.text
6) ftp ftp://cs.toronto.edu:/pub/lq-text.README.1.10
Related information retrieval software:
Wais, available by ftp, ftp://think.com:/wais/wais-8-b5.1.tar.Z.
Lq-text, available by ftp, ftp://cs.toronto.edu:/pub/lq-text1.10.tar.Z.
Qt, available by ftp, ftp://ftp.uu.net:/usenet/comp.sources/unix/volume27/.
The general program strategy:
1) Translate the infix notation of the first
non-switch argument specified on the command line into a
postfix notation list.
2) Compile each token in the postfix notation list,
from 1), into a Boyer-Moore-Horspool-Sunday compatible
jump table.
3) Recursively descend into all directories that are
listed on the remainder of the command line, searching
each file in each directory, using the
Boyer-Moore-Horspool-Sunday algorithm, for the counts of
incidences of each word in the postfix notation list; at
the conclusion of the search of each file, evaluate the
postfix notation list to determine the relevance of the
file, and if the relevance is greater than zero, add the
filename and relevance value to the relevance
list.
4) Quick sort the relevance list from 3), on the
relevance values, and print the filename of each element
in the relevance list.
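The infix to postfix translation of 1) can be sketched with a conventional operator stack. The simplified version below assumes single-character operands and equal operator precedence; the real lexer in lexicon.c parses full words, and postfix.c verifies the syntax:

```c
/* Translate an infix boolean expression, e.g. "(a&b)|c", into postfix,
   e.g. "ab&c|".  Operands are single letters here for brevity; '&', '|'
   and '!' are treated with equal precedence, left associative.
   Returns 0 on success, -1 on unbalanced parentheses. */
static int infix_to_postfix(const char *infix, char *out)
{
    char stack[128];        /* operator stack */
    int top = -1;
    unsigned long n = 0;

    for (; *infix != '\0'; infix++)
    {
        char c = *infix;

        if (c == ' ')
            continue;
        else if (c == '(')
            stack[++top] = c;
        else if (c == ')')
        {
            while (top >= 0 && stack[top] != '(')
                out[n++] = stack[top--];
            if (top < 0)
                return -1;  /* no matching '(' */
            top--;          /* discard the '(' */
        }
        else if (c == '&' || c == '|' || c == '!')
        {
            /* equal precedence, left associative: emit earlier operators */
            while (top >= 0 && stack[top] != '(')
                out[n++] = stack[top--];
            stack[++top] = c;
        }
        else
            out[n++] = c;   /* operand: emit immediately */
    }

    while (top >= 0)
    {
        if (stack[top] == '(')
            return -1;      /* unclosed '(' */
        out[n++] = stack[top--];
    }

    out[n] = '\0';
    return 0;
}
```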
Module descriptions:
The module uppercase.c constructs an array of
MAX_ALPHABET_SIZE characters, in such a manner that each
element contains the toupper() of its own index value,
(ie., it is a look up table for uppercase characters,) and
is called from main() for initialization in rel.c. The
array's use is to make a locale specific, fast, uppercase
character translator, and it is used in lexicon.c and
searchfile.c to translate the first argument of the
command line, and file data, respectively, to uppercase
characters.
- note: care must be exercised when using this array
on systems where the native type of char is signed, for
example:
signed char ch;
unsigned char cu;
cu = uppercase[ch];
- will not give the desired results if ch is negative,
since a negative value indexes a section before the start
of the array, (which does not exist;) write
uppercase[(unsigned char) ch] instead. Particularly
meticulous usage of lint is advisable.
See uppercase.c and translit.c for suggestions in
implementing hyphenation and phrase searching
strategies.
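The construction of the table can be sketched as follows, assuming MAX_ALPHABET_SIZE is UCHAR_MAX + 1 (the released uppercase.c may differ in details):

```c
#include <ctype.h>
#include <limits.h>

/* Look up table with one entry per unsigned char value, so that
   uppercase[(unsigned char) ch] is a fast, table-driven, locale-aware
   replacement for calling toupper() on every character of file data. */
static unsigned char uppercase[UCHAR_MAX + 1];

static void init_uppercase(void)
{
    int i;

    /* toupper() is defined for all unsigned char values, and honors
       the current locale's character classification */
    for (i = 0; i <= UCHAR_MAX; i++)
        uppercase[i] = (unsigned char) toupper(i);
}
```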
The module translit.c translates all of the
characters in an array, using the array established in
uppercase.c. See translit.c and uppercase.c for
suggestions in implementing hyphenation and phrase
searching strategies.
The module lexicon.c parses the first argument of
the command line into tokens, and is repetitively called
by postfix.c for each token in the first argument of the
command line. Lexicon.c uses a simple state machine to
parse the tokens from the argument.
The module postfix.c translates the first argument
of the command line from infix notation to a postfix
notation list, and is called from main() in rel.c. Syntax
of the infix expression is also verified in this
module.
The module bmhsearch.c contains all of the
Boyer-Moore-Horspool-Sunday (BMH) string search functions,
including the bmhcompile_postfix() function which is
called from main() in rel.c, to compile each token in the
postfix notation list into a jump table, and the
bmhsearch_list() function which is called repetitively to
search each file in searchfile.c. See the bmhsearch.c
module for a complete description of the assembled data
structures.
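A minimal sketch of the Sunday variant of the BMH search follows; the jump table shifts the search window based on the text character just past the window. The data structures in bmhsearch.c are more elaborate (one compiled table per postfix token); this shows the shift rule only:

```c
#include <limits.h>
#include <stddef.h>
#include <string.h>

/* Count the (possibly overlapping) occurrences of pat (length m) in
   text (length n), using the Sunday variant of Boyer-Moore-Horspool:
   on each attempt, shift by the jump value of the character just past
   the current window. */
static long bmhs_count(const char *text, size_t n,
                       const char *pat, size_t m)
{
    size_t jump[UCHAR_MAX + 1];
    size_t i, pos;
    long count = 0;

    if (m == 0 || m > n)
        return 0;

    /* a character not in the pattern shifts the window past itself */
    for (i = 0; i <= UCHAR_MAX; i++)
        jump[i] = m + 1;

    /* a character in the pattern shifts so its rightmost occurrence
       aligns with the text character that triggered the shift */
    for (i = 0; i < m; i++)
        jump[(unsigned char) pat[i]] = m - i;

    for (pos = 0; pos + m <= n; )
    {
        if (memcmp(text + pos, pat, m) == 0)
            count++;

        if (pos + m == n)
            break;

        pos += jump[(unsigned char) text[pos + m]];
    }

    return count;
}
```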
The module searchpath.c is a POSIX compliant,
recursive descent directory and file listing function that
is called from main() in rel.c to search files using the
module in searchfile.c.
The module searchfile.c is repetitively called from
searchpath() in searchpath.c to search each file found in
the directory descent, using the BMH string search
functions in bmhsearch.c. Searchfile.c uses POSIX
compliant functions to open, lock, read, and close each
file. The files are read locked for compatibility with
those systems that write lock files during write
operations with utilities, for example, like vi(1). This
provides concurrency control in a multi user
environment. Searchfile.c uses fcntl(2) to read lock the
file, and will wait if blocked by another process, (see
man fcntl(2).)
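The read lock can be sketched as follows; the function name is illustrative, not searchfile.c's actual interface. F_SETLKW blocks until any conflicting write lock is released:

```c
#include <fcntl.h>
#include <unistd.h>

/* Place an advisory read (shared) lock over the whole file with
   fcntl(2), waiting if a writer holds a write lock on it.
   Returns 0 on success, -1 on error. */
static int read_lock_file(int fd)
{
    struct flock lk;

    lk.l_type = F_RDLCK;    /* shared read lock */
    lk.l_whence = SEEK_SET;
    lk.l_start = 0;
    lk.l_len = 0;           /* 0 means "to end of file" */

    return fcntl(fd, F_SETLKW, &lk);
}
```

The lock is advisory, so it coordinates only with other programs that also use fcntl(2) locking, such as editors that write lock during saves.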
The module eval.c contains postfix_eval(), which is
called for each file searched in searchfile.c to compute
the relevance of the file by evaluating the postfix
notation list; the functions that compute the "and," "or,"
and "not" evaluations are contained in this module. If the
value of the relevance computed is greater than zero, an
element is allocated, and added to the relevance
list. This module also contains a description of how the
document's relevance is determined.
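One plausible set of evaluation rules, for illustration only (eval.c documents the actual rules): each operand is a word's match count, "&" yields the combined count only if both operands matched, "|" adds the counts, and "!" discards the left count if the right word matched.

```c
/* Illustrative postfix operator semantics over match counts.
   These rules are an assumption; see eval.c for the real ones. */

/* "and": both words must be present for the file to qualify */
static long eval_and(long a, long b)
{
    return (a > 0 && b > 0) ? a + b : 0;
}

/* "or": either word contributes to the relevance */
static long eval_or(long a, long b)
{
    return a + b;
}

/* "not": the right operand's presence disqualifies the left */
static long eval_not(long a, long b)
{
    return (b > 0) ? 0 : a;
}
```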
The module qsortlist.c is a general function that
is used to quick sort a linked list, in this case the
relevance list, and is called from main() in
rel.c.
The module rel.c contains main(), which is the main
dispatch function to all program operations.
The module relclose.c is called to shut down all
operations, deallocate all allocated memory, and close all
directories and files that may have been opened by this
program. For specifics, see below under "Exception and
fault handling," and relclose.c.
The module message.c is a general error message
look up table, for printing error messages in a systematic
manner, for all modules in the program. This module may
contain port specific error messages that are unique to a
specific operating system. For specifics, see
message.c.
The module version.c contains only the version of
the program, and serves as a place holder for information
from the revision control system for automatic version
control.
The module stack.h contains defines for all list
operations in all modules. The lists are treated as
"stacks," and this module contains the PUSH() and POP()
defines for the stack operations. This module is general,
and is used on many different types of data
structures. For structure element requirements, see
stack.h.
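The PUSH() and POP() defines can be sketched like this; the element type here is illustrative, since stack.h only requires that each structure carry a suitable next pointer:

```c
#include <stddef.h>

/* Generic singly-linked "stack" macros: any structure with a `next`
   pointer of its own type can be pushed and popped. */
#define PUSH(head, node) ((node)->next = (head), (head) = (node))
#define POP(head, node)  ((node) = (head), (head) = (head)->next)

/* illustrative element type, not stack.h's */
struct item
{
    struct item *next;  /* required by the macros */
    int value;
};
```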
The module memalloc.c is used as a general memory
allocation routine, and contains functions for allocating
memory and making a list of the allocated memory areas,
such that the memory may be deallocated when the program
exits, perhaps under exception or fault conditions.
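A sketch of such a tracked allocator follows (the names are illustrative, not memalloc.c's): every allocation is chained onto a list so that a single call can release everything, even when the program is torn down from an interrupt handler.

```c
#include <stdlib.h>

/* one bookkeeping node per allocation */
struct alloc_node
{
    struct alloc_node *next;
    void *mem;
};

static struct alloc_node *alloc_list = NULL;

/* allocate size bytes and remember the block on alloc_list;
   returns NULL if either allocation fails */
static void *tracked_alloc(size_t size)
{
    struct alloc_node *node = malloc(sizeof *node);
    void *mem = malloc(size);

    if (node == NULL || mem == NULL)
    {
        free(node);
        free(mem);
        return NULL;
    }

    node->mem = mem;
    node->next = alloc_list;
    alloc_list = node;
    return mem;
}

/* release every tracked block and the bookkeeping list itself */
static void tracked_free_all(void)
{
    while (alloc_list != NULL)
    {
        struct alloc_node *next = alloc_list->next;

        free(alloc_list->mem);
        free(alloc_list);
        alloc_list = next;
    }
}
```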
Note that all file and directory operations are POSIX
compliant for portability reasons.
Exception and fault handling:
Since this program is a full text information retrieval
system, it is not unreasonable to assume that some of the
modules may find application in client/server
architectures. This places constraints on how the program
handles fault and exception issues. Note that it is not
unreasonable to assume that a signal interrupt does NOT cause
the program to exit in a client/server environment, and,
therefore, there can be no reliance on exit() to deallocate
memory, close files and directories, etc. Specifically, the
program must be capable of vectoring to a routine that
deallocates any and all memory that has been allocated, and
closes all files and directories that have been opened to
prevent "memory leaks" and file table overflows. Since the
modules are involved in list operations, in recursive
functions, a strategy must be deployed that unconditionally
deallocates all allocated memory, closes all files and
directories, and resets all variables in the program to
their initial "state."
The basic strategy to address the issues of exception and
fault handling in client/server architectures is to
centralize memory allocation, and file and directory
functions, in such a manner that shutdown routines can be
called from relclose() that will deallocate all memory
allocated, (memdealloc() in memalloc.c,) and close any files
and/or directories, (int_searchfile() in searchfile.c, and
int_searchpath() in searchpath.c,) that may have been
opened. The function relclose(), in relclose.c, is installed
as an "interrupt handler" in main(), in rel.c.
Constructional and stylistic issues generally follow a
compromise among the following references:
"C A Reference Manual", Samuel P. Harbison, Guy L.
Steele Jr. Prentice-Hall. 1984
"C A Reference Manual, Second Edition", Samuel P.
Harbison, Guy L. Steele Jr. Prentice-Hall, 1987
"C Programming Guidelines", Thomas Plum. Plum
Hall, 1984
"C Programming Guidelines, Second Edition", Thomas
Plum. Plum Hall, 1989
"Efficient C", Thomas Plum, Jim Brodie. Plum Hall,
1985
"Fundamental Recommendations on C Programming
Style", Greg Comeau. Microsoft Systems Journal, vol 5,
number 3, May, 1990
"Notes on the Draft C Standard", Thomas Plum. Plum
Hall, 1987
"Portable C Software", Mark R. Horton. Prentice
Hall, 1990
"Programming Language - C", ANSI X3.159-1989.
American National Standards Institute, 1989
"Reliable Data Structures", Thomas Plum. Plum
Hall, 1985
"The C Programming Language", Brian W. Kernighan
and Dennis M. Ritchie. Prentice-Hall, 1978
Since each module is autonomous, (with the exception of
service functions,) each module has an associated ".h"
include file that declares function prototypes of external
scoped variables and functions. These files are made
available to other modules by being included in rel.h, which
is included in every module's "c" source file. One of the
issues is that an include file may not have been read before
a variable declared in the include file is used in another
include file, (there are several circular dependencies in
the include files.) To address this issue, each module's
include file sets a preprocessor variable the first time it
is read by the compiler, and if this variable is set, any
subsequent reads are skipped. The variable name is generally
of the form of the module name, concatenated with
"_H".
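For example, an include file for a hypothetical module lexicon would be bracketed as:

```c
/* illustrative include guard; "lexicon" stands in for the module name */
#ifndef LEXICON_H
#define LEXICON_H

/* declarations of the module's external scoped variables and
   function prototypes go here */

#endif /* LEXICON_H */
```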
Each "c" source file and associated include file has an
"rcsid" static character array that contains the revision
control system "signatures" for that file. This information
is included, for both the "c" source file and its associated
include file, in all object modules for audit and
maintenance.
If the stylistics listed below are annoying, the indent
program from the GNU project, (anonymous ftp to
prep.ai.mit.edu in /pub/gnu,) is available to convert from
these stylistics to any desired.
Both ANSI X3.159-1989 and Kernighan and Ritchie standard
declarations are supported, with a typical construct:
#ifdef __STDC__
ANSI declarations.
#else
K&R declarations.
#endif
Brace/block declarations and constructs use the
stylistic, for example:
for (this < that; this < those; this ++)
{
that --;
}
as opposed to:
for (this < that; this < those; this ++) {
that --;
}
Nested if constructs use the stylistic, for example:
if (this)
{
if (that)
{
.
.
.
}
}
as opposed to:
if (this)
if (that)
.
.
.
The comments in the source code are verbose, beyond the
necessity of commenting the program operation, and the one
liberty taken was to write the code on a 132 column
display. Many of the comments in the source code occupy the
full 132 columns, (but do not break up the code's flow with
interline comments,) and are inconvenient to view in text
editors like vi(1). If the verbose comments are annoying,
see the ./README file for a sed(1) script to remove the
comments.
- John Conover
- john@email.johncon.com
- January 6, 2004
A license is hereby granted to reproduce this software
source code and to create executable versions from this source
code for personal, non-commercial use. The copyright notice
included with the software must be maintained in all copies
produced.
THIS PROGRAM IS PROVIDED "AS IS". THE AUTHOR PROVIDES NO
WARRANTIES WHATSOEVER, EXPRESSED OR IMPLIED, INCLUDING
WARRANTIES OF MERCHANTABILITY, TITLE, OR FITNESS FOR ANY
PARTICULAR PURPOSE. THE AUTHOR DOES NOT WARRANT THAT USE OF
THIS PROGRAM DOES NOT INFRINGE THE INTELLECTUAL PROPERTY
RIGHTS OF ANY THIRD PARTY IN ANY COUNTRY.
So there.
Copyright © 1994-2011, John Conover, All Rights
Reserved.
Comments and/or bug reports should be addressed to:
- john@email.johncon.com
- http://www.johncon.com/
- http://www.johncon.com/ntropix/
- http://www.johncon.com/ndustrix/
- http://www.johncon.com/nformatix/
- http://www.johncon.com/ndex/