path: root/torrus/doc/devdoc
Diffstat (limited to 'torrus/doc/devdoc')
-rw-r--r--  torrus/doc/devdoc/architecture.pod           511
-rw-r--r--  torrus/doc/devdoc/devdiscover.pod            296
-rw-r--r--  torrus/doc/devdoc/progstyle.pod              138
-rw-r--r--  torrus/doc/devdoc/reqs.0.0.pod               166
-rw-r--r--  torrus/doc/devdoc/reqs.0.1.pod               210
-rw-r--r--  torrus/doc/devdoc/torrus_roadmap.pod         249
-rw-r--r--  torrus/doc/devdoc/wd.distributed.pod         198
-rw-r--r--  torrus/doc/devdoc/wd.messaging.pod           128
-rw-r--r--  torrus/doc/devdoc/wd.monitor-escalation.pod  117
-rw-r--r--  torrus/doc/devdoc/wd.uptime-mon.pod          162
10 files changed, 2175 insertions, 0 deletions
diff --git a/torrus/doc/devdoc/architecture.pod b/torrus/doc/devdoc/architecture.pod
new file mode 100644
index 000000000..4cf9c9ccb
--- /dev/null
+++ b/torrus/doc/devdoc/architecture.pod
@@ -0,0 +1,511 @@
+# architecture.pod: The Torrus internals
+# Copyright (C) 2002-2005 Stanislav Sinyagin
+#
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 2 of the License, or
+# (at your option) any later version.
+#
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with this program; if not, write to the Free Software
+# Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307, USA.
+
+# $Id: architecture.pod,v 1.1 2010-12-27 00:04:37 ivan Exp $
+# Stanislav Sinyagin <ssinyagin@yahoo.com>
+#
+#
+
+=head1 Torrus Framework Architecture
+
+=head2 Configuration Processing
+
+The XML configuration is compiled into the database representation
+upon the operator's manual request.
+
+The compiled version of the configuration is not a one-to-one
+representation of the XML version: all aliases and templates are
+expanded. Backward restoration of XML from the database
+is available via the snapshot utility.
+
+Aliases are a way to present the data in a more convenient layout.
+An alias can point to a subtree or a leaf, and it works much like
+a symbolic link in a filesystem.
+
+A template defines a piece of configuration which can be used in
+multiple places. Templates can be nested.
+
+The configuration can consist of several XML files. They will be
+processed in the specified order. Each new file is treated as additive
+information to the existing tree.
+
+The XML configuration compiler validates all the mandatory parameters.
+
+
+=head2 Database Architecture
+
+All runtime data is stored in a
+B<Berkeley DB> database environment (http://www.sleepycat.com).
+
+The compiled version of the configuration XML is stored in
+the B<ds_config_DSINST.db> and B<other_config_OINST.db>.
+These databases have similar structure, and
+B<ds_config_DSINST.db> keeps all datasource-related information.
+C<DSINST> and C<OINST> stand for the productive instance number,
+and have values of 0 or 1.
+Current productive instance numbers are stored in the
+B<db_config_instances.db> database.
+
+For each datasource tree, the database files reside in the
+F</var/torrus/db/sub/E<lt>tree_nameE<gt>> directory.
+
+The runtime modules access the configuration via C<ConfigTree> objects.
+
+Each datasource subtree or leaf is identified by a I<token>.
+A token is a short alphanumeric string, generated internally.
+Two types of tokens are recognized: single tokens and token sets.
+
+A single token starts with the letter I<T>. The rest is made of decimal
+digits.
+
+A token set name starts with the letter I<S>. The rest is an arbitrary
+sequence of word characters.
+
+The special token I<SS> is reserved for the list of token sets. Token set
+parameters are inherited from this token's parameters.
+
+View and monitor names must be unique, and must
+start with a lower case letter.
+
+B<db_config_instances.db> is a I<Hash> database, with keys of form
+C<ds:E<lt>tree_nameE<gt>> or C<other:E<lt>tree_nameE<gt>>, and 0 or 1 as
+values. Also, the compiler adds an entry C<compiling:E<lt>tree_nameE<gt>>
+during compilation, in order to avoid two compiler processes
+running at the same time on the same tree.
+
+B<ds_config_DSINST.db> is a I<Btree> database, with the keys and values
+defined as follows:
+
+=over 4
+
+=item * tp:E<lt>pathE<gt> -- E<lt>tokenE<gt>
+
+Gives the token corresponding to the given path.
+
+=item * pt:E<lt>tokenE<gt> -- E<lt>pathE<gt>
+
+Gives the path name for the given token.
+
+=item * c:E<lt>tokenE<gt> -- E<lt>ctokenE<gt>,...
+
+For a given subtree, contains the comma-separated list of child tokens.
+
+=item * p:E<lt>tokenE<gt> -- E<lt>ptokenE<gt>
+
+For a given subtree or leaf, contains the parent token.
+
+=item * P:E<lt>tokenE<gt>:E<lt>pnameE<gt> -- E<lt>valueE<gt>
+
+Contains the parameter value for the specified leaf or subtree.
+Each leaf or subtree inherits parameters from its parent.
+Thus, if a parameter is not defined locally, we must climb up the tree
+in order to get its value (see the sketch after this list).
+
+=item * Pl:E<lt>tokenE<gt> -- E<lt>pnameE<gt>,...
+
+Contains the list of parameter names for the specified leaf or subtree.
+
+=item * a:E<lt>tokenE<gt> -- E<lt>tokenE<gt>
+
+If this subtree or leaf is an alias, specifies the reference to the real node.
+
+=item * ar:E<lt>tokenE<gt> -- E<lt>tokenE<gt>,...
+
+Specifies all alias subtrees or leaves pointing to this token.
+
+=item * d:E<lt>nameE<gt> -- E<lt>valueE<gt>
+
+Definition value for the given name.
+
+=item * D: -- E<lt>nameE<gt>,E<lt>nameE<gt>,...
+
+List of all known definitions.
+
+=item * n:E<lt>tokenE<gt> -- E<lt>typeE<gt>
+
+Defines a node type. Type is a number with the following values:
+0 for subtree, 1 for leaf, 2 for alias.
+
+=back
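+
+For illustration, here is a minimal sketch of the parameter lookup with
+upward climbing; the C<$db-E<gt>get()> accessor is hypothetical and stands
+for a read from the I<Btree> database by key:
+
+ sub getNodeParam
+ {
+     my( $db, $token, $pname ) = @_;
+
+     while( defined( $token ) )
+     {
+         my $value = $db->get( 'P:' . $token . ':' . $pname );
+         return $value if defined( $value );
+
+         # not defined locally: climb up to the parent node
+         $token = $db->get( 'p:' . $token );
+     }
+     return undef;
+ }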
+
+B<other_config_OINST.db> is a I<Btree> database, with the keys and values
+defined as follows:
+
+=over 4
+
+=item * ConfigurationReady -- 1:0
+
+When nonzero, the configuration is ready for usage.
+Otherwise, the configuration status is undefined.
+
+=item * P:E<lt>nameE<gt>:E<lt>pnameE<gt> -- E<lt>valueE<gt>
+
+Contains the parameter value for the specified view, monitor, or action.
+
+=item * Pl:E<lt>nameE<gt> -- E<lt>pnameE<gt>,...
+
+Contains the list of parameter names for the specified view,
+monitor, or action.
+
+=item * V: -- E<lt>vnameE<gt>,...
+
+Specifies the comma-separated list of all views defined.
+
+=item * v:E<lt>tokenE<gt> -- E<lt>vnameE<gt>,...
+
+Specifies the comma-separated list of view names for the given path.
+The first view in the list is interpreted as the default.
+
+=item * M: -- E<lt>mnameE<gt>,...
+
+Specifies the comma-separated list of all monitor names defined.
+
+=item * A: -- E<lt>anameE<gt>,...
+
+Comma-separated list of actions defined.
+
+=back
+
+
+
+
+B<paramprops_DSINST.db> is a I<Btree> database for storing the
+datasource parameter properties, such as expandable, list parameters,
+searchable, etc.:
+
+=over 4
+
+=item * E<lt>pnameE<gt>:E<lt>propertyE<gt> -- E<lt>valueE<gt>
+
+=back
+
+
+
+
+
+B<aliases_DSINST.db> is a I<Btree> database with alias paths as keys
+and target tokens as values. It is used for quick alias expansion.
+
+B<tokensets_DSINST.db> is a I<Hash> database containing the token sets.
+The keys and values are as follows:
+
+=over 4
+
+=item * S: -- E<lt>tokensetE<gt>,...
+
+Keeps the list of all known token sets.
+
+=item * s:E<lt>tokensetE<gt> -- E<lt>tokenE<gt>,...
+
+For a given token set, keeps its contents.
+
+=item * o:E<lt>tokensetE<gt>:E<lt>tokenE<gt> -- E<lt>originE<gt>
+
+Defines the origin of the member. Currently two types of origin are known:
+C<static> and C<monitor>.
+
+=back
+
+B<nodepcache_DSINST.db> is a I<Btree> database containing the cached
+node parameter values. The keys and values are as follows:
+
+=over 4
+
+=item * E<lt>nameE<gt>:E<lt>pnameE<gt> -- E<lt>statusE<gt>E<lt>valueE<gt>
+
+Keeps the status and the value for a given token and parameter.
+Status is a one-byte prefix, with the value C<U> for an undefined parameter,
+and C<D> for a parameter with a value.
+
+=back
+
+
+B<nodeid_DSINST.db> is a I<Btree> database that stores the mapping between
+NodeID values and tokens. Database keys are NodeID strings, and values
+are tokens. One NodeID corresponds to at most one token.
+
+
+
+B<config_readers.db> is a I<Hash> database which contains the information
+about active processes which read the configuration. The configuration
+compiler waits until all readers finish using the current configuration
+database. Process IDs are used as keys, and the values contain timestamps.
+
+B<timestamps.db> is a I<Hash> database containing various kinds of
+timestamps. The timestamp name is the key, and the number of seconds
+since epoch is the value.
+
+B<render_cache.db> keeps the status information about the graphs
+ready to display. The last known timestamp of the configuration is
+compared with the actual one. When the actual timestamp
+differs from the known one, the renderer cache is cleaned up.
+This is a I<Hash> database, with the following
+keys and values:
+
+=over 4
+
+=item * E<lt>tokenE<gt>:E<lt>vnameE<gt> --
+ E<lt>t_renderE<gt>:E<lt>t_expiresE<gt>:E<lt>filenameE<gt>:E<lt>mime_typeE<gt>
+
+For the leaf/subtree and view name given, specifies two timestamps: the
+moment of last rendering and the expiration time. The filename is an
+automatically generated unique name in the spool directory. The content
+type is determined by the MIME type.
+
+=back
+
+B<monitor_cache.db> is a I<Hash> database used in order to avoid an
+unnecessary configuration tree walk. The keys are the leaf tokens, and
+the values are comma-separated monitor names. At each monitor invocation,
+the configuration timestamp is compared against the last known one, and the
+cache database is rebuilt if needed.
+
+B<monitor_alarms.db> is a I<Hash> database that keeps alarm status information
+from previous runs of Monitor, with the keys and values as follows:
+
+=over 4
+
+=item * E<lt>mnameE<gt>:E<lt>pathE<gt> --
+E<lt>t_setE<gt>:E<lt>t_expiresE<gt>:E<lt>statusE<gt>:
+E<lt>t_last_changeE<gt>
+
+Key consists of the monitor name and leaf path. In the value, B<t_set>
+is the time when the alarm was raised. If two subsequent runs of Monitor
+raise the same alarm, B<t_set> does not change. B<t_expires> is the
+timestamp until which it is still important to keep the entry after the
+alarm is cleared. B<status> is 1 if the alarm is active, and 0 otherwise.
+B<t_last_change> is the timestamp of the last status change.
+
+When B<status> is 1, the record is kept regardless of timestamps.
+When B<status> is 0, and the current time is past B<t_expires>,
+the record is no longer reliable and may be deleted by Monitor (see the
+sketch after this list).
+
+=back
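+
+A minimal sketch of the record retention rule described above; the
+variables are assumed to hold the unpacked record fields:
+
+ # active alarms are kept unconditionally; cleared alarms
+ # are kept only until their expiration timestamp
+ my $keep = $status ? 1 : ( time() <= $t_expires );
+ if( not $keep )
+ {
+     # Monitor may safely delete the record
+ }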
+
+B<collector_tokens_X_Y.db> is a I<Hash> database used in order to avoid an
+unnecessary configuration tree walk. X is the collector instance number, and
+Y is the datasource configuration instance number.
+Keys and values are as follows:
+
+=over 4
+
+=item * E<lt>tokenE<gt> -- E<lt>periodE<gt>:E<lt>offsetE<gt>
+
+For each leaf token, period and time offset values are stored.
+
+=back
+
+
+B<scheduler_stats.db> is a I<Btree> database which stores the runtime
+statistics of Scheduler tasks. Each key is of structure
+B<E<lt>tasknameE<gt>:E<lt>periodE<gt>:E<lt>offsetE<gt>:E<lt>variableE<gt>>,
+and the value is a number representing the current value of the variable.
+Depending on variable purpose, the number is floating point or integer.
+
+
+B<users.db> is a I<Hash> database containing user details, passwords,
+and group membership:
+
+=over 4
+
+=item * ua:E<lt>uidE<gt>:E<lt>attrE<gt> -- E<lt>valueE<gt>
+
+User attributes, such as C<cn> (Common name) or C<userPassword>, are stored
+here. For each user, there is a record consisting of the attribute C<uid>,
+with the value equal to the user identifier.
+
+=item * uA:E<lt>uidE<gt> -- E<lt>attrE<gt>, ...
+
+Comma-separated list of attribute names for the given user.
+
+=item * gm:E<lt>uidE<gt> -- E<lt>groupE<gt>, ...
+
+For each user ID, stores the comma-separated list of groups it belongs to.
+
+=item * ga:E<lt>groupE<gt>:E<lt>attrE<gt> -- E<lt>valueE<gt>
+
+Group attributes, such as group description.
+
+=item * gA:E<lt>groupE<gt> -- E<lt>attrE<gt>, ...
+
+Comma-separated list of attribute names for the given group.
+
+=item * G: -- E<lt>groupE<gt>, ...
+
+List of all groups.
+
+=back
+
+
+B<acl.db> is a I<Hash> database containing group privileges information:
+
+=over 4
+
+=item * u:E<lt>groupE<gt>:E<lt>objectE<gt>:E<lt>privilegeE<gt> -- 1
+
+The entry exists if and only if the group members have this privilege
+over the given object. The most common privilege is C<DisplayTree>, where
+the object is the tree name.
+
+=back
+
+
+B<serviceid_params.db> is a I<Btree> database containing properties
+for each Service ID (exported collector information, usually stored in
+an SQL database):
+
+=over 4
+
+=item * a: -- E<lt>serviceidE<gt>,...
+
+Lists all known service IDs.
+
+=item * t:E<lt>treeE<gt> -- E<lt>serviceidE<gt>,...
+
+Lists service IDs exported by a given datasource tree.
+
+=item * p:E<lt>serviceidE<gt>:E<lt>paramE<gt> -- E<lt>valueE<gt>
+
+Parameter value for a given service ID. Mandatory parameters are:
+C<tree>, C<token>, C<dstype>. Optional: C<units>.
+
+=item * P:E<lt>serviceidE<gt> -- E<lt>paramE<gt>, ...
+
+List of parameter names for a service ID.
+
+=back
+
+
+B<searchwords.db> is a I<Btree> database with DB_DUP and DB_DUPSORT flags.
+It contains the search strings for the given tree:
+
+=over 4
+
+=item * E<lt>keywordE<gt> -- E<lt>pathE<gt>[:E<lt>paramE<gt>]
+
+For a given keyword, refers to the path of a node that contains this word.
+If the node name matches the keyword, the I<param> element
+is omitted. Otherwise it refers to the parameter that matches the keyword.
+
+=back
+
+
+
+B<globsearchwords.db> is a I<Btree> database with DB_DUP and DB_DUPSORT flags.
+It contains the search strings for all trees:
+
+=over 4
+
+=item * E<lt>keywordE<gt> -- E<lt>treeE<gt>:E<lt>pathE<gt>[:E<lt>paramE<gt>]
+
+For a given keyword, refers to the path of a node that contains this word.
+If the node name matches the keyword, the I<param> element
+is omitted. Otherwise it refers to the parameter that matches the keyword.
+
+=back
+
+
+B<snmp_failures_X.db> is a I<Btree> database containing SNMP collector
+failure information for a given collector instance of a tree.
+
+=over 4
+
+=item * c:E<lt>counterE<gt> -- E<lt>NE<gt>
+
+A counter with a name. Known names: I<unreachable>, I<removed>.
+
+
+=item * h:E<lt>hosthashE<gt> -- E<lt>failureE<gt>:E<lt>timestampE<gt>
+
+SNMP host failure information. Hosthash is a concatenation of hostname, UDP
+port, and SNMP community, separated by "|". Known failures: I<unreachable>,
+I<removed>. Timestamp is a UNIX time of the event.
+
+=item * m:E<lt>hosthashE<gt> -- E<lt>pathE<gt>:E<lt>timestampE<gt>
+
+MIB failures (I<noSuchObject>, I<noSuchInstance>, and I<endOfMibView>)
+for a given host, with the tree path of their occurrence and the UNIX
+timestamp.
+
+=item * M:E<lt>hosthashE<gt> -- E<lt>NE<gt>
+
+Count of MIB failures per SNMP host.
+
+=back
+
+
+
+
+
+
+
+=head2 Modular Structure
+
+The Torrus framework consists of several functional modules:
+
+=over 4
+
+=item * Configuration management
+
+Once the configuration XML files get changed, the configuration compiler
+should be run manually. This guarantees that the actual framework
+configuration is changed only when the files are ready.
+
+The configuration management module provides access methods for
+enumeration and querying of the configuration objects.
+
+=item * Data Collector module
+
+The collector program runs as a separate process for each datasource tree.
+Upon startup, it first runs all registered collectors. After that,
+the collectors are grouped by period and time offset, and launched
+periodically at the moments defined by the following formula (see also the
+sketch after this list):
+
+ time + period - (time mod period) + timeoffset
+
+The datasources are grouped by collector type.
+For SNMP collector type, the datasources are grouped by host.
+SNMP requests are sent in non-blocking mode (see Net::SNMP Perl module
+manual).
+
+For each SNMP host, the system uptime is verified. For RRD datasources of
+type "COUNTER", if a device reload is
+detected, the corresponding RRD file is updated with an "undefined"
+value at the calculated moment of reload.
+
+=item * Data threshold monitoring
+
+This module performs the monitoring tasks periodically, based on each
+monitored leaf's schedule.
+It checks the conditions for each leaf having a monitor.
+In case of an alarm, it executes the action instructions synchronously.
+
+=item * Rendering module
+
+Upon a request, this module generates the graph and HTML files for the
+requested view and its subviews. It first checks the availability of
+cached objects and avoids unneeded regeneration. It must be possible
+to force the renderer to flush the cache.
+
+=item * Web interface module
+
+The Web interface module passes the Renderer output to an HTTP client.
+
+
+=back
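+
+For illustration, a minimal Perl sketch of the collector scheduling formula
+above (subroutine and variable names are hypothetical):
+
+ # Next scheduled launch for a collector group with the given
+ # period and time offset, both expressed in seconds
+ sub nextLaunchTime
+ {
+     my( $now, $period, $offset ) = @_;
+     return $now + $period - ($now % $period) + $offset;
+ }
+
+ # Example: with $period = 300 and $offset = 10,
+ # at $now = 1000 the next launch happens at 1210.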
+
+=head1 Author
+
+Copyright (c) 2002-2005 Stanislav Sinyagin ssinyagin@yahoo.com
diff --git a/torrus/doc/devdoc/devdiscover.pod b/torrus/doc/devdoc/devdiscover.pod
new file mode 100644
index 000000000..8386c1755
--- /dev/null
+++ b/torrus/doc/devdoc/devdiscover.pod
@@ -0,0 +1,296 @@
+# devdiscover.pod - Guide to devdiscover
+# Copyright (C) 2003 Shawn Ferry, Stanislav Sinyagin
+#
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 2 of the License, or
+# (at your option) any later version.
+#
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with this program; if not, write to the Free Software
+# Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307, USA.
+
+# $Id: devdiscover.pod,v 1.1 2010-12-27 00:04:36 ivan Exp $
+# Shawn Ferry <sferry at sevenspace dot com> <lalartu at obscure dot org>
+# Stanislav Sinyagin <ssinyagin@yahoo.com>
+#
+
+=head1 Torrus SNMP Device Discovery Developer's Guide
+
+=head2 C<devdiscover> overview
+
+C<devdiscover> is an extensible, module-based SNMP device discovery
+utility. It is intended to automatically generate Torrus configuration
+files, based on SNMP discovery results and templates.
+
+See I<Torrus Command Reference> for command usage and functionality overview.
+
+In general, C<devdiscover> consists of the following files and functional
+parts:
+
+=over 4
+
+=item * C<bin/devdiscover.in>
+
+This file is installed as C<bin/devdiscover> in the Torrus installation
+directory, with certain variables substituted. The program provides all the
+commandline functionality and options processing. Once the CLI options are
+processed and verified, control is passed to the C<Torrus::DevDiscover>
+object.
+
+=item * C<Torrus::DevDiscover>
+
+This Perl module is responsible for the SNMP discovery process organization:
+
+=over 8
+
+=item *
+
+it registers the discovery modules;
+
+=item *
+
+establishes an SNMP session to the target host;
+
+=item *
+
+initiates a new C<Torrus::DevDiscover::DevDetails> object for the target host;
+
+=item *
+
+stores the connection-specific parameters to the device object;
+
+=item *
+
+for each registered discovery module, executes C<checkdevtype()> in
+I<sequential> order;
+
+=item *
+
+for those discovery modules which paid interest in this target host,
+executes C<discover()> in I<sequential> order;
+
+=item *
+
+upon request from C<bin/devdiscover>, builds the configuration
+XML tree, by calling C<buildConfig()> in I<sequential> order for each
+relevant discovery module for each target host.
+
+=back
+
+=item * C<Torrus::DevDiscover::DevDetails>
+
+This Perl module is defined in F<perllib/Torrus/DevDiscover.pm>, and provides
+the functionality to store the results of SNMP device discovery.
+
+=item * C<Torrus::ConfigBuilder>
+
+This module is an encapsulation wrapper for XML configuration builder.
+It provides methods for every element of Torrus configuration.
+
+=item * Discovery Modules
+
+These provide all the functionality for SNMP discovery. Normally
+one module covers one MIB, or sometimes several vendor-specific MIBs,
+and it is responsible for finding out the device details necessary
+for Torrus configuration building. Usually a discovery module refers to one or
+several I<template definition files>. A module may depend on
+other modules' discovery results. This is controlled by its
+C<sequence number>. Vendor-independent discovery modules are normally named
+as C<Torrus::DevDiscover::RFCXXXX_SOME_HUMAN_NAME>, and vendor-specific
+ones are named as C<Torrus::DevDiscover::Vendor[Product[Subsystem]]>.
+
+=item * Template definition files
+
+These are XML documents residing in F<xmlconfig/vendor> and
+F<xmlconfig/generic> directories. Each file is a piece of Torrus configuration,
+and contains definitions and templates for particular MIB or vendor.
+Generic template definition files are for vendor-independent MIBs,
+and normally they are named as F<rfcXXXX.some-human-name.xml>.
+Vendor-specific files are named as F<vendor.product[.subsystem].xml>.
+
+=back
+
+
+=head2 Discovery Module Internals
+
+Discovery modules are Perl packages with a few required components.
+Before creating your own modules, please read and follow the
+I<Torrus Programming Style Guide>.
+
+Upon initialization, C<Torrus::DevDiscover> loads the modules listed in
+the C<@Torrus::DevDiscover::loadModules> array. This array is pre-populated
+by standard module names in F<devdiscover-config.pl>.
+You can add new module names by pushing them onto this array in your
+local F<devdiscover-siteconfig.pl>.
+
+=head3 Module Registration
+
+Each discovery module should register itself in the DevDiscover registry.
+Normally there's only one registry entry per discovery module, though
+it's not a limitation. The registry entry is identified by a registry
+name, which normally repeats the module name.
+
+Example:
+
+ $Torrus::DevDiscover::registry{'RFC2790_HOST_RESOURCES'} = {
+ 'sequence' => 100,
+ 'checkdevtype' => \&checkdevtype,
+ 'discover' => \&discover,
+ 'buildConfig' => \&buildConfig
+ };
+
+Each registry entry must contain 4 fields:
+
+=over 4
+
+=item * C<sequence>
+
+The sequence number determines the order in which every discovery module's
+procedure is executed. Sequence numbers of dependent modules must
+be higher than those of their dependencies.
+
+Generic MIB discovery modules should have the sequence number 100. If
+a particular generic module depends on other generic modules, its sequence
+number may be 110.
+
+Vendor-specific modules should have the sequence number 500.
+Vendor-specific modules that depend on other vendor-specific modules
+should have the sequence number 510.
+
+Dependencies deeper than one level may exist, but it is recommended
+to avoid them; for most cases, this scheme should be enough.
+
+An exception is made for the C<RFC2863_IF_MIB> module, which has the
+sequence number 50. That is because it provides the basic interface
+discovery, and many other modules depend on its results.
+
+Another exception is vendor-specific modules where the SNMP session
+parameters must be set as early as possible. One such parameter is
+C<snmp-max-msg-size>: some vendor SNMP agents would not be walked properly
+without this setting. In these cases, the sequence number is below 50;
+the recommended value is 30.
+
+=item * C<checkdevtype>
+
+Must be a subroutine reference. This subroutine is called with two object
+references as arguments: C<Torrus::DevDiscover> and
+C<Torrus::DevDiscover::DevDetails>.
+The purpose of this subroutine is to determine if the target host is
+of the required type, or if it supports the required MIB.
+The subroutine should return true if and only if the target host
+supports the MIB variables this module is supposed to discover.
+
+In general, the C<checkdevtype> subroutine is small, and checks the
+presence or the values of one or several OIDs on the host, e.g. the value
+of the I<sysObjectID> variable. It should perform as few SNMP requests as
+possible, in order to speed up the pre-discovery process (see also the
+skeleton after this list).
+
+=item * C<discover>
+
+Must be a subroutine reference. This subroutine is called with the same
+two arguments as C<checkdevtype()>, and only for those modules
+whose C<checkdevtype()> returned true. The subroutine should return true
+if no errors occurred during the discovery.
+
+The purpose of C<discover()> is to perform the actual SNMP discovery,
+and prepare the parameter values for future XML configuration.
+
+=item * C<buildConfig>
+
+Must be a subroutine reference. This subroutine is called with three object
+references as arguments: C<Torrus::DevDiscover::DevDetails>,
+C<Torrus::ConfigBuilder>, and an XML element object, which should be used only
+to pass data to ConfigBuilder methods.
+
+This subroutine is designed to construct the resulting XML configuration
+subtree as a child of a given XML element. Upper level subtrees
+are handled by CLI options processing code.
+
+=back
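+
+For illustration, a minimal module skeleton combining the pieces described
+above. The module name, OID, capability name, and template name are
+hypothetical, and the helper calls are modeled on the standard modules
+(see F<RFC2790_HOST_RESOURCES.pm> for a real example):
+
+ package Torrus::DevDiscover::MYTEST_EXAMPLE;
+
+ use strict;
+
+ $Torrus::DevDiscover::registry{'MYTEST_EXAMPLE'} = {
+     'sequence'     => 500,
+     'checkdevtype' => \&checkdevtype,
+     'discover'     => \&discover,
+     'buildConfig'  => \&buildConfig
+     };
+
+ our %oiddef =
+     (
+      'myTestOID' => '1.3.6.1.4.1.99999.1.1.0'
+     );
+
+ sub checkdevtype
+ {
+     my $dd = shift;
+     my $devdetails = shift;
+
+     # pay interest in the host only if the OID is present
+     return $dd->checkSnmpOID( 'myTestOID' );
+ }
+
+ sub discover
+ {
+     my $dd = shift;
+     my $devdetails = shift;
+
+     # remember the discovery results for buildConfig()
+     $devdetails->setCap( 'myTestExample' );
+     return 1;
+ }
+
+ sub buildConfig
+ {
+     my $devdetails = shift;
+     my $cb = shift;
+     my $devNode = shift;
+
+     $cb->addTemplateApplication( $devNode,
+                                  'MYTEST_EXAMPLE::mytest-example' );
+ }
+
+ 1;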
+
+
+=head3 OID Definitions
+
+OID definitions are designed to provide symbolic names to OIDs
+in numerical notation. Normally the symbolic names repeat the names from
+corresponding MIBs.
+
+The definitions must be placed in an C<%oiddef> hash in the
+package namespace. They are then automatically imported by the DevDiscover
+initialization procedure.
+
+Example:
+
+ our %oiddef =
+ (
+ 'hrSystemUptime' => '1.3.6.1.2.1.25.1.1.0',
+ 'hrSystemNumUsers' => '1.3.6.1.2.1.25.1.5.0',
+ 'hrSystemProcesses' => '1.3.6.1.2.1.25.1.6.0',
+ 'hrSystemMaxProcesses' => '1.3.6.1.2.1.25.1.7.0',
+ 'hrMemorySize' => '1.3.6.1.2.1.25.2.2.0',
+ 'hrStorageTable' => '1.3.6.1.2.1.25.2.3.1',
+ 'hrStorageIndex' => '1.3.6.1.2.1.25.2.3.1.1',
+ 'hrStorageType' => '1.3.6.1.2.1.25.2.3.1.2',
+ 'hrStorageDescr' => '1.3.6.1.2.1.25.2.3.1.3',
+ 'hrStorageAllocationUnits' => '1.3.6.1.2.1.25.2.3.1.4',
+ 'hrStorageSize' => '1.3.6.1.2.1.25.2.3.1.5',
+ 'hrStorageUsed' => '1.3.6.1.2.1.25.2.3.1.6',
+ 'hrStorageAllocationFailures' => '1.3.6.1.2.1.25.2.3.1.7'
+ );
+
+
+=head3 Template References
+
+Normally a discovery module would refer to configuration templates
+defined in template definition files. In order to provide an extra level of
+flexibility, these templates should be defined in
+F<devdiscover-config.pl> or in F<devdiscover-siteconfig.pl>.
+
+It is recommended that the template references in the discovery modules
+follow the naming standard: C<module::template-name>.
+
+ConfigBuilder's C<addTemplateApplication()> method looks up every
+template name in the global hash C<%Torrus::ConfigBuilder::templateRegistry>
+and figures out the source XML file and the actual template name.
+
+Example:
+
+ $Torrus::ConfigBuilder::templateRegistry{
+ 'RFC2790_HOST_RESOURCES::hr-system-uptime'} = {
+ 'name' => 'mytest-hr-system-uptime',
+ 'source' => 'mytest.templates.xml'
+ };
+
+
+=head3 Interface filtering
+
+Usually not all interfaces from ifTable need to be monitored:
+for example, Loopback and Null0 interfaces on Cisco routers.
+
+C<Torrus::DevDiscover::RFC2863_IF_MIB> provides the functionality to
+automatically filter out the interfaces, based on filter definitions.
+Filter definitions are registered by calling the subroutine
+C<Torrus::DevDiscover::RFC2863_IF_MIB::addInterfaceFilter
+($devdetails, $interfaceFilter)>. The second argument is a reference
+to a hash of the following structure:
+
+Keys are symbolic names that carry no meaning and only need to be unique.
+Values are hash references with the following entries: C<ifType>
+specifies the IANA interface type, and the optional C<ifDescr> specifies
+a regular expression to match against the interface description.
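+
+For illustration, a hypothetical filter definition modeled on the vendor
+modules; the interface type numbers follow the IANAifType-MIB:
+
+ our %interfaceFilter =
+     (
+      'Loopback' => {
+          'ifType' => 24                   # softwareLoopback
+          },
+
+      'Null0' => {
+          'ifType'  => 1,                  # other
+          'ifDescr' => '^Null'
+          }
+     );
+
+ # within checkdevtype(), after the device type is identified:
+ Torrus::DevDiscover::RFC2863_IF_MIB::addInterfaceFilter
+     ( $devdetails, \%interfaceFilter );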
+
+The filters are usually registered within the C<checkdevtype> subroutine
+of the vendor module, after the device type is identified. See
+F<CiscoIOS.pm> and F<CiscoCatOS.pm> as examples.
+
+
+=head2 Authors
+
+Shawn Ferry: initial draft.
+
+Stanislav Sinyagin: revision and detailed content.
diff --git a/torrus/doc/devdoc/progstyle.pod b/torrus/doc/devdoc/progstyle.pod
new file mode 100644
index 000000000..e9ebef58a
--- /dev/null
+++ b/torrus/doc/devdoc/progstyle.pod
@@ -0,0 +1,138 @@
+# progstyle.pod - Torrus Programming Style Guide
+# Copyright (C) 2002 Stanislav Sinyagin
+#
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 2 of the License, or
+# (at your option) any later version.
+#
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with this program; if not, write to the Free Software
+# Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307, USA.
+
+# $Id: progstyle.pod,v 1.1 2010-12-27 00:04:37 ivan Exp $
+# Stanislav Sinyagin <ssinyagin@yahoo.com>
+#
+#
+
+=head1 Torrus Programming Style Guide
+
+=head2 Perl indentation style
+
+The code indentation style is a kind of BSD/Allman style:
+
+ while( not $success and time() < $waitingTimeout )
+ {
+ $self->clearReader();
+
+ Info('Sleeping ' . $Torrus::Global::ConfigReadyRetryPeriod .
+ ' seconds');
+ sleep $Torrus::Global::ConfigReadyRetryPeriod;
+
+ $self->setReader();
+
+ if( $self->isReady() )
+ {
+ $success = 1;
+ Info('Now configuration is ready');
+ }
+ else
+ {
+ Info('Configuration is still not ready');
+ }
+ }
+
+
+Indentation is 4 characters. Opening and closing braces are aligned.
+There's no space between the keyword (C<while>, C<if>, etc.) and the opening
+parenthesis.
+
+Tab characters are prohibited.
+
+Page width is strictly 80 characters. All longer lines must be wrapped.
+
+When possible, leave space between parentheses and the inside content.
+This is not necessary for debug or print statements.
+
+There's always space around the equal sign (C<=>).
+
+Object method calls always have parentheses, even if no arguments are
+required.
+
+Use keywords for logical operations instead of C operators: C<and>, C<or>,
+C<not>.
+
+Use single quotes in hash references: C<$a-E<gt>{'abc'}>.
+
+=head2 Common file properties
+
+With the exception of special-purpose files, each source file
+must contain the GNU copying statement, the CVS C<Id> tag, and the author's
+name and e-mail address.
+
+C, Perl, and Bourne shell files must contain GNU Emacs variables
+at the end of the file:
+
+ # Local Variables:
+ # mode: perl
+ # indent-tabs-mode: nil
+ # perl-indent-level: 4
+ # End:
+
+Each file must always end with a linebreak; otherwise it might conflict
+with CVS. All files must have the Unix linebreak format.
+
+=head2 GNU Emacs settings
+
+Standard C<perl-mode.el> does the job:
+
+ ;; Set up Perl mode
+ (autoload 'perl-mode "perl-mode")
+ (setq auto-mode-alist
+ (append (list (cons "\\.pl$" 'perl-mode)
+ (cons "\\.pm$" 'perl-mode)
+ (cons "\\.pl\\.cgi$" 'perl-mode))
+ auto-mode-alist))
+
+ (custom-set-variables
+ ;; custom-set-variables was added by Custom -- don't edit or cut/paste it!
+ ;; Your init file should contain only one such instance.
+ '(indent-tabs-mode nil)
+ '(tab-width 8)
+ )
+
+=head2 XEmacs settings
+
+In XEmacs, the default handler for Perl files is C<cperl-mode.el>.
+The following custom variables must be set in order to comply with our
+styling standards:
+
+ (custom-set-variables
+ ;; custom-set-variables was added by Custom -- don't edit or cut/paste it!
+ ;; Your init file should contain only one such instance.
+ '(cperl-brace-offset -4)
+ '(cperl-continued-statement-offset 4)
+ '(cperl-indent-level 4)
+ '(indent-tabs-mode nil)
+ '(tab-width 8)
+ )
+
+=head2 Normalizing multiple files
+
+In the Torrus CVS repository, in the root of the C<src> module, there is a
+small utility that fixes some styling issues for all the sources in the
+current directory and subdirectories:
+
+ perl normalize-all-sources.pl
+
+It replaces tabs with spaces, deletes trailing whitespace at the end of
+lines, and removes empty lines at the start and the end of each file.
+
+=head1 Author
+
+Copyright (c) 2003-2005 Stanislav Sinyagin E<lt>ssinyagin@yahoo.comE<gt>
diff --git a/torrus/doc/devdoc/reqs.0.0.pod b/torrus/doc/devdoc/reqs.0.0.pod
new file mode 100644
index 000000000..7ed9511bc
--- /dev/null
+++ b/torrus/doc/devdoc/reqs.0.0.pod
@@ -0,0 +1,166 @@
+# requirements.pod: The pre-planning document
+# Copyright (C) 2002 Stanislav Sinyagin
+#
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 2 of the License, or
+# (at your option) any later version.
+#
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with this program; if not, write to the Free Software
+# Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307, USA.
+
+# $Id: reqs.0.0.pod,v 1.1 2010-12-27 00:04:36 ivan Exp $
+# Stanislav Sinyagin <ssinyagin@yahoo.com>
+#
+#
+
+=head1 RRD Framework Requirements Version 0.0
+
+Date: Jul 10 2002
+
+This article defines some principles that a supposedly future
+RRD framework should have. The framework should consist of 3
+independent subsystems:
+
+=over 4
+
+=item Data Collection
+
+=item Data Monitoring
+
+=item Data Displaying
+
+=back
+
+=head2 Flexible Hierarchical Configuration
+
+Inspired by Cricket's hierarchical configuration, we state here that
+the configuration should be hierarchical. Child nodes should
+inherit properties from their parents.
+
+The format of the configuration files does not have to be the same
+as in Cricket. I'm not sure if it's worth keeping them in a directory
+structure representing the hierarchy tree, but it is definite
+that multiple files should be supported.
+
+A good step ahead would be the configuration in XML format.
+It is also possible to have a converter from some other formats
+(plain text, or an SQL database) into XML which will be consumed by the
+framework.
+
+I leave data collection uncovered, since all of the existing
+RRD frontends already do this part.
+
+=head1 Data Monitoring Principles
+
+At the moment, the only known solution for RRD data monitoring is
+Cricket. Its threshold monitoring has certain limitations and drawbacks.
+Nevertheless, it may be used as the basis for ideas in further
+development.
+
+The major idea is to build data monitoring as a part of a bigger RRD
+framework, still being an independent part of the whole. The data can come
+from many different sources, from RRDs produced by any of the existing
+and future frontends.
+
+=head2 File Naming Flexibility
+
+In most existing RRD frontends, each RRD datafile must be described
+individually. This is not very convenient, especially in cases
+when you have several (dozens of) files containing one type of data
+(e.g., input traffic per source autonomous system).
+Also, files of the same type can be created and deleted by their sourcing
+frontend, and it would be more convenient not to have to change
+the monitoring configuration every time.
+
+Thus, we need a wildcard language which would allow specifying
+multiple files and deriving the datasource names from their names.
+
+=head2 Datasource Naming
+
+Each piece of data being monitored (for RRDs, its definition specifies the
+E<lt>filename, DS, RRAE<gt> triple) has to have a universal name.
+The name can be fully or partly qualified, depending on the
+configuration tree. Examples of such data references follow:
+
+ /Netflow/Exporters/63.2.3.224/if3/bps /* Interface #3 on router 63.2.3.224 */
+ /Netflow/Subnets/Dialin/bps /* Dial-in address pool */
+ /* different grouping for the rack temperature in Server Room 1 */
+ /Envmon/RackTemp/SR1
+ /SR1/Envmon/RackTemp
+
+Name aliasing should allow short or symbolic names for data sources:
+
+ /* Alias for /Netflow/Exporters/63.2.3.224/if3 */
+ /Netflow/Upstream/FranceTelecom1
+
+=head2 Monitoring Rules
+
+Data threshold monitoring should be described in a hierarchical
+manner.
+
+It would be interesting to have monitoring rules separate from
+the data hierarchy. On the other hand, 1) some data sources might need
+special and unique monitoring rules; 2) in some cases, several
+data sources need to be combined in order to build a threshold rule.
+I'm not yet sure how this must be achieved.
+
+=head2 Event Processing
+
+Once a threshold violation occurs, the monitoring system
+should produce an alarm event.
+
+Cricket has a good set of ways to report the alarm, and they can be taken
+as the basis.
+
+Also, what Cricket is really missing is a display of those data sources
+that are in the alarm state. The Monitoring system should produce
+instructions for the Displaying system in order to display the summary of
+those data sources which produced alarms within a certain time.
+
+
+=head1 Data Displaying Principles
+
+View profiles should be configured in a hierarchical manner.
+
+Again as with data monitoring, some Views should be configured independently
+of the data hierarchy, but also some data should be able to define
+specific view profiles.
+
+There should be view profiles of different types:
+
+=over 4
+
+=item *
+
+HTML Framework. Defines the HTML elements that should be displayed around
+the graphs, as well as the child graphs. It should also define
+the controls which would cause option changes in the child graphs
+(e.g., enabling "Show Holt-Winters Boundaries" would produce the
+corresponding graph).
+
+=item *
+
+Individual Graph. Defines the way the graph should look. It should
+also be capable of displaying an arbitrary number of data sources.
+It should have tunable options, like color, size, or time period.
+
+=back
+
+The Displaying system should allow the following ways of viewing:
+1) hierarchical browsing, like Cricket; 2) alarm summary display;
+3) individual graph display, without surrounding HTML.
+
+The graph images should be cached and reused whenever possible.
+In alarm summary browsing, these images can be generated at the moment
+of the event.
+
+=head1 Author
+
+Copyright (c) 2002 Stanislav Sinyagin ssinyagin@yahoo.com
diff --git a/torrus/doc/devdoc/reqs.0.1.pod b/torrus/doc/devdoc/reqs.0.1.pod
new file mode 100644
index 000000000..49698d370
--- /dev/null
+++ b/torrus/doc/devdoc/reqs.0.1.pod
@@ -0,0 +1,210 @@
+# requirements.pod: The pre-planning document
+# Copyright (C) 2002 Stanislav Sinyagin
+#
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 2 of the License, or
+# (at your option) any later version.
+#
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with this program; if not, write to the Free Software
+# Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307, USA.
+
+# $Id: reqs.0.1.pod,v 1.1 2010-12-27 00:04:36 ivan Exp $
+# Stanislav Sinyagin <ssinyagin@yahoo.com>
+#
+#
+
+=head1 RRFW Requirements Version 0.1
+
+Date: Jun 29 2003; Last revised: Aug 05 2003
+
+In this article, I describe the important changes that are planned
+for RRFW version 0.1.X.
+
+=head1 Independent datasource trees
+
+As noted by many users, RRFW lacks scalability when the number of
+network devices exceeds 100. The XML compiler takes minutes to
+process the configuration, and the Collector process initialization time
+is too long.
+
+Christian Schnidrig E<lt>christian.schnidrig@gmx.chE<gt> has proposed
+a solution to split the database into several subsystems, each
+being compiled separately, and with separate collector process.
+In his concept, there is a "global" datasource tree, and
+"subsystem" trees, each making a subset of global datasource nodes.
+
+I propose to have a number of independent datasource trees, without
+any superset. This would ease the administrator's work, and add more
+security.
+
+=head2 Changes in rrfw-siteconfig.pl
+
+Instead of C<@RRFW::Global::xmlFiles>, the following hash will contain
+the information about the trees:
+
+ %RRFW::Global::treeConfig = (
+ 'tree_A' => {
+ 'description' => 'The First Tree',
+ 'xmlfiles' => ['a1.xml', 'a2.xml', 'a3.xml'],
+ 'run' => { 'collector' => 1, 'monitor' => 1 } },
+ 'tree_B' => {
+ 'description' => 'The Second Tree',
+ 'xmlfiles' => ['b1.xml', 'b2.xml'],
+ 'run' => {} }
+ );
+
+In this hash, the keys give the tree names, I<xmlfiles> points to an array
+of source XML files, I<run> points to the names of the daemons that
+would be automatically launched for the tree.
+
+Two additional arrays: C<@RRFW::Global::xmlAlwaysIncludeFirst> and
+C<@RRFW::Global::xmlAlwaysIncludeLast> will give a list of source XML
+files that are included in every tree, at the beginning or at the end of
+the XML files list.
+
+=head2 ConfigTree object internals
+
+There will be no such thing as globalInstance. All methods and procedures
+that need to reference the current ConfigTree object will have it as an
+argument.
+
+C<RRFW::ConfigTree::new()> will have a mandatory argument "TreeName".
+
+=head2 Database structure
+
+All datasource trees will share one BerkeleyDB environment. The
+BDB environment home directory will stay the same, defined by the I<dbhome>
+config variable.
+
+For each tree, the database files will be placed in a separate subdirectory
+of a subdirectory of I<dbhome>.
+
+
+=head2 User interface
+
+All relevant command-line executables will support the following
+options:
+
+=over 4
+
+=item * --tree <tree_name>
+
+Specifies the datasource tree for processing;
+
+=item * --all
+
+If applicable, performs the operation on all available trees.
+
+=back
+
+When in verbose mode (B<--verbose>), the command-line programs must
+print the tree names they operate with.
+
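+For illustration, hypothetical invocations, assuming the utilities keep
+their current RRFW names:
+
+ compilexml --tree=tree_A --verbose
+ collector --tree=tree_A
+ monitor --all
+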
+The web interface will take the PATH_INFO string as the tree name.
+For the mod_perl handler, it will also be possible to prohibit
+PATH_INFO selection, and to configure the tree name in the Apache
+configuration.
+
+When no PATH_INFO is given to the web interface handler,
+a special top-level menu may be shown with the list of available trees.
+
+It will also be possible to specify tree-specific renderer attributes, like
+C<%RRFW::Renderer::styling>, C<$RRFW::Renderer::companyName>, etc.
+
+B<The plain CGI interface will not be supported.> As the Renderer gets more
+complex, CGI initialization time will increase. Also, it will become harder
+to support two user interfaces with similar functionality.
+
+
+=head2 Daemons launch master
+
+There will be a master process that will launch collector and monitor
+daemons for each tree. It will be configurable from a separate file,
+specifying the daemons and execution parameters for each tree.
+
+The master process will watch the child processes and issue warnings in the
+event of child process termination.
+
+Stopping the master process will stop all child daemons gracefully.
+
+
+=head1 Separate database for non-datasource objects
+
+In RRFW version 0.0.X, all the parameters for datasources, views,
+monitors, and tokensets are stored in the F<configuration.db> database.
+
+As proposed by Christian Schnidrig, storing all non-datasource
+object information in a separate database would improve the scalability.
+
+In RRFW version 0.1.X, datasource parameters will be stored in
+F<ds_config.db>, and all other objects' parameters in F<other_config.db>.
+
+The XML compiler will have a new option, B<--nods>, which disables
+processing of E<lt>datasourcesE<gt> elements in the input XML files.
+
+In addition to the C<ConfigurationReady> flag, there will be a flag that
+indicates the readiness of the datasource tree only.
+
+All these measures will allow faster administration and testing of
+non-datasource objects, and will protect the collector from unneeded
+interruptions.
+
+
+=head1 User privileges
+
+User privileges will apply to the tree level: across one datasource tree
+a given user will have uniform privileges.
+
+Each user belongs to one or more groups. Privileges are assigned to
+groups only, not to individual users. Groups are one-level deep: they
+consist of users only. Probably in the future groups will consist
+of groups too.
+
+In the beginning, only one privilege will be implemented: I<DisplayTree>.
+The design should be flexible enough to add more privileges in the future.
+Examples: I<GenerateReport>, I<Debug>, I<ScheduleTask>, and so on.
+
+The privileges maintenance interface will include a command-line utility.
+In the future, a web interface is also possible. In this case, a new
+privilege will be added: I<EditPrivileges>.
+
+The privileges editor will include the following functions:
+
+=over 4
+
+=item * add/delete group
+
+=item * add/delete user
+
+=item * change user password
+
+=item * add/delete user membership in a group
+
+=item * edit privileges for groups and trees
+
+=item * list group members
+
+=item * list groups a user belongs to
+
+=item * list privileges for a given group or user
+
+=item * list privileges and groups (or users) for a given tree
+
+=item * export/import the privileges database to/from XML
+
+=back
+
+The privilege logic implementation must be separate from the database
+backend. At first, a BerkeleyDB backend will be supported; an LDAP
+backend is possible in the future.
+
+=head1 Author
+
+Copyright (c) 2003 Stanislav Sinyagin ssinyagin@yahoo.com
diff --git a/torrus/doc/devdoc/torrus_roadmap.pod b/torrus/doc/devdoc/torrus_roadmap.pod
new file mode 100644
index 000000000..85698f2c8
--- /dev/null
+++ b/torrus/doc/devdoc/torrus_roadmap.pod
@@ -0,0 +1,249 @@
+#
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 2 of the License, or
+# (at your option) any later version.
+#
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with this program; if not, write to the Free Software
+# Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307, USA.
+
+# $Id: torrus_roadmap.pod,v 1.1 2010-12-27 00:04:36 ivan Exp $
+# Stanislav Sinyagin <ssinyagin@yahoo.com>
+#
+
+=head1 RRFW to Torrus transition roadmap
+
+=head2 Introduction
+
+The name "RRFW" appeared to be quite difficult to remember and to pronounce.
+There has been a call for a new name, and recently a good suggestion came
+from Francois Mikus:
+
+ --- Francois Mikus <fmikus[at]acktomic.com> wrote:
+ > Here is my humble flash, which I think may be appropriate. Which I will
+ > explain why below...
+ >
+ > The name I would suggest is;
+ >
+ > Torrus
+ >
+ > Has a mythical sounding name without the actual history. Has a resonance
+ > with Torrent, where rrfw deals with a torrent of information. A google
+ > search comes up with near nothing, and nothing commercial. Has a
+ > resonance with Taurus, which is mythical, astrological and has an
+ > underlying strength connotation.
+ >
+ > Anyway, this is the best I could think of. And it provides an opening to
+ > have a semi-mythical/comic style yet serious mascot.
+ >
+ > You have a LOT of documentation. web pages, code, etc.. But marketing is
+ > the way to win hearts and minds, create a following and get rabid
+ > developpers on-board!
+
+Thus the project will be renamed to Torrus, and a few other structural
+changes will accompany the transition.
+
+=head2 Releases roadmap
+
+Version 0.1.8 will be the last RRFW release, unless some urgent need arises.
+
+The first Torrus release will be 1.0.0.
+
+
+
+=head2 Multiple XML configuration directories
+
+During XML compilation, the datasource configuration files will be searched
+for in multiple directories. The list of directories and the search sequence
+will be configurable. This will keep the distribution XML files separate
+from the ones created locally.
+
+=head2 Separated directories for templates and configuration
+
+Perl configuration files and HTML templates will also be separated into
+different directories, so that user-editable files don't mix with the
+ones from the distribution.
+
+=head2 Commandline launcher
+
+A small shell script will be installed as C</usr/local/bin/torrus>,
+and it will pass all arguments to the appropriate Torrus executables.
+For example,
+
+ torrus compile --tree=main
+
+will execute the C<compilexml> Torrus utility with the argument
+C<--tree=main>.
+
+
+
+=head2 New directory hierarchy
+
+The Filesystem Hierarchy Standard E<lt>http://www.pathname.com/fhs/E<gt>
+proposes to put software add-on packages into the C</opt> directory
+and user services data, such as database contents or RRD files, into
+the C</srv> directory.
+
+However, FreeBSD and some other systems are not FHS-compliant, and require
+to install all additional software into C</usr/local> hierarchy.
+
+We propose that the Torrus distribution support three different directory
+layouts, and let the system administrator decide which is the most suitable:
+
+=over 4
+
+=item 1
+
+Default layout based in C</usr/local>;
+
+=item 2
+
+FHS compliant layout, set by running C<./configure_fhs> instead
+of C<./configure>;
+
+=item 3
+
+Custom layout, tunable with standard options and variables in C<./configure>.
+
+=back
+
+
+=head3 Default layout
+
+Although many systems like FreeBSD discourage creation of new
+package-specific subdirectories in /usr/local, we find it quite a common
+practice, and quite convenient for keeping the files together.
+
+ /usr/local/torrus/ Home directory for Torrus distribution files
+ |
+ +- conf_defaults/ torrus-config.pl and others
+ |
+ +- bin/ Command-line executables
+ |
+ +- doc/ POD and TXT documentation files
+ |
+ +- examples/ Miscellaneous example files
+ |
+ +- perllib/ Perl libraries
+ |
+ +- plugins/ Plugins configuration
+ |
+ +- scripts/ Scripts
+ |
+ +- sup/ Supplementary files, DTDs, MIBs, color schemas,
+ | Web plain files
+ |
+ +- templates/ Renderer output templates
+ |
+ +- xmlconfig/ Distribution XML files
+
+ /usr/local/etc/torrus/ Site configurable files
+ |
+ +- conf/ Place for torrus-siteconfig.pl and other siteconfigs
+ |
+ +- discovery/ Devdiscover input files
+ |
+ +- templates/ User-defined Renderer output templates
+ |
+ +- xmlconfig/ User XML configuration files
+
+ /usr/local/man/ Place for man pages. All articles will have the
+ prefix C<torrus_>
+
+ /var/log/torrus/ Daemon logfiles
+
+ /var/run/torrus/ Daemon PID files
+
+ /var/torrus/cache/ Renderer cache
+
+ /var/torrus/db/ Configuration databases
+
+ /var/torrus/session_data/ Web interface session files
+
+ /srv/torrus/collector_rrd/ Default directory for collector
+ generated RRD files
+
+
+=head3 FHS compliant layout
+
+ /opt/torrus/ Home directory for Torrus distribution files
+ |
+ +- conf_defaults/ torrus-config.pl and others
+ |
+ +- bin/ Command-line executables
+ |
+ +- doc/ POD and TXT documentation files
+ |
+ +- examples/ Miscellaneous example files
+ |
+ +- perllib/ Perl libraries
+ |
+ +- plugins/ Plugins configuration
+ |
+ +- scripts/ Scripts
+ |
+ +- sup/ Supplementary files, DTDs, MIBs, color schemas
+ |
+ +- templates/ Renderer output templates
+ |
+ +- xmlconfig/ Distribution XML files
+
+ /etc/opt/torrus/ Site configurable files
+ |
+ +- conf/ Place for torrus-siteconfig.pl and other siteconfigs
+ |
+ +- discovery/ Devdiscover input files
+ |
+ +- xmlconfig/ User XML configuration files
+
+ /opt/torrus/share/man/ Place for man pages. All articles will have the
+ prefix C<torrus_>
+
+ /var/log/torrus/ Daemon logfiles
+
+ /var/run/torrus/ Daemon PID files
+
+ /var/torrus/cache/ Renderer cache
+
+ /var/torrus/session_data/ Web interface session files
+
+ /srv/torrus/db/ Configuration databases
+
+ /srv/torrus/collector_rrd/ Default directory for collector
+ generated RRD files
+
+
+=head2 New plugins design
+
+Unlike in RRFW, the plugins in Torrus will be installed independently.
+This will make it easy to add new plugins to an existing installation.
+
+The Torrus installer stores all important variable settings in a special
+file, F<conf_defaults/instvars>. Then the plugin installer is able
+to access the settings without accessing the Torrus distribution
+directory.
+
+There is a helper utility, C<install_plugin>, which applies all
+I<configure> variables to the plugin configuration utility.
+It then follows the standard installation procedure:
+
+ ./configure && make && make install
+
+Thus the OS-dependent package installers may follow the standard
+configuration procedure, while those who install the software manually
+will use the helper.
+
+There are two special directories: F</usr/local/torrus/plugins/torrus-config>
+and F</usr/local/torrus/plugins/devdiscover-config>. Plugins are
+allowed to add Perl files there. They will be automatically I<require>'d by
+F<torrus-config.pl> and F<devdiscover-config.pl>.
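+
+A hypothetical example of such a fragment; the file name, the XML file, and
+the use of C<@Torrus::Global::xmlAlwaysIncludeFirst> are illustrative
+assumptions:
+
+ # /usr/local/torrus/plugins/torrus-config/myplugin.pl
+ push( @Torrus::Global::xmlAlwaysIncludeFirst, 'myplugin-defs.xml' );
+ 1;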
+
+
+
+=head2 Authors
+
+Copyright (c) 2004 Stanislav Sinyagin
diff --git a/torrus/doc/devdoc/wd.distributed.pod b/torrus/doc/devdoc/wd.distributed.pod
new file mode 100644
index 000000000..8dae04915
--- /dev/null
+++ b/torrus/doc/devdoc/wd.distributed.pod
@@ -0,0 +1,198 @@
+# Copyright (C) 2002 Stanislav Sinyagin
+#
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 2 of the License, or
+# (at your option) any later version.
+#
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with this program; if not, write to the Free Software
+# Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307, USA.
+
+# $Id: wd.distributed.pod,v 1.1 2010-12-27 00:04:36 ivan Exp $
+# Stanislav Sinyagin <ssinyagin@yahoo.com>
+#
+#
+
+=head1 RRFW Working Draft: Distributed collector architecture
+
+Status: pending implementation.
+Date: May 26, 2004. Last revised: June 14, 2004
+
+=head2 Introduction
+
+In large installations, one server often does not have enough capacity
+to collect the data from all the data sources. In other cases,
+because of network bandwidth or security restrictions, it is
+preferable to collect (SNMP) data locally on the site, and transfer
+the updates to the central location less frequently.
+
+=head2 Terminology
+
+We call I<Hub> servers those which run the user web interfaces and
+optionally threshold monitors. These are normally placed in the central
+location or NOC datacenter.
+
+I<Spoke> servers are those running SNMP or other data collectors.
+They periodically transfer the data to Hub servers. One Spoke
+server may send copies of data to several Hub servers, and one
+Hub server may receive data from many Spoke servers.
+
+In general, the property of being a Hub or a Spoke is local to a pair
+of servers and their datasource trees, and it only describes the functions
+of data collection and transfer. In complex installations, the same
+instance of RRFW may function as a Hub for some remote Spokes, and as a
+Spoke for some other Hubs simultaneously.
+
+We call an I<Association> a set of attributes that describes a single
+connection between Hub and Spoke servers. These attributes are:
+
+=over 4
+
+=item * Association ID
+
+Unique symbolic name across the whole range of interconnected servers.
+
+=item * Hub server ID, Spoke server ID
+
+Names of the servers, usually hostnames.
+
+=item * Transport type
+
+One of SSH, RSH, HTTP, etc.
+
+=item * Transport mode
+
+PUSH or PULL
+
+=item * Transport parameters
+
+Parameters needed for this transport connection, like login name, password,
+URL, etc.
+
+=item * Compression type and level
+
+Optional, gzip or bzip2 or something else, with compression levels from 1 to 9.
+
+=item * Tree name on Hub server
+
+The target datasource tree that will receive data from the Spokes.
+
+=item * Subtree path on Hub server
+
+The data updates from this association will be placed in a subtree
+under the specified path.
+
+=item * Tree name on Spoke server
+
+The tree on the Spoke server where the collector runs and from which
+data is fed into this association.
+
+=item * Path translation rules
+
+Datasource paths from the Spoke server may be rewritten to appear
+differently in the Hub server's tree.
+
+=back
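+
+For illustration only, one Association record could be represented as
+a Perl hash like the one below. The field names and values are purely
+hypothetical and do not correspond to any implemented API:
+
+ # Hypothetical sketch of a single Association record; the keys
+ # mirror the attribute list above.
+ my %association =
+     ( 'assoc-id'         => 'NOC1-SITE5',
+       'hub-server'       => 'noc1.example.net',
+       'spoke-server'     => 'site5.example.net',
+       'transport'        => 'SSH',
+       'transport-mode'   => 'PUSH',
+       'compression'      => 'gzip:6',
+       'hub-tree'         => 'main',
+       'hub-subtree'      => '/Remote/Site5',
+       'spoke-tree'       => 'site5',
+       'path-translation' => [] );   # rules not illustrated here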
+
+
+=head2 Transport
+
+The modular architecture design should allow different types of data
+transfer. The default transport is Secure Shell version 2 (SSH). Other
+possible transports may be RSH, HTTP/HTTPS, rsync.
+
+Two transport modes should be implemented: PUSH and PULL.
+In PUSH mode, Spoke servers initiate the data transfer and push the data to
+Hub servers. In PULL mode, Hub servers initiate the data
+transfer and ask Spokes for data updates. It should be possible
+to mix the transport modes for different Associations on the same
+server, but within each Association the mode should be strictly
+determined. The choice of transport mode should be based on local security
+policies, and server and network performance.
+
+Optionally, the compression method and level can be configured. Although
+the SSH protocol supports its own compression, more aggressive compression
+methods may be used for better bandwidth usage.
+
+Transport agents should notify the operator in cases of delivery failures.
+
+=head2 Operation
+
+For Spoke servers, distributed data transfer will be implemented as an
+additional storage type. For Hub servers, this will be a new collector
+type.
+
+Each data transfer is a concatenation of I<messages>. Messages are of
+one of two types: I<CONFIG> and I<DATA>. The Spoke server generates
+the messages and stores them for the transfer. Messages are delivered
+to Hub servers with a certain delay, but they are guaranteed to
+arrive in sequential order. For each pair of servers, messages are
+numbered consecutively. These numbers are used for failure detection.
+
+A Spoke server keeps track of its configuration, and after each
+configuration change, it sends a CONFIG message. This message contains
+the mapping between Spoke server tokens and datasource paths,
+and a limited set of parameters for displaying and monitoring the data.
+
+After each collector cycle, the Spoke server sends DATA messages.
+These messages contain the following information: the timestamp of the
+update, the token, and the value. The message format should be designed
+to consume minimal bandwidth.
+
+The Hub server picks up the messages delivered by the transport agents.
+Upon receiving a CONFIG message, it waits for a preconfigured delay in
+order to collect as many CONFIG messages as possible. Then the data
+transfer agent generates a new XML configuration based on the messages
+and starts the configuration compilation. The DATA messages are queued
+for the collector to pick up and store the values. It must be ensured
+that all DATA messages queued for the old configuration are processed
+before the compilation starts.
+
+In case of a fatal failure and loss of data, the Hub server ignores all
+DATA messages until it gets a new CONFIG message. A periodic configuration
+update schedule should be defined: if no configuration changes occur
+within a certain period of time, the Spoke server periodically re-sends
+the CONFIG message with the same timestamp.
+
+
+=head2 Message format
+
+A message is text in an email-like format: it starts with a header,
+followed by an empty line and the body. A single dot (.) on a line by
+itself marks the end of the message. Blocks within a CONFIG message are
+separated with a semicolon (;), each block representing a single
+datasource leaf.
+
+Example:
+
+ MsgID:100001
+ Type:CONFIG
+ Timestamp:1085528682
+
+ level2-token:T0005
+ level2-path:/Routers/RTR1/Interface_Counters/Ethernet0/InOctets
+ vertical-label:bps
+ ....
+ ;
+ level2-token:T0006
+ level2-path:/Routers/RTR1/Interface_Counters/Ethernet0/OutOctets
+ vertical-label:bps
+ .
+ MsgID:100002
+ Type:DATA
+ Timestamp:1085528690
+
+ T0005:12345678
+ T0006:987654321
+ .
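+
+A minimal Perl sketch of a reader for this format, under the
+assumptions stated above (header and body separated by an empty line,
+a lone dot terminating each message); this is an illustration, not an
+implemented RRFW module:
+
+ # Sketch: split a transfer stream into messages.
+ sub read_messages
+ {
+     my $fh = shift;
+     my @messages;
+     my( %header, @body, $in_body );
+     while( my $line = <$fh> )
+     {
+         chomp $line;
+         if( $line eq '.' )      # lone dot: end of message
+         {
+             push( @messages, { 'header' => {%header},
+                                'body'   => [@body] } );
+             %header = ();  @body = ();  $in_body = 0;
+         }
+         elsif( not $in_body and $line eq '' )
+         {
+             $in_body = 1;       # first empty line starts the body
+         }
+         elsif( not $in_body and $line =~ /^([^:]+):(.*)$/ )
+         {
+             $header{$1} = $2;   # header field
+         }
+         else
+         {
+             push( @body, $line );
+         }
+     }
+     return \@messages;
+ }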
+
+
+
+
+=head1 Author
+
+Copyright (c) 2004 Stanislav Sinyagin E<lt>ssinyagin@yahoo.comE<gt>
diff --git a/torrus/doc/devdoc/wd.messaging.pod b/torrus/doc/devdoc/wd.messaging.pod
new file mode 100644
index 000000000..5d76e114d
--- /dev/null
+++ b/torrus/doc/devdoc/wd.messaging.pod
@@ -0,0 +1,128 @@
+# Copyright (C) 2002 Stanislav Sinyagin
+#
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 2 of the License, or
+# (at your option) any later version.
+#
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with this program; if not, write to the Free Software
+# Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307, USA.
+
+# $Id: wd.messaging.pod,v 1.1 2010-12-27 00:04:36 ivan Exp $
+# Stanislav Sinyagin <ssinyagin@yahoo.com>
+#
+#
+
+=head1 RRFW Working Draft: Messaging subsystem
+
+Status: pending implementation.
+Date: Jun 30 2004. Last revised:
+
+=head2 Introduction
+
+The modular and flexible architecture of RRFW makes it possible to
+display user messages in RRFW pages. This design document describes
+the concept of this functionality.
+
+=head2 Description
+
+The messaging subsystem will allow the RRFW users to leave comments and
+short messages directly on the RRFW pages. These may be remarks about the
+graph contents, troubleshooting journal entries, etc.
+
+Each user is uniquely identified by the RRFW ACL subsystem. We introduce
+several new attributes and privileges for the messaging functionality.
+The privilege objects are the tree names.
+
+Attributes:
+
+=over 4
+
+=item * email
+
+The user's e-mail address where notifications will be sent.
+
+=item * msgnotify
+
+When set to a true value, e-mail notifications will be sent to this user.
+
+=back
+
+Privileges:
+
+=over 4
+
+=item * PostMessages
+
+Allows the user to add messages to the tree objects.
+
+=item * DisplayMessages
+
+Allows the user to see all messages for the tree.
+
+=item * ReceiveNotifications
+
+Allows the user to receive e-mail notifications. For notifications
+generated by messages, C<DisplayMessages> must be granted too.
+
+=item * DeleteMessages
+
+Allows the user to delete messages from the tree objects.
+
+=item * EditMessages
+
+Allows the user to change any message.
+
+=item * EditOwnMessages
+
+Allows the user to change his or her own messages.
+
+=back
+
+The C<acledit> program will have two additional options that simplify
+administration: C<--msguser> will grant all privileges except C<DeleteMessages>
+and C<EditMessages>, and C<--msgadmin> will grant all messaging privileges.
+
+The messaging options database will contain parameters that each user can tune
+for himself or herself:
+
+=over 4
+
+=item * Notify when
+
+a) on any new message in all trees; b) (default) only on new messages
+for objects that the user has commented on.
+
+=item * Notification format
+
+a) plain text (default); b) HTML; c) RSS 2.0
+
+=item * Subject line format
+
+The format pattern with keywords like C<$TREE>, C<$PATH>, C<$AUTHOR>,
+C<$MSGID>, etc.; a possible expansion routine is sketched after this
+list.
+
+Default:
+
+ [rrfw $MSGID] $TREE $AUTHOR: $PATH
+
+=back
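+
+A possible expansion routine, sketched in Perl; the keyword names
+follow the list above, while the expansion rule itself is an
+assumption:
+
+ # Sketch: substitute $KEYWORD patterns in the subject line.
+ sub expand_subject
+ {
+     my( $pattern, $vars ) = @_;
+     $pattern =~ s/\$(\w+)/defined $vars->{$1} ? $vars->{$1} : ''/ge;
+     return $pattern;
+ }
+
+ my $subject =
+     expand_subject( '[rrfw $MSGID] $TREE $AUTHOR: $PATH',
+                     { 'MSGID'  => 42,
+                       'TREE'   => 'main',
+                       'AUTHOR' => 'jdoe',
+                       'PATH'   => '/Routers/RTR1' } );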
+
+Each message will have a Read/Unread status for each user in the system.
+
+On the tree chooser page in the RRFW Web interface, the user will be
+shown the unread messages.
+
+An RSS 2.0 feed will be provided for message export and for integration
+with other messaging systems.
+
+
+=head1 Author
+
+Copyright (c) 2004 Stanislav Sinyagin E<lt>ssinyagin@yahoo.comE<gt>
diff --git a/torrus/doc/devdoc/wd.monitor-escalation.pod b/torrus/doc/devdoc/wd.monitor-escalation.pod
new file mode 100644
index 000000000..3dc59796d
--- /dev/null
+++ b/torrus/doc/devdoc/wd.monitor-escalation.pod
@@ -0,0 +1,117 @@
+# Copyright (C) 2002 Stanislav Sinyagin
+#
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 2 of the License, or
+# (at your option) any later version.
+#
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with this program; if not, write to the Free Software
+# Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307, USA.
+
+# $Id: wd.monitor-escalation.pod,v 1.1 2010-12-27 00:04:36 ivan Exp $
+# Stanislav Sinyagin <ssinyagin@yahoo.com>
+#
+#
+
+=head1 RRFW Working Draft: Monitor escalation levels
+
+Status: pending implementation.
+Date: Nov 5 2003. Last revised: Nov 10 2003
+
+=head2 Introduction
+
+The initial idea comes from Francois Mikus of the Cricket development
+team. His proposal was to raise an alarm only after several consecutive
+true monitor conditions.
+
+The idea has developed into the concept of escalation levels.
+
+
+=head2 Monitor events
+
+The current implementation supports four types of monitor events:
+C<set>, C<repeat>, C<clear>, and C<forget>. A new event type will be
+C<escalate(X)>, where C<X> designates a symbolic name for a certain
+escalation level. Each level is associated with an escalation time
+interval.
+
+Given C<Te> as the escalation interval, C<Ta> as the monitor condition
+age, and C<P> as the monitor period, the escalation event will occur
+simultaneously with one of the C<repeat> events, as soon as the
+following condition becomes true:
+
+ Ta >= Te
+
+Since C<repeat> events occur once per period, the escalation fires
+within C<P> seconds after the escalation interval has expired.
+
+New event types C<clear(X)> and C<forget(X)> will occur at the same
+time as C<clear> and C<forget> respectively,
+for each escalated level.
+
+
+=head2 Monitor parameters
+
+A new parameter will be introduced: C<escalation>. Its value will
+be a comma-separated list of C<name=interval> pairs, where C<name>
+designates the escalation level, and C<interval> specifies the escalation
+interval in seconds.
+
+Example:
+
+ <monitor name="rate-limits">
+ <param name="escalation value="Medium=1800, High=7200, Critical=14400" />
+ ...
+ </monitor>
+
+Another example would be Cisco TAC style priorities: P3, P2, P1.
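+
+A small Perl sketch of parsing the proposed C<escalation> value into
+a level-to-interval map; the parameter syntax follows the example
+above, and the code is illustrative only:
+
+ # Sketch: parse 'Medium=1800, High=7200' into a hash.
+ sub parse_escalation
+ {
+     my $value = shift;
+     my %levels;
+     foreach my $part ( split( /\s*,\s*/, $value ) )
+     {
+         if( $part =~ /^(\w+)=(\d+)$/ )
+         {
+             $levels{$1} = $2;   # level name => interval in seconds
+         }
+     }
+     return \%levels;
+ }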
+
+
+=head2 Action parameters
+
+The C<launch-when> parameter will be valid not only for C<exec> actions,
+but also for C<tset> actions. New valid values will be C<escalate(X)>,
+C<clear(X)>, and C<forget(X)>.
+
+The XML configuration validator will not verify whether the escalation
+levels in an action definition match those in the datasource
+configuration.
+
+A new optional action parameter is introduced: C<allowed-time>. It
+contains an RPN expression which must evaluate to true at the time when
+the action is allowed to execute.
+Two new RPN functions may be used here: C<TOD> and C<DOW>.
+
+C<TOD> returns the current time of day as integer: C<HH*100+MM>. For example,
+830 means 8:30 AM, and 1945 means 7:45 PM.
+
+C<DOW> returns the current day of the week as an integer between 0 and
+6 inclusive, with 0 corresponding to Sunday, 1 to Monday, and 6 to
+Saturday.
+
+In this example, the action is allowed between 8 AM and 6 PM from Monday
+to Friday:
+
+ <param name="allowed-time">
+ TOD,800,GE, TOD,1800,LE, AND,
+ DOW,1,GE, AND,
+ DOW,5,LE, AND
+ </param>
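+
+For illustration, the two functions could be computed in Perl as shown
+below; the use of the local time zone is an assumption, since the
+draft does not specify time zone handling:
+
+ # Sketch: possible TOD and DOW implementations.
+ sub TOD
+ {
+     my( undef, $min, $hour ) = localtime( time() );
+     return $hour * 100 + $min;    # e.g. 19:45 -> 1945
+ }
+
+ sub DOW
+ {
+     # localtime() wday: 0 = Sunday ... 6 = Saturday,
+     # matching the definition above
+     return ( localtime( time() ) )[6];
+ }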
+
+
+=head2 Implementation
+
+The B<monitor_alarms.db> database format will change: the values will
+consist of five colon-separated fields. The first four fields will be
+the same as earlier, and the fifth one will be a comma-separated list
+of escalation level names that have already fired.
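+
+A sketch of updating such a value in Perl; the first four fields are
+kept opaque here because they are unchanged from the current format:
+
+ # Sketch: mark an escalation level as fired in the stored value.
+ sub add_fired_level
+ {
+     my( $value, $level ) = @_;
+     my @fields = split( /:/, $value, 5 );
+     my $fired  = defined( $fields[4] ) ? $fields[4] : '';
+     my %seen   = map { $_ => 1 } split( /,/, $fired );
+     $seen{$level} = 1;
+     $fields[4] = join( ',', sort keys %seen );
+     return join( ':', @fields );
+ }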
+
+The implementation of this feature is best done after the planned
+redesign of the monitor daemon. The new monitor design would support
+an individual schedule for each datasource leaf, analogous to
+collector schedules.
+
+In turn, the monitor daemon redesign is better done after the
+collector daemon redesign; this would allow keeping the design and
+architecture similar where possible.
+
+=head1 Author
+
+Copyright (c) 2003 Stanislav Sinyagin E<lt>ssinyagin@yahoo.comE<gt>
diff --git a/torrus/doc/devdoc/wd.uptime-mon.pod b/torrus/doc/devdoc/wd.uptime-mon.pod
new file mode 100644
index 000000000..8bc1c423e
--- /dev/null
+++ b/torrus/doc/devdoc/wd.uptime-mon.pod
@@ -0,0 +1,162 @@
+# Copyright (C) 2002 Stanislav Sinyagin
+#
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 2 of the License, or
+# (at your option) any later version.
+#
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with this program; if not, write to the Free Software
+# Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307, USA.
+
+# $Id: wd.uptime-mon.pod,v 1.1 2010-12-27 00:04:36 ivan Exp $
+# Stanislav Sinyagin <ssinyagin@yahoo.com>
+#
+#
+
+=head1 RRFW Working Draft: Service uptime monitoring and reporting
+
+Status: in pre-design phase.
+Date: Sep 26 2003; Last revised:
+
+=head2 Definitions
+
+It is often required to monitor the service level in networks.
+The service level is normally covered by a Service Level Agreement
+(SLA), which defines the following parameters:
+
+=over 4
+
+=item * Service definition
+
+Describes the particular service in terms of functionality and means of
+monitoring. Examples are: IP VPN connectivity, WAN uplink, SQL database engine.
+
+=item * Maintenance window
+
+Describes the periodic time intervals when a service outage is possible
+due to maintenance work. It may be unconditional (an outage is always
+possible within the window), or conditional (customer confirmation is
+required for an outage within the window). A notification period is
+normally defined for maintenance outages.
+Example: every 1st Tuesday of the month between 6 AM and 8 AM, with
+96 hours' notification time.
+
+=item * Outage types
+
+Outages may be caused by: 1) a system failure; 2) a failure in the
+service provider's infrastructure; 3) customer activity.
+
+=item * Service level objectives
+
+These are the guarantees that the service provider gives to the customer.
+Violation of these guarantees is compensated by defined penalties.
+
+These may include: maximum maintenance downtime per specified period;
+maximum downtime per period due to failures on the service provider side;
+minimum service availability per specified period.
+
+=back
+
+
+=head2 Event datasource type
+
+In order to store the service level information, we need a new datasource
+type in RRFW: I<event>. It represents atomic information
+about a single event in time, i.e. it cannot be divided into more specific
+elements or sub-events. Its attributes are as follows:
+
+=over 4
+
+=item * Event group name
+
+Each event belongs to one and only one group. An event group is a unique
+entity that describes the service.
+
+=item * Event name
+
+A unique name within the event group. It describes the type of the
+event, such as C<maintenance> or C<downtime>. Events with the same name
+cannot overlap in time.
+
+=item * Start time
+
+Timestamp of the event start.
+
+=item * Duration
+
+Positive integer that specifies the length of the event in seconds.
+Zero duration means that the event has not yet finished.
+
+=item * Parameters
+
+Event-specific I<(name, value)> pairs.
+
+=back
+
+Events are uniquely identified by the I<(Event group, Event name,
+Start time)> triple.
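+
+For illustration, an event record could look as follows in Perl; the
+field names mirror the attribute list above and are not a defined API:
+
+ # Hypothetical event record.
+ my %event =
+     ( 'group'    => 'CustomerA_VPN',
+       'name'     => 'downtime',
+       'start'    => 1064583000,   # Unix timestamp
+       'duration' => 3600,         # seconds; 0 = still open
+       'params'   => { 'cause' => 'link failure' } );
+
+ # The identifying triple, joined into a single key.
+ my $key = join( ':', @event{'group', 'name', 'start'} );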
+
+
+=head2 Event summary reports
+
+The renderer should be able to display the events at different summary
+levels and in different combinations. Event reports should be specified
+by expressions, as follows (a small evaluation sketch follows the list):
+
+=over 4
+
+=item * Boolean operators
+
+C<downtime AND NOT maintenance>.
+
+=item * Time period
+
+C<(downtime AND NOT maintenance)[-2DAYS,NOW]>
+
+C<(downtime[-2DAYS,NOW] AND NOT maintenance AND
+NOT downtime[200309151200,200309151300])>
+
+=item * Arithmetic operations
+
+Sums of durations, differences of durations, and so on.
+
+=back
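+
+As an illustration of the intended semantics, the sketch below
+computes the total duration of C<downtime AND NOT maintenance> over a
+report period, with events reduced to C<[start, end]> intervals. The
+second-by-second scan is deliberately naive; a real implementation
+would operate on merged intervals:
+
+ # Sketch: is timestamp $t covered by any of the intervals?
+ sub covered
+ {
+     my( $t, $intervals ) = @_;
+     foreach my $iv ( @{$intervals} )
+     {
+         return 1 if $t >= $iv->[0] and $t < $iv->[1];
+     }
+     return 0;
+ }
+
+ # Seconds within [$from, $to) covered by @$inc but not by @$exc.
+ sub and_not_duration
+ {
+     my( $inc, $exc, $from, $to ) = @_;
+     my $seconds = 0;
+     for( my $t = $from; $t < $to; $t++ )
+     {
+         $seconds++ if covered( $t, $inc ) and not covered( $t, $exc );
+     }
+     return $seconds;
+ }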
+
+=head2 Events generation
+
+Events may be generated by the following sources:
+
+=over 4
+
+=item * Collector
+
+The SNMP collector may create events on certain fault conditions, such
+as an unreachable host, or on SNMP variable changes, such as an
+interface status change. It is also possible to create an ICMP Echo
+collector type, which would generate events based on pinging the hosts.
+
+=item * Monitor
+
+Obviously, a new monitor action will be to create events.
+
+=item * Human operator
+
+First from the command-line interface, and later from the Web interface,
+human operators may create scheduled events, such as maintenance
+outages. A security policy should protect certain types of events
+from human intervention.
+
+=back
+
+
+
+
+=head1 Author
+
+Copyright (c) 2003 Stanislav Sinyagin E<lt>ssinyagin@yahoo.comE<gt>