Qube 4 PRD


Last Updated: $Id: spec.txt,v 1.59 2001/04/09 23:02:52 kevin Exp $

                       PESCADERO SPECIFICATION


Pescadero is the 4th generation Cobalt Qube. In essence, it is an
effort to bring the Qube "out of the cloud" and "close to users".
"Out of the cloud" means differentiating Cobalt Qubes from yet
another server on the Internet: Cobalt Qubes do not attempt to
replicate the functionality of Internet servers. Instead, they should
provide unique features that focus on the Intranet. "Close to users"
means users come to depend on their Qubes. In other words, Qubes fail
if users do not care whether they exist. Qubes need to be an
in-your-face product.

To realize the vision, there are 4 things Pescadero needs to achieve.
- Pescadero needs to be an application and data repository by
  relocating applications and data from fat clients to it.
- Pescadero needs to be universally interoperable because its
  usefulness depends on the number of things connected to it.
- Pescadero needs to be a weapon for SPs. SPs are the people who can
  push Pescadero close to users.
- Pescadero needs to be a platform for developers because they are the
  people who implement this vision.


Below is a list of features to be included in Pescadero along with a
description of the feature, the section in the PRD requesting the
feature, as well as an estimate of the engineering time in man-weeks
to implement the feature to Alpha quality.


Webmail  (PRD 3.1.11) -- 4wks

The current Webmail has performance problems and lacks features.  The
options are to keep and improve Cobalt's Webmail implementation or to
adapt another implementation.  After research, keeping and improving
Cobalt's Webmail is favoured.

After several tests, it has been found that the current performance
bottleneck in Webmail is PHP's imap_sort call.  It is believed that
this call is either sorting IMAP messages itself in a high-order
complexity fashion, or that the underlying code in the IMAP c-client
is the cause of our problems.  By debugging this routine, we should be
able to properly fix any Webmail performance issues.

Another way to improve performance is to switch to another IMAP server
implementation.  This is discussed in the IMAP section of this
document.

There are two viable implementation options:

1) Abandon the Webmail code written for Carmel in favor of
   implementing an open source solution such as IMP.
   - May prove to be faster under higher loads.
   - Has all the features that would otherwise need to be implemented
     in our current code base.
   - Although IMP is localized in multiple languages for text strings,
     message reading and composition, Japanese support is not included.
   - We would need to create the 'look & feel' for it.
   - We would need to adapt the code to our i18n.
   - We would possibly have to play catch-up because we are dealing
     with a foreign code base, but this should not be too much of a
     problem.
2) Continue with Webmail development and add the features as listed in
   Pescadero PRD appendix B

After reviewing IMP, it becomes evident that a switch to it would not
give the much-needed performance increase we seek.  On the other hand,
their code is organized in a very well laid out fashion, and any look
and feel work would not be too difficult, other than the code that
takes care of the browser framework, which can be taken from our
existing code base.  Japanese support can also be taken from our
current code base.  Any user data specified (such as reply-to fields
and signatures) will need to be stored in CCE.

The projected timeframe required to get IMP working with the
Sausalito architecture looks to be much longer than that needed to
implement the features requested in the Pescadero PRD Appendix B.  To
summarize the features that have been requested:

- Add buttons on each individual message screen to allow the user to
  jump directly to the next and previous message.  Allowing the user
  to jump to the next and previous unread message would also be
  helpful.  The order in which messages are read should match the
  ordering from when the user last viewed the directory listing.
  The index to use for each folder can be stored in a cookie on the
  user's client.
- As this indexing information is to be stored, another button should
  be available on each individual message screen to allow the user to
  jump back to his directory listing with all indexing and page
  numbers in place.
- The user should be able to change his reply-to address.  This option
  will be available on a preferences page within Webmail and stored
  in a User namespace called WebmailPrefs within CCE.  Only Webmail
  messages will honor this reply-to setting.
- A review of email client language will be done.  This will ensure
  that if another client calls removing a message 'delete', then we
  will too.  We will use Outlook Express as the basis for our language
  choices.
- A new unremovable mailbox will be created called 'Trash'.  Whenever
  a user deletes a message from a folder other than 'Trash', a pop-up
  message will indicate to the user that his message will be moved to
  'Trash' and deleted after the set amount of time described below.  A
  call to imap_mail_move will be made that will mark the message as
  deleted and create a copy of it in trash.  A call to imap_expunge
  will then assure that the message is properly moved.
- A simple Perl script should be run as a cron job once a day,
  deleting messages in users' Trash that have been there for longer
  than either a) a set amount of time (e.g. 30 days) or b) an
  admin-adjustable amount of time.  The former option will be done
  first, and the latter implemented only if time permits.
- Nested folders will be implemented.  This has worked in the past,
  and should not be too troublesome to implement.  On the current
  folder admin page, a new button will be added to each folder already
  listed to allow the user to add a new sub-folder.  Simple
  modification to the foldergen.php page will need to be made in order
  to properly display these folders.
- On the same preferences page listed to allow users to change their
  reply-to address, an option will be made available allowing users to
  view image attachments inline.
- If time permits, a simple interface to email filtering will be made
  available.  This filter will allow users to match whether the
  message received was from a given mailer or daemon by doing:
  - A simple match of 'Subject contains'
  - A simple match of 'Sender contains'
  - A simple match of 'Sent-to contains'
  These rules will not be stored in CCE as users may not create new
  objects.  They will be stored in the users' directory in a file
  called .procmailconf and an appropriate .procmailrc will be created.
  If the user opts for any filtering whatsoever, a .forward containing
  "|exec /usr/bin/procmail" will be made.  Any forwarding options
  available to the user will no longer be done using .forward, but
  with a recipe similar to:
    ! forward@tome.com
  In order to accomplish this goal of filtering, a script will be
  written and added to /etc/ccewrap.conf similar to that of the
  personal addressbook.
- A signature option will be made available in the prefs window and
  stored in the WebmailPrefs User namespace.  When composing, webmail
  will always start with the signature at the bottom of the message.
  CCE should also take care of placing this signature in the user's
  ~/.signature .  Notice that a .signature file cannot have any
  comments, so raw file work will have to be done in lieu of the file
  handling wrappers.
- Attachments will allow working with files stored on Cobalt machines.
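The filtering and forwarding bullets above can be sketched as a
generated procmail configuration.  Only the forwarding recipe and the
.forward contents come from the description above; the rule text and
the mailbox names here are hypothetical examples.

```
# ~/.procmailrc as it might be generated from a user's .procmailconf
# (rule text and the "IN-reports" mailbox are hypothetical examples)

# a "Subject contains" rule:
:0:
* ^Subject:.*report
IN-reports

# a "Sender contains" rule:
:0:
* ^From:.*mailer-daemon
IN-bounces

# forwarding, done as a recipe instead of .forward:
:0
! forward@tome.com
```

The user's ~/.forward then contains only "|exec /usr/bin/procmail",
as described above, so procmail sees every incoming message.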

BlueLinQ Security  (PRD 3.2.1) -- 3wks

Security needs to be tightened for BlueLinQ, because holes in such a
software installation mechanism can be disastrous. As with any
security solution, the costs in usability, engineering time and
ongoing maintenance should be carefully considered.

Security improvements for BlueLinQ are as follows:
- Provide privacy, authenticity and integrity for PKG downloads.
- Provide privacy and integrity for packing list downloads.

To provide privacy and integrity for PKG downloads, PKG download
servers must support SSL and PKGs must be referenced by https://
instead of http:// URLs. This requires an upgrade of wget to version
1.5.3gold, which uses OpenSSL to handle https:// requests.

With PKG signing, authenticity and integrity of PKGs are guaranteed.
openssl is the underlying tool to sign files and verify signatures. In
order to verify signatures, X.509 certificates must be present on
BlueLinQ clients. By default, a Sun Cobalt certificate is installed on
BlueLinQ clients, so Sun Cobalt signed PKGs can be verified.
Certificates from trusted signing authorities are also pre-installed.
Besides Sun Cobalt and trusted signing authorities, PKGs can be signed
by other private keys as well. In order to do that, the associated
certificates from PKG packagers must be available on BlueLinQ
clients. When a signed PKG is downloaded, the BlueLinQ client attempts
to find a certificate from the certificate repository on the client
itself to verify the signature. If that fails, it attempts to download
certificates specified by URLs within the packing list of the PKG and
use them to verify the signature. The process is sequential.  The
packing list is scanned from top to bottom for certificate URLs. For
each of these URLs, the certificate is downloaded and checked to see
if it is trusted by verifying its signature using the pre-installed
signing authority certificates. The trusted certificate is used to
verify the signature of the PKG. If verification succeeds, the
certificate is installed into the certificate repository and the PKG
installation process continues. If any of the download, certificate
verification or PKG verification steps fail, the next certificate URL
in the packing list is examined. If the PKG cannot be verified by any
certificate, it is rejected and users are notified.

The package details page shows whether a package is signed. This
helps users decide whether they want to install the package. For
signed packages, information about the signer is displayed during the
installation process. This happens after the package is downloaded: a
page shows the signer's information, extracted from the certificate
used to verify the signature. Users can continue or cancel the
installation based on that information.

To simplify PKG signature handling, including signing and
verification, a PKG signer program is needed. To sign a PKG, it takes
a non-signed PKG and a private key to generate a signed PKG. To verify
a signed PKG, it takes a signed PKG and a certificate or certificate
directory to return true or false. This PKG signer program uses
openssl to handle encryption/decryption needs.
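A sketch of what the signer program might do underneath, using plain
openssl.  The file names and the use of a detached SHA-1 digest
signature are assumptions for illustration, not the actual PKG
format; a real BlueLinQ client would take the public key from an
X.509 certificate rather than from the private key.

```shell
#!/bin/sh
set -e

# create a throwaway key and a dummy PKG so the sketch is self-contained
openssl genrsa -out signer-key.pem 2048 2>/dev/null
echo "pkg payload" > package.pkg

# sign: non-signed PKG + private key -> detached signature
openssl dgst -sha1 -sign signer-key.pem -out package.pkg.sig package.pkg

# verify: signed PKG + public key -> "Verified OK" and exit status 0
openssl rsa -in signer-key.pem -pubout -out signer-pub.pem 2>/dev/null
openssl dgst -sha1 -verify signer-pub.pem \
        -signature package.pkg.sig package.pkg
```

Tampering with package.pkg after signing makes the final command fail
with a non-zero exit status, which is the true/false result the PKG
signer program needs to return.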

Active Monitor  (PRD 3.2.11) -- 3wks

Active Monitor needs to monitor the new services that are being
introduced with Pescadero.

For each new service (e.g. IMAPS), a new namespace needs to be added
to the CCE ActiveMonitor object to instruct swatch to perform
periodic tests.  Also, a user interface page showing the detailed
status of the service needs to be added where necessary.


USB (PRD 3.1.4) -- 2wks developer level

USB support has two levels.  The bottom or developer level supports
developers who need development tools for USB.  The top or user level
supports users who need to use USB devices with Cobalt machines.

Developer level support is relatively easy.  It needs kernel support
and device drivers, which Linux 2.4 provides.  User level support is
much harder.  To say Cobalt machines support a USB device, substantial
testing must be done.  Given the number of USB devices available on
the market, this is very non-trivial.  To complicate things further,
testing is an ongoing effort because new devices keep arriving on the
market.
The objective of developer level support is to include as many device
drivers from the 2.4 kernel as possible.  The objective for the user
level is to support mass storage devices and printers, because they
complement Pescadero features nicely.  Mass storage support can
replace external SCSI, while printer support allows printers to be
shared over the network.

For developer level support, the Linux 2.4 kernel needs to be
configured to support USB.  Cobalt machines (i.e. Carmel, Monterey)
use the Open Host Controller Interface (OHCI) instead of the
Universal Host Controller Interface (UHCI), so OHCI support needs to
be enabled in the kernel.

The USB device filesystem should also be supported.

There are several USB device class drivers supported by Linux.  They
include Human Interface Devices (HID), scanners, audio, modems,
printers, serial converters, CPiA cameras, IBM (Xirlink) C-it cameras,
OV511 cameras, Kodak DC-2xx cameras, mass storage, USS720 parports,
DABUSB, PLUSB Prolific USB-networks.  All of these drivers should be
compiled as kernel modules, so they can be loaded on-demand.
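For reference, the kernel configuration this implies might look like
the fragment below; the option names are taken from the 2.4 kernel's
USB configuration and should be double-checked against the actual
kernel tree before use.

```
# 2.4 kernel .config fragment (sketch)
CONFIG_USB=y
CONFIG_USB_OHCI=y            # Cobalt hardware uses OHCI, not UHCI
CONFIG_USB_DEVICEFS=y        # the USB device filesystem
CONFIG_HOTPLUG=y             # notify userspace when devices appear
CONFIG_USB_STORAGE=m         # mass storage, as a module
CONFIG_USB_PRINTER=m         # printers, as a module
CONFIG_USB_HID=m             # Human Interface Devices, as a module
```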

At the user level, to support different USB devices, hotplug support
should be enabled in the kernel.  This allows the system to see and
use USB devices as soon as they are plugged in.

Linux USB site
Working device list

Print Spooling  (PRD 3.1.4) -- 2wks Network, 3wks USB

Pescadero should support both remote printer spooling and print
spooling to a local USB connected printer.  Print spooling allows
Windows and Macintosh machines to queue print jobs to a remote printer
without waiting for the print job to finish, and to share a USB
printer among many networked users.

- No filters will be installed.  Filters provide access to
  paper/media/bin selection, printer accounting and page counting
  features.  However, all normal print functionality remains available
  from the local machine.
- Advanced features like printer accounting, time controls and user
  restrictions will not be implemented.  In other words, any valid
  user on the box can print to any printer on the box any amount of
  pages at any time.
- Windows and Macintosh drivers will not be provided.  It is up to the
  user to provide these drivers for the machines when they set up the
  printer.
Required Software
- Samba (http://www.samba.org)
- Netatalk
- LPRng (http://www.lprng.com)

A UI to add and remove printers must be created.  An administrator
should be able to enter a printer name and the IP address or the local
device and have the printer automatically set up.  Additionally, for
netatalk support, the administrator must provide a PPD file for the
printer.  These files are most often available either with the OS or
directly from the printer manufacturer.

A UI to administer print jobs should be created as well.  While most
basic tasks, such as removing one's own print jobs or viewing the
queue, can be performed from the client computer, removing a
different user's job or clearing the queue can only be performed from
the print server itself.  This UI should provide the administrator
with a list of printers on the machine.  When a printer is chosen, a
list of the currently running jobs will be displayed, with a select
list of jobs to be deleted.  Finally, the printer should have a
"Suspend" check box that temporarily stops all printing to that
printer.

LPRng provides the print spooling functionality.  The Linux kernel
itself provides connectivity to USB printers.  Samba provides the
interface between Windows machines and the print spool, and Netatalk
mediates between Macintoshes and the print spool.

Samba and Netatalk are already installed.  An RPM for LPRng must be
created.
When the administrator adds a printer via the UI, the first step is to
create the spool in /etc/printcap.  The printcap file specifies all
printers that the machine is aware of, remote and locally attached.
See printcap(5) for more information on the formatting of this file.
Note that because we do not need to support advanced features such as
accounting or local printing, we don't need to use filters.  Filters
are programs that parse and modify print jobs, for example to
translate from PCL to PostScript, or to count the number of pages
being printed.
In /etc/printcap, a locally connected printer and a network printer
differ mainly in their lp= entry: a local printer points at a device
node, while a network printer points at a remote queue, where LPRng
replaces %P with the name of the print spool.
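Hedged sketches of the two kinds of printcap entries follow; the
printer and host names are hypothetical, and the exact fields should
be checked against printcap(5) and the LPRng documentation.

```
# locally connected USB printer (hypothetical name "usbprinter")
usbprinter:\
        :lp=/dev/usb/lp0:\
        :sd=/var/spool/lpd/usbprinter:\
        :sh:

# network printer; LPRng expands %P to the name of the print spool
netprinter:\
        :lp=%P@printhost.example.com:\
        :sd=/var/spool/lpd/netprinter:\
        :sh:
```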

LPRng provides built-in functionality to suspend an entire printer
or an individual job:

  /usr/sbin/lpc hold <printer> <job-number>

  will suspend an individual job, and

  /usr/sbin/lpc stop <printer>

  will suspend all printing on a particular printer.

After the print queue has been set up, samba must be set up to allow
windows users to print to it.  The /etc/smb.conf file provides all
configuration information for samba, see smb.conf(5) for more info.
Additionally, the $SMB_DOCS/docs/textdocs/Printing.txt file provides
information on debugging printing issues.

The smb.conf file can have a special section called [printers].  This
section works like [home], but for printers. Typically the path
specified would be that of a world-writeable spool directory with the
sticky bit set on it. A typical [printers] entry would look like this:

        [printers]
        path = /var/spool/lpd/samba
        #  ---  do not use the Samba default path = /tmp
        print ok = yes
        printing = lprng
        load printers = yes
        guest ok = no
        printcap file = /etc/printcap
        print command =      /usr/bin/lpr  -P%p -r %s
        lpq command   =      /usr/bin/lpq  -P%p
        lprm command  =      /usr/bin/lprm -P%p %j
        lppause command =    /usr/sbin/lpc hold %p %j
        lpresume command =   /usr/sbin/lpc release %p %j
        queuepause command = /usr/sbin/lpc  -P%p stop
        queueresume command = /usr/sbin/lpc -P%p start

This would parse the printcap file and make all printers in it
available to all users.  Paths might need to be changed to reflect
correct installation paths.

Samba will make a copy of each file to be printed in the directory
specified by path. If the print operation fails, the print file is
sometimes left in the directory. The directory should be examined
periodically and files older than a day should be removed. The
following command can be used to do this, and should be put in a file
that is executed periodically (once a day) by the cron facility:

find /var/spool/lpd/samba -type f -mtime +1 -exec rm -f {} \;

See http://www.lprng.com/LPRng-HOWTO.html#SMB for more information.

Unlike Samba, each printer must be individually listed in the
netatalk configuration file (papd.conf).  Additionally, netatalk
requires the PPD files to be on the Linux machine.  Here are some
samples:

    Your 32 Character Printer Name:\
            :pr=|/your/path/to/lpr -Pprintername:
    Student Printers:\
            :pr=|/usr/bin/lpr -Pstudent:
    HP 2500c:\
            :pr=|/usr/bin/lpr -Php2500c:

Multilingual DNS  (PRD 3.1.2) -- 1wk

Multilingual DNS support is a high priority request from many Asian
Cobalt customers. Even though it is not yet a solid standard, Cobalt
machines should support it. The implementation should be flexible,
because the standard can change. The following languages are
supported:
- Japanese
- Chinese
- Korean

Currently, only these non-English languages can be registered at
Network Information Centers (NICs). In the future, more languages are
expected to be supported by NICs. InterNIC, the Japan Network
Information Center (JPNIC), VeriSign, etc. are NICs that support
multilingual DNS.

Like NICs, Cobalt machines support multilingual DNS with A and PTR
records only, not MX and CNAME records. This is because current target
usage is web browsing. The way multilingual DNS works is that web
browsers have plug-in software to encode domain names into a special
ASCII compatible encoding. They then use this encoded domain to query
DNS servers. Mail Transfer Agents (MTA) like Sendmail currently do not
support 8-bit domain names or the special encoding.  This can change
in the future, though.

Row-based ASCII Compatible Encoding (RACE) is the special encoding
backed by VeriSign and JPNIC. It is not a standard approved by the
IETF. Details of RACE are available at
Future encoding standards can change, and Cobalt machines will have
to adjust if that happens.

The implementation of multilingual DNS reuses the same DNS class in
CCE. When domain names in DNS entries contain characters other than
letters, digits or hyphens (see RFC 1035), CCE handlers encode them
into RACE before writing them into DNS zone files. No change needs to
be made to named, because RACE-encoded domains are 7-bit ASCII.
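The handlers' decision of when to encode can be sketched as a shell
test for characters outside the RFC 1035 letter/digit/hyphen set.
The function name is hypothetical; the real check lives in CCE
handler code, not in a shell script.

```shell
#!/bin/sh
# needs_race: succeed (exit 0) when a domain name contains bytes outside
# letters, digits, hyphens and dots, i.e. when RACE encoding is required.
needs_race() {
    case "$1" in
        *[!A-Za-z0-9.-]*) return 0 ;;   # non-LDH byte found -> encode
        *)                return 1 ;;   # plain ASCII domain -> leave as-is
    esac
}

needs_race "example.com" && echo "encode" || echo "plain"    # prints "plain"
needs_race "日本語.jp"    && echo "encode" || echo "plain"    # prints "encode"
```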

Because multilingual domain names can be composed of almost any byte
sequence, there is little that the frontend (UI) or the backend (CCE)
can usefully check about domain name inputs.


IMAP -- 2wks

Due to performance problems associated with the UW IMAP server, it is
suggested that we change our IMAP server over to Cyrus imapd along
with a maildir-based mailbox format (see
http://slashdot.org/askslashdot/01/01/27/0138202.shtml).  Licensing
will have to be checked, but freshmeat marks it as OSI approved.

SMTPS -- 1wk

SMTPS is SMTP working over SSL.  It allows email traffic between a
user's mail client and the Mail Transfer Agent (MTA) to use strong
encryption.  The MTA is the program that moves mail from one machine
to another.  This greatly enhances the security of outgoing email,
which would otherwise be transmitted in clear text.  It also encrypts
the username/password sent to the server.  Note that SMTP traffic
between the sender's MTA and the recipient's MTA is NOT encrypted;
only the client-to-server leg is.

To use SMTPS, users must use email client programs which support the
protocol.  The only client that could be made to talk SMTPS in our
testing was MS Outlook.

User interface to enable/disable SMTPS is needed. It does not need to
be any more complicated than that.

The standard port SMTPS binds to is 465.

/etc/services entry:
smtps		465/tcp			# SMTP over SSL

The best way for us to implement SMTPS will be to run stunnel as a
daemon, bound to port 465.  All traffic to that port will be
decrypted and forwarded to port 25 (SMTP) on the localhost.  This
will require OpenSSL and a valid certificate.

cd /usr/local/ssl/certs
/usr/local/sbin/stunnel -f -d smtps -r smtp

This will require a standard init script to be included, such as:
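A minimal sketch of such an init script, in dry-run form: start and
stop print the commands they would run rather than executing them.
The stunnel invocation is taken from the example above; the stop
handling is an assumption, not the shipped script.

```shell
#!/bin/sh
# /etc/rc.d/init.d/smtps (sketch): wrap the stunnel invocation above.
start() {
    echo "cd /usr/local/ssl/certs && /usr/local/sbin/stunnel -d smtps -r smtp"
}
stop() {
    echo "killall stunnel"
}
case "${1:-start}" in
    start) start ;;
    stop)  stop ;;
    *)     echo "Usage: $0 {start|stop}" ;;
esac
```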

Stunnel		<www.stunnel.org>
Openssl 	<www.openssl.org>

POP3S -- 0.5wks

POP3S is POP3 working over SSL.  It allows email traffic between
a user's mail client and the POP3 server to use strong encryption.
This greatly enhances the security of reading mail, especially
from a remote location such as from home or while traveling.
Note this has no effect on outgoing email traffic whatsoever.

User interface to enable/disable POP3S is needed. It does not need to
be any more complicated than that.

The standard port POP3S binds to is 995.

/etc/services entry:
pop3s		995/tcp			# POP-3 over SSL

The best way for us to implement POP3S will be to run stunnel as a
daemon, bound to port 995.  All traffic to that port will be
decrypted and forwarded to port 110 (POP3) on the localhost.  This
will require OpenSSL and a valid certificate.

cd /usr/local/ssl/certs
/usr/local/sbin/stunnel -f -d pop3s -r pop3

This will require a standard init script, like the one for SMTPS.

Stunnel		<www.stunnel.org>
Openssl 	<www.openssl.org>

IMAPS -- 0.5wks

IMAPS is basically IMAP working over SSL, just like HTTPS. It allows
email traffic between IMAPS clients and IMAPS servers to be
encrypted. This greatly enhances the security of email, which would
otherwise be transmitted in clear text. It also encrypts the
username/password sent to the server. Note that when users send
email, the network traffic is unchanged by IMAPS.

To use IMAPS, users must use email client programs which support
IMAPS.  Many do, including MS Outlook, Mozilla, Netscape Messenger,
and pine.  Eudora does not support IMAPS at the moment.

User interface to enable/disable IMAPS is needed. It does not need to
be any more complicated than that.

The standard port IMAPS binds to is 993.

/etc/services entry:
imaps 993/tcp     # imap4 protocol over TLS/SSL

The two most popular implementations for IMAPS are stunnel and
sslwrap.  Both require OpenSSL, an existing IMAP server and a valid
SSL certificate.  A self-signed SSL certificate will do. Their
implementation approaches are very similar.  Stunnel will be the 
preferred implementation.

stunnel is a program that allows users to encrypt TCP connections
over SSL.  It is designed to secure non-SSL-aware daemons and
protocols like POP3, IMAP, and LDAP.  It can be run out of inetd, but
running stunnel as a daemon gives better reliability, speed, and
security, so that is preferred over the inetd way.  The configuration
is:

cd /usr/local/ssl/certs/
/usr/local/sbin/stunnel -f -d imaps -r imap

(-f means stay in foreground, and log to STDOUT instead of syslog)

This will require a standard init script, like the one for SMTPS.

Related measures not covered here:
- APOP (encrypted passwords for POP3)
- Webmail through Apache with SSL
- IPSec

Stunnel		<www.stunnel.org>
sslwrap		<www.rickk.com/sslwrap>
Openssl 	<www.openssl.org>

SSH  (PRD 3.2.6) -- 1wk

Pescadero will include Secure Shell (SSH) client and server
applications.  SSH is a suite of network tools allowing all shell
& file transfer traffic to traverse the network encrypted,
provided that both hosts support the protocol.  All ssh traffic is
encrypted, including usernames & passwords.  SSH uses strong
authentication, and requires the OpenSSL libraries be installed.

SSH Server - This is a typical UNIX style daemon that listens for
connections on port 22, and authenticates both username/passwords
and the server's identity.  By default, after authentication sshd
drops the user to their default shell; but it can also be used to
run remote commands securely.  In effect this can replace insecure
programs like rsh, rlogin, telnet, rcp, and rsync.

SSH Client - The group of commonly used ssh client programs includes:
ssh - client program for securely logging into a remote machine.
scp - securely copies files to/from a remote machine.
ssh-keygen - creates public keys (RSA or DSA) for users or hosts.

Pescadero's UI will have only an On/Off checkbox for the sshd
server.  The Qube will not try to support the myriad of
configurations in the UI.  They are easy to follow in the config
file (sshd_config) and should stay there. Adding some useful
comments to our default sshd_config, explaining various options
might be helpful for the budding sysadmin.  SSH client programs 
are only useful at the command line, and will not be mentioned in
the UI.  A short section in the user's manual outlining their
functionality and importance would be useful.
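The sort of commentary envisioned for the default sshd_config might
look like this sketch.  The option names are standard OpenSSH; the
chosen values are illustrative, not the shipped defaults.

```
# /etc/ssh/sshd_config excerpts, annotated for the budding sysadmin
Port 22                      # the standard SSH port
PermitRootLogin yes          # set to "no" to require login as a normal user
PasswordAuthentication yes   # set to "no" once public keys are distributed
X11Forwarding no             # enable only if remote X11 clients are needed
```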

The standard SSH implementation for Linux is OpenSSH.  It
requires the OpenSSL libraries be installed already.  OpenSSL has
a dependency on bc, which our current generation of products don't
install by default. Installation is very straightforward.  The
init.d script which starts sshd automatically generates the DSA &
RSA host_keys if they aren't already present.  We should be able
to use RedHat rpms with little to no changes.

OpenSSH			<www.openssh.com>
OpenSSL			<www.openssl.org>
bc			<www.gnu.org/software/bc>

Simple Firewall  (PRD 3.2.7) -- 3wks

Firewalling tools on the Linux 2.4 kernel have changed from ipchains
to iptables.  To upgrade Cobalt machines to this kernel, the firewall
code needs to change.

Pescadero should support port forwarding.  This feature enables the
Qube to redirect traffic received at a specific IP address and port
to another address inside its own network.  An example would be
redirecting all incoming email to an internal email server.

Port Forwarding, as well as NAT and the firewall, are implemented in
2.4 kernels through the use of iptables, which is built on top of
netfilter.  Iptables is the user space program used to access
netfilter, the collection of binaries and kernel modules that make up
the firewalling tools in 2.4.  Iptables is the replacement for
ipchains in 2.2 kernels.

Like ipchains, iptables has three default chains (INPUT, FORWARD, and
OUTPUT), and the ability to create new chains.

INPUT - for packets wanting entrance to your system.
OUTPUT - for packets originating on your machine that want to go out.
FORWARD - for packets that want to use your system as a gateway in
          their journey from one machine to another.

One of the major differences here is that the packet-filtering
utilities (iptables) are now separate and distinct from the
packet-rewriting utilities (ipnatctl) that allow NAT.  Both are part
of the netfilter package.  Those two services were both served by
ipchains in 2.2 kernels.
You may filter packets according to many criteria:
- interface: the netfilter suite can distinguish between packets that
  arrive at your Ethernet card from those that arrive at your modem.
- source address: you can apply different sets of rules depending on
  the IP address of the sending machine.
- destination address: like the previous criterion, but in reverse.
- port number: you can distinguish packets heading to your Web server
  (port 80) from those heading to your FTP server (port 21).
- MAC address: every Ethernet card has a unique hardware address, and
  you can distinguish packets originating from certain individual
  Ethernet cards.
- IP protocol: you can distinguish between TCP, UDP, and ICMP packets,
  or anything else that's listed in /etc/protocols.
- packet flags: these are small parts of a packet, but there are ways,
  for instance, to distinguish a packet that's attempting to initiate
  a connection from one that's part of a pre-existing connection.

A UI should be created that will allow the admin user to add, remove,
and edit Port Forwarding rules.  When adding a Port Forwarding rule,
the user should be required to enter: protocol (tcp/udp), source port
(www,ftp,smtp, etc.), and the destination machine/ip.  Our
implementation will assume forwarding traffic from the secondary
interface to the primary.

Schema & Handlers
Each Port Forwarding rule will be stored in CCE with a PortFwd object.
When records are added, changed, or removed then CCE should rewrite
its rules and restart them.  The Port Forwarding rules could be
contained in an init script, along with the firewall rules.  Creating
the file /etc/rc.d/rc.firewall is a common solution.  Restarting them
consists of just running rc.firewall, because all rules are flushed
first, each time it is run.
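A sketch of what /etc/rc.d/rc.firewall could look like, following the
flush-then-re-add scheme above.  It is dry-run by default: the
iptables binary is indirected through $IPT so the sketch prints rules
instead of applying them, and the sample rule and the "webserver"
host are illustrative assumptions.

```shell
#!/bin/sh
# rc.firewall sketch: flush everything, then re-add each stored rule,
# so simply re-running the script applies the current CCE state.
IPT="${IPT:-echo /sbin/iptables}"   # set IPT=/sbin/iptables to apply for real

apply_rules() {
    $IPT -F                 # flush the filter table
    $IPT -t nat -F          # flush the NAT table
    # one line per PortFwd object in CCE; e.g. send incoming web
    # traffic on the primary interface to the internal host "webserver"
    $IPT -t nat -A PREROUTING -i eth0 -p tcp --dport www \
         -j DNAT --to-destination webserver
}

apply_rules
```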

A simple example would be forwarding HTTP traffic to port 80 on host
"webserver" (untested!):

/sbin/iptables -t nat -A PREROUTING -p tcp -i eth0 --dport www \
    -j DNAT --to-destination webserver

This appends a rule to the nat table's PREROUTING chain: port 80
traffic arriving on eth0 has its destination rewritten to
webserver:80, and the FORWARD chain then routes it out through eth1.

And finally, turn on IP forwarding:
echo 1 > /proc/sys/net/ipv4/ip_forward

Note: code already exists at cvsraq:portforward.mod.

netfilter/iptables		<netfilter.kernelnotes.org>
ipchains rc.firewall		<freshmeat.net/projects/rcf>
iptables man page

VPN  (PRD 3.1.3) -- 3wks pptp, 4wks IPSEC 

PPTP -- 3wks

PPTP is a technology that provides a VPN solution for Windows
95/98/NT/2000 clients via PPP connections. Through PPTP, private
networks behind Cobalt machines on the Internet can be accessed by
authenticated users.

Required Software
- PoPToP
  PoPToP is the PPTP server software for Linux.
- MSCHAPv2 and MPPE patches for pppd
  MPPE provides Microsoft Point-to-Point Encryption; MSCHAPv2
  provides Microsoft's challenge-handshake authentication.

Cobalt machines need a new user interface as well as new CCE classes
for PPTP.
To provide a simpler user experience and reduce engineering
complexity, PPTP is a system-wide service for all users. There is no
enabling/disabling on a per user basis.

There is no support for giving static IP addresses based on username.
This is more advanced than the needs of Cobalt users in general.  The
pptpd implementation requires a cleartext password to be stored in a
CHAP secrets file for each user that needs a static IP address.  On
machines that did not have this secrets file in place when PPTP was
installed, users would have to re-enter their passwords, because
Cobalt machines do not store cleartext passwords.

A new namespace Pptp is introduced under the System class. Its
properties are:

enable ::= [true|false]
  True when PPTP is enabled.

localIpLow ::= IP Address
localIpHigh ::= IP Address
  This is the IP address range being set into /etc/pptpd.conf as
  localip. If the range has only 1 address, all PPTP clients share the
  same local IP address. Otherwise, the range must be big enough for
  each PPTP client to have a unique address.

remoteIpLow ::= IP Address
remoteIpHigh ::= IP Address
  This is the IP address range being set into /etc/pptpd.conf as
  remoteip. The range must be big enough for each PPTP client to have
  a unique address.

autoConfig ::= [true|false]
  When turned on, IP addresses are automatically configured.

A user interface is provided for users to enable/disable PPTP, enter a
local IP address for all PPTP clients and enter a range of remote IP
addresses.  There is no need to enter a range of local IP addresses on
the user interface; CCE handlers should, however, be able to handle
ranges for possible future extensions.  To let PPTP clients access the
Intranet, local and remote IP addresses must fall within the network
connected to the primary interface.  Data validation must be in place
to check this.

To improve user friendliness, users can enable/disable auto-config
mode.  When enabled, users can only browse but not change IP address
settings.  They are also asked if it is alright to overwrite existing
settings.  To customize settings, users can disable auto-config.
Auto-config generates configuration for 32 PPTP clients.  33 IP
addresses on the network connected to the primary interface must be
available for 1 local IP address and 32 remote IP addresses.  To find
the IP addresses, the IP address and network mask of the primary
interface are used to calculate the total pool.  DHCP configuration is
then examined to eliminate DHCP assignments from the pool.  From the
remaining pool, the largest contiguous block is found.  If the block
has more than 33 IP addresses, only the highest 33 IP addresses are
used.  If the block has less than 2 IP addresses, auto-config fails
and users are prompted.  Otherwise, the lowest IP address in the block
becomes the local IP address and the remaining addresses become
remote IP addresses.
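The pool selection above can be sketched as follows.  This is an
illustrative sketch of the selection logic, not the shipped handler;
Python's ipaddress module stands in for the actual CCE data.

```python
# Sketch of the auto-config pool selection: subtract the Qube's own
# address and DHCP assignments from the network, find the largest
# contiguous block of free addresses, then keep at most the highest 33
# (1 local + 32 remote).
import ipaddress

def autoconfig(interface_ip, netmask, dhcp_assigned):
    """Return (local_ip, remote_ips) or None if auto-config fails."""
    net = ipaddress.ip_network("%s/%s" % (interface_ip, netmask),
                               strict=False)
    used = {ipaddress.ip_address(a) for a in dhcp_assigned}
    used.add(ipaddress.ip_address(interface_ip))
    pool = [a for a in net.hosts() if a not in used]

    # Largest contiguous run of free addresses in the pool.
    best, run = [], []
    for addr in pool:
        if run and int(addr) == int(run[-1]) + 1:
            run.append(addr)
        else:
            run = [addr]
        if len(run) > len(best):
            best = run

    if len(best) < 2:
        return None                  # too small: prompt the user
    block = best[-33:]               # keep only the highest 33
    return block[0], block[1:]       # lowest is local, rest remote
```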

- Q: Why are there no encryption keys needed?
  A: PPTP relies on PPP for authentication, so it does not need
     keys like PGP or IPSec. PPTP only needs a username and
     password for PAP or CHAP style authentication.
- Q: Why not just hard-code local and remote IP addresses?
  A: It is difficult to decide what these IP addresses would be
     because IP addresses can be in-use already.


IPSec -- 4wks


IPSec stands for Internet Protocol Security.  It uses strong
encryption to provide authentication and encryption services for VPN
applications.  The IETF developed the IPSec protocols, and a large
number of security software/hardware vendors support them.  IPSec can
be configured to build VPNs between networks as well as to let
machines without static IP addresses log in to a private network.

Required Software
- FreeS/WAN
- Linux kernel with IPSec support

The following is a VPN example:

  subnet a.b.c.0/24		(leftsubnet)
  interface a.b.c.d		(Qube's private interface)
    [Left gateway machine]	(the Qube)
  interface e.f.g.h		(left, aka Qube's public interface)
  interface e.f.g.i		(leftnexthop, router's private interface)
  interface unknown
  interface unknown
  interface j.k.l.m		(rightnexthop)
  interface j.k.l.n		(right)
    [Right gateway machine]
  interface 192.168.0.something
  subnet			(rightsubnet)

Given the sample network, the configuration file /etc/ipsec.conf
looks like the following:

# basic configuration
config setup
        interfaces=%defaultroute

# VPN connection for head office and branch office
conn [name]
        # left security gateway (public-network address)
        left=e.f.g.h
        # next hop to reach right
        leftnexthop=e.f.g.i
        # subnet behind left (omit if there is no subnet)
        leftsubnet=a.b.c.0/24
        # right s.g., subnet behind it, and next hop to reach left
        right=j.k.l.n
        rightsubnet=192.168.0.0/24
        rightnexthop=j.k.l.m
        # right is masquerading
        rightfirewall=yes

Pescadero is the left gateway on the picture.  Information about the
left network like subnet, interface IP addresses and nexthop can be
obtained from CCE, so users do not need to enter them on the user
interface.  On the other hand, information about the right subnet
needs to be filled in by users.

If the right subnet is not masqueraded, the rightfirewall line can be
omitted.

IPSec supports two keying modes: automatic keying and manual keying.
Manual keying requires distributing keys manually, which is unsafe.
Automatic keying relies on key exchange algorithms, which are safer
and easier to maintain, and keys are created automatically.  IPSec on
Cobalt machines should use automatic keying.

Besides supporting network configuration with fixed IP addresses,
IPSec also supports road warrior mode.  A road warrior is any machine
that does not have a fixed IP address.  This includes:
- A traveller who might connect from anywhere.
- Any machine that has a dynamic IP address.  Indeed, nearly all
  dialup connections and most DSL or cable modem connections use
  dynamic IP addresses.  Most home machines connecting to the office
  are in this category.

Here is an example road warrior configuration:

# Connection for road warrior Fred
conn [name]
        # left security gateway (public-network address)
        left=e.f.g.h
        # next hop to reach right
        leftnexthop=e.f.g.i
        # subnet behind left (omit if there is no subnet)
        leftsubnet=a.b.c.0/24
        # accept any address for right
        right=%any
        # no subnet for a typical road warrior
        # (it is possible, but usually not needed)
        # let the road warrior start the connection
        auto=add
        # override the default retry for road warriors;
        # we don't want to retry if IP connectivity is gone
        keyingtries=1

There is a new namespace Ipsec under the System class in CCE to store
general IPSec parameters.  It has the following properties:

enable ::= [true|false]
  True when IPSec is enabled.

There is also a new CCE class IpsecNet.  This class stores
configuration for networks in IPSec.  The properties under this class
are:

description ::= string
  A description of the connection.

ipaddr ::= IP Address
  The address of the IPSec gateway on the right network of the
  connection.  Not needed in road warrior mode.

nextHop ::= IP Address
  This IP address is next hop address of the right IPSec gateway.  Not
  needed in road warrior mode.

subnet ::= IP Address/Subnet mask
  This is the right subnet.  It can be blank if there is no subnet.

publicKey ::= string
  This key is only used to authenticate the IPSec gateway.  It is not
  used to actually encrypt data.

The user interface needs to provide means to enable or disable IPSec.
Users should also be able to add IPSec networks and road warriors through
the user interface.
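A handler turning IpsecNet objects into ipsec.conf conn sections might
look like the sketch below.  The IpsecNet field names follow the
schema above; the left-side values would be read from CCE, but are
passed in here for illustration.

```python
# Sketch: render one ipsec.conf "conn" section from an IpsecNet dict.
# An empty ipaddr means road warrior mode (right=%any, no retries).

def render_conn(name, net, left, leftnexthop, leftsubnet):
    """Render one conn section for /etc/ipsec.conf."""
    lines = ["conn %s" % name,
             "        left=%s" % left,
             "        leftnexthop=%s" % leftnexthop,
             "        leftsubnet=%s" % leftsubnet]
    if net.get("ipaddr"):                     # fixed right gateway
        lines.append("        right=%s" % net["ipaddr"])
        lines.append("        rightnexthop=%s" % net["nextHop"])
    else:                                     # road warrior mode
        lines.append("        right=%any")
        lines.append("        keyingtries=1")
    if net.get("subnet"):                     # blank if no subnet
        lines.append("        rightsubnet=%s" % net["subnet"])
    return "\n".join(lines)
```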



Dynamic Host Name  (PRD 3.1.5) -- 1wk

Pescadero should be able to modify the DNS records for its DNS
host/domain name when it gets a new IP address via DHCP.  This
feature would only be possible when Pescadero is acting as its
own primary DNS server.

A handler should also be added to update the Qube's DNS entry
(provided the Qube maintains its own primary DNS) when and if
the Qube gets a new IP address from a DHCP server.  The basic
method should be searching for DnsRecords where the IP is the same
as the Qube's old IP.  All of those records should then be updated
to reflect the new IP.
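The search-and-update step can be sketched as below; the record
structure is an assumption for illustration, not the actual DnsRecord
schema.

```python
# Sketch of the DHCP-lease handler described above: find DnsRecords
# still pointing at the old address and move them to the new one.

def update_dns_records(records, old_ip, new_ip):
    """Rewrite every record whose address matches old_ip."""
    changed = []
    for rec in records:
        if rec["ipaddr"] == old_ip:
            rec["ipaddr"] = new_ip
            changed.append(rec)
    return changed        # the handler would then re-export zone files
```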

Levels of Admin  (PRD 3.1.9) -- 4wks

Pescadero should support varying levels of admin access.  To do
this we need to add a field to the User object to contain each
user's UI rights.  We also need to change the palette to honor what
it says in those UI rights.  Admin should have a nice UI to grant
and revoke access to different sections of the UI.  There should
also be a way for sections of the UI to make themselves known to
the system (see Schema section below).

There should be a new section added to the add/modify a user
pages.  This section should contain a list of sections and some
way to signify that this user has admin access to sections x, y
and z but not q (a set-selector comes to mind).

The add and modify user handlers should be changed to put the UI
rights information into the appropriate field in the User object.

The code in the Palette should be changed to check the UI access
rights of a user before displaying the UI.

Whenever a new section is added to the UI, that section should
add its name and an i18n tag to System.uiSections (see Schema
section below).

CCE will have to be changed to respect users' access
rights.  Specifically, users should only be able to SET, CREATE and
DESTROY objects associated with sections they have access to.

A new field needs to be added to the System object:

uiSections (an array of all of the different sections in the UI,
e.g. Users, Groups, ActiveMonitor, etc.  The names of the
sections must also be i18n tags under the palette domain.)
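The palette-side check can be sketched as follows.  The uiSections and
uiRights names follow the schema above; the isAdmin flag is a
hypothetical stand-in for however the admin account is identified.

```python
# Sketch: show a UI section only if it appears in the user's UI-rights
# list; admin sees every section.

def visible_sections(system, user):
    """Return the UI sections this user may see, in palette order."""
    if user.get("isAdmin"):               # admin sees everything
        return list(system["uiSections"])
    rights = set(user.get("uiRights", []))
    return [s for s in system["uiSections"] if s in rights]
```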


Java  (PRD 3.1.14) -- 1wk

Adding support for Java to Pescadero is as easy as installing a few
pieces of software: JDK 1.3, the Apache Tomcat module, and any free
JDBC connectors for Interbase, MySQL and PostgreSQL.  There is no UI
needed for this feature.

Key Management  (PRD 3.3.2) -- 3wks

Many services on Cobalt machines rely on public key encryption
algorithms to encrypt and authenticate: HTTPS, IMAPS, POP3S and
BlueLinQ, just to name a few. Key management is a crucial foundation
for these services.

There are two popular key certification schemes - X.509 and PGP. Their
trust models are different. X.509 certificates are directory based;
PGP is referral based. The X.509 model puts more responsibility for
trust management on signing authorities. In the PGP model, users
are responsible for managing trust.

Many services on Cobalt machines such as HTTPS and IMAPS are X.509
based because they have multiple users; letting each of them have
their own trust network is very hard to manage. Many of these services
are based on SSL. Therefore, the design of the key management system
is geared towards X.509 certificates.

As a foundation of key management, both openssl and gpg are installed
on Cobalt machines. PGP is not installed because it does not come with
a free license.

All X.509 certificate keys are located under a certificate
repository. It is basically a cache which can only hold a limited
number of keys; disk consumption is restricted this way. The
implementation of such a repository is simple. All the keys are
located under the directory /usr/certificate/ and its
sub-directories. Sub-directories are for logical grouping purposes
only. Permanent signing authority certificates are located under the
"ca" sub-directory. The site-wide keys are located under <site name>
sub-directories. BlueLinQ signer certificates are located under the
"blueLinQ" sub-directory. To maintain the cache semantics, a cron job
checks the repository daily. All expired keys are removed because
further use of them is considered forgery. If the number of keys
exceeds 256, then the non-permanent ones with the oldest file system
timestamps are removed.
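The daily cleanup logic can be sketched as below.  The key-record
fields (path, expires, permanent, mtime) are illustrative assumptions,
not an actual on-disk format.

```python
# Sketch of the repository-cleanup cron job: drop expired keys, then
# evict the oldest non-permanent keys past the 256-key limit.

MAX_KEYS = 256

def clean_repository(keys, now):
    """Return the paths that should be removed from the repository."""
    remove = [k["path"] for k in keys if k["expires"] <= now]
    kept = [k for k in keys if k["expires"] > now]
    if len(kept) > MAX_KEYS:
        evictable = sorted((k for k in kept if not k["permanent"]),
                           key=lambda k: k["mtime"])
        excess = len(kept) - MAX_KEYS
        remove += [k["path"] for k in evictable[:excess]]
    return remove
```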

A web user interface is provided to manage a site-wide key pair. Key
management for individual users is not supported. To tighten security,
this interface only works over an SSL login; in non-SSL sessions, it
should tell users to use SSL instead. As a general design rule,
reliance on this user interface should be minimized because
cryptography is hard for general users to understand. It must be kept
informative, minimal and simple. There is only one X.509
certificate/private key pair this interface can manage per site. Some
servers have only one site. The pair is shared between services for
the site, so it is site-wide. Users can perform the following
operations through the interface:
- Generate and install a self-signed X.509 certificate/private key
- Generate an X.509 Certificate Signing Request (CSR)/private key pair
- Import an X.509 certificate/private key pair
- Export the X.509 certificate
- Export the private key

When generating a new self-signed key pair, if a non-self-signed,
non-expired key pair already exists, users are prompted to confirm
that they really want to override the non-self-signed pair. The
generated key pair is saved directly to the repository.

During the generation of a CSR, users should get a CSR/private key
pair. The system does not save the pair; the existing key pair for the
site should still work. Users should pass the CSR on to a signing
authority and save the private key in a safe place. After signing,
users can then import the signed X.509 certificate and the private key
back into the system.

During the import operation, users are asked whether they want to
overwrite the existing key pair. If they do not want to lose the
existing pair, they should export and save it first.

Key exporting is supported through the user interface so that users
can load the keys onto other machines.

The user interface does not allow removing keys, because SSL would
then stop working and users would not be able to access this interface
again.

Besides the certificate/private key pair manageable by the user
interface, there is a set of X.509 certificates pre-installed and
always available on Cobalt machines from the following trusted
certificate authorities:
- Sun Cobalt
- GTE CyberTrust
- Thawte
- VeriSign
These certificates are used to verify signatures on other
certificates.

Every Cobalt machine also has a self-signed X.509 certificate
generated during first boot. This certificate is necessary for SSL
based services to work. In other words, it is necessary for users to
get to the certificate management user interface.

Other important certificates such as those from SPs can be imported or
removed using the cookie cutter. This makes sure required certificates
are available before machine use.


I18n/L10n Automation  (PRD 3.1.2) -- 1wk

Localizing Cobalt machines into multiple languages is a complex task.
Automated tools can help speed up both i18n and l10n projects.

These are some problems that are easy to catch using automated tools:
- Display problems related to strings with troublesome characters
  (e.g. single-quotes and backslashes).
- Display problems with multi-byte encoded characters.
- Different string sizes in different locales give different looks.
- Mismatched messages between different locales.
- Malformatted message definitions.

Display problems can be caught using generated strings.  The idea is
to make a dummy test locale with generated test strings in it.  The
dummy locale uses Shift-JIS as its encoding because this encoding can
include the ASCII "\" byte within multi-byte characters.  A Sausalito
module make rule "test_locale" is
created.  When executed, it creates a new directory for the "zz"
locale under the locale directory.  It then scans the "en" locale
directory for .po files.  The .po files are copied to the "zz"
directory, with the following modifications.  All strings in the .po
files have their characters counted and half that number of Kanji
characters are appended to the end.  This is to increase string size
to test the flexibility of user interface layout.  The appended Kanji
characters are randomly generated.  Shift-JIS Kanji characters have 2
bytes, with the first byte between 0x81 and 0x9F or 0xE0 and 0xEF and
the second byte between 0x40 and 0x7E or 0x80 and 0xFC.  After appending
randomly generated Kanji characters, characters 0x27 0x5C 0x8F 0x5C
are appended to the end of the strings.  These are single-quote (ASCII
0x27), backslash (ASCII 0x5C) and a Kanji (0x8F 0x5C) that means ten.
This is to test the impact of backslashes and single-quotes to the
strings.  After the .po files are created under the "zz" directory,
the corresponding .mo files are created under the working directory
(i.e. /usr/share/locale).  To see test results, restart admserv to
clear string cache and login to the user interface with a user-defined
"zz" locale set into the browser.

To inspect the integrity of .po files, a Sausalito module make rule
"inspect_locale" is created.  Besides running as an independent rule,
the "rpm" rule also depends on it, so it gets run every time the RPM
is built.  Its basic function is to see that all locales have the same
.po files, each .po file has the same strings, and none of them are
malformatted.  It uses the "en" locale as the basis for comparison and
inspects every other locale directory for mismatches.  Warning
messages are printed if the number or the names of .po files do not
match.  Inside each .po file, the number and names of message IDs must
match; otherwise, warnings are printed.  To see if .po files are
malformatted, all .po files are inspected for msgid lines without
corresponding msgstr lines.  Note that there can be many msgstr lines
per msgid line.  When tests fail, the "inspect_locale" rule fails, and
other make rules that depend on it should stop.
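The msgid comparison at the heart of the rule can be sketched as
below, with a deliberately minimal .po parser; real .po files (plural
forms, multi-line strings) need more care.

```python
# Sketch: check that a locale's .po file defines exactly the msgids
# found in the "en" reference file.

def msgids(po_text):
    """Collect the msgid values from .po file text."""
    ids = set()
    for line in po_text.splitlines():
        line = line.strip()
        if line.startswith('msgid "'):
            ids.add(line[len('msgid "'):-1])
    return ids

def compare(en_po, other_po):
    """Return (missing, extra) msgids relative to the "en" file."""
    en, other = msgids(en_po), msgids(other_po)
    return sorted(en - other), sorted(other - en)
```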

Documentation Framework  (PRD 2.1.5) -- 3wks

Pescadero should add support for online documentation.  The work
done for Carmel ML can be used as a starting point, but should be
expanded to support not only the download of PDF manuals, but HTML
format help files.  These HTML files should be able to be called
in such a way as to display the most appropriate section to the
user, depending on where they are in the UI when they click the
help icon.

This feature is highly dependent on support from the Systems group for 
implementing HTML documentation.

There are two places the documentation UI shows up.  The first is a
list of all documentation available on the system in all languages.
The second is a pop-up window that appears when the user clicks on the
documentation icon.  This window will display either HTML help
appropriate for the context in which the user clicked the icon or, if
there is no appropriate HTML help, a list of documentation.

The list of documentation should display the name of the
documentation, a short description, the size and there should be a
link or button to view/download the documentation.

The pop-up window should be as small as possible while still
being usable.  If there is no HTML help available in the current
context it should display a list of available documentation, as
specified above, along with text explaining that there is no
context-specific help available for that section of the UI.  To
find context-sensitive help, a CCE search is run looking for Doc
objects (see Schema below) with the context field set to whatever
the current context is.  If there is more than one result the user
should be shown a list of all results.

Handlers, etc.
All documentation will be installed in 
/usr/sausalito/ui/web/base/documentation regardless of 
format.  Upon installation, a CCE Doc object should be created 
(see Schema for format).  Doc objects will be used by the system
to determine what documentation it has installed and store
information about that documentation.

The context can be any string.  To save work, the i18n domain may
be used as the context, but that may not allow enough
granularity.  Duplicates are allowed, but they should be discouraged
unless absolutely necessary.

The Page object will need to be modified to set the page
context. The constructor should take one additional parameter, the
context, and appropriate accessor and mutator methods should be
added.  The page object will then set a global javascript
variable, which contains the context, in the toHeaderHtml(). When
the documentation icon is clicked a new window will open.  Its
URL will contain the contents of that variable as part of the
query string.

The Doc object will have the following fields:

nameTag (i18n tag for name displayed to user)
locale (what language this documentation is in)
context (UI context that this documentation documents.  Only valid
for HTML docs)
webLocation (either a relative link to
/base/documentation/whatever or an absolute link to a different
server)
systemLocation (location of files on the system, if any)

Database Configuration  (PRD 3.2.9) -- 1wk

Many applications rely on databases to work. Even though MySQL,
PostgreSQL and Interbase are installed on Cobalt machines, there are
no common ways to enable/disable them. A common interface for
developers to enable/disable databases is necessary.

For each of the databases included on Pescadero there will be a
namespace under the System object.  Each namespace will contain at
least the following fields:

name: Name of the database (MySQL, PostgreSQL, etc.)
enabled: boolean true or false

Each namespace may contain other fields as needed.  When the enabled
field is changed, it will trigger a handler that will either stop or
start the database depending on the new state.  The handler
will check the return value of the executed command and send CCE a
signal (SUCCESS or FAIL) based on that return value.  There is no UI
associated with this feature.
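The handler can be sketched as below.  The init-script location is an
assumption for illustration; the real path depends on how each
database package installs itself.

```python
# Sketch of the enabled-field handler: run the database's init script
# and report SUCCESS or FAIL to CCE based on its exit status.
import subprocess

def handle_enabled_change(db_name, enabled, initdir="/etc/rc.d/init.d"):
    """Start or stop a database; return "SUCCESS" or "FAIL" for CCE."""
    action = "start" if enabled else "stop"
    rc = subprocess.call(["%s/%s" % (initdir, db_name), action])
    return "SUCCESS" if rc == 0 else "FAIL"
```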


LDAP Import  (PRD 3.1.8) -- 4wks

System administrators often use ldap to maintain large quantities of
users.  We should allow system administrators to be able to keep track
of users by simply changing ldap settings and have a Pescadero unit
update itself accordingly.

For example, when a new user or group of objectclass PosixAccount or
PosixGroup is added to a branch of the ldap directory, our unit should
realise this upon the next update and attempt to create the account
locally.  If the user or group cannot be created, an email
notification should be sent to the administrator of the unit and
optionally to the administrator of the ldap directory or help-desk.

If the user or group is properly created, they will remain as active
accounts on the unit as long as they continue to exist in the ldap
directory.  If a user or group whose data originated from the ldap
directory is removed from the ldap directory, its account shall be
suspended and again notification sent to the appropriate
administrators.
If a user or group's data changes in the ldap directory, the unit
will attempt to update that data locally; any errors are also sent
to the appropriate administrators.
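The periodic sync reduces to a diff between the directory and the
unit, which can be sketched as below; the account data shapes are
illustrative assumptions, and the actual ldap query is omitted.

```python
# Sketch of the sync plan: given the accounts currently in the ldap
# directory and the ldap-originated accounts on the unit, decide what
# to create, suspend, and update.  Inputs are name -> attributes dicts.

def sync_plan(ldap_accounts, local_accounts):
    """Return (to_create, to_suspend, to_update) account names."""
    ldap_names = set(ldap_accounts)
    local_names = set(local_accounts)
    to_create = sorted(ldap_names - local_names)
    to_suspend = sorted(local_names - ldap_names)
    to_update = sorted(n for n in ldap_names & local_names
                       if ldap_accounts[n] != local_accounts[n])
    return to_create, to_suspend, to_update
```

Failures in any of the three steps would trigger the email
notifications described above.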

The new UI for the ldap imports will stay the same as our current
implementation, but it will also allow the administrator to choose an
update period and to disable continuous updates if so desired.

Frontpage Extensions (FPX)  (PRD 3.1.10) -- 2wks

Frontpage extensions allow users to publish directly to the server via
a user-friendly proprietary client application.  Users will be able to
edit their personal web sites and their member group web sites, provided
that each group is tied to a virtual site.

The Apache module Improved mod_frontpage will be updated to the most
current match to the Apache version used.  This module must be
modified to accept Qube file permissions and directory structures.
Enabling a Frontpage user or group Web will require the end user to
assign a password to access that web through the FrontPage client.
Once enabled, passwords and authentication are managed by the Frontpage
client only.

The four pages for adding and modifying users and groups will be
modified to include Frontpage controls.  These controls are unchanged
from the single instance control in the Qube 3.

Windows 2000 Login and Domain Support -- 2wks

With the release of Windows 2000 changes have been made in login and
domain authentication.  These changes have not yet been reflected in
the main samba branch and release 2.2 which intends to tackle these
new technologies has long been in a pre-alpha stage.  For this reason,
we should opt to use a fork of samba called 'samba-tng' (the next
generation).  This fork in the samba code occurred last fall, when
several of the founding members of the samba team rewrote the samba
rpc code using a new architecture.  TNG works very well as a PDC on windows
networks, but is lacking in file and print sharing capabilities.  For
this reason, it is suggested that a dual headed environment be created
in which both the stable samba 2.0.7 and the latest version of
samba-tng work side by side on pescadero units.

In order to have two copies of samba working concurrently on the same
machine, multiple IPs will need to be bound to the same NIC.  In order
to accomplish this, ip aliasing will need to be compiled into the linux
kernel.  The code for tng can be located in cvs at

Rpms will need to be created for tng, and rpms for samba may be
recycled from another vendor.  These rpms should install tng in
/usr/samba-tng and install the head branch of samba into
/usr/samba-head.  The smb.conf file for samba is by default located in
${PREFIX}/lib/smb.conf.  This will allow us to maintain two separate
configuration files.

A sample layout of the tng smb.conf file:

[global]
        bind interfaces only = true
        interfaces = ${ALIASEDIP}
        netbios name = ${DCHOSTNAME}
        workgroup = ${WORKGROUPNAME} 
        security = user
        domain logons = yes
        encrypt passwords = yes
        logon home = \\${HEADMACHINENAME}\%U
        logon path = \\${HEADMACHINENAME}\%U\Profile

A sample layout of the head smb.conf file:

# sample HEAD's smb.conf
[global]
	bind interfaces only = true
        interfaces = ${MAINIP}
        netbios name = ${HOSTNAME}
        workgroup = ${WORKGROUPNAME}
        security = domain
        domain logons = no
        encrypt passwords = yes
        password server = ${DCHOSTNAME}
        os level = 20
        domain master = no
        preferred master = no
        local master = no

[homes]
	comment = Home Directories
	browseable = True
	writeable = yes
	create mask = 0700
	directory mask = 0700


By running both these services, the unit will be able both to
authenticate as a pdc and to share file and printer resources.
Note that if the administrator chooses not to run the unit as a pdc,
then only the head release of samba will be run.

A domain controller must also know the names of all the machines that
will be joining this domain.  An interface must be written in which
the admin may administer a list of machine names that the dc
recognizes.  Whenever a new machine is added, a unix account should be
created by capitalizing the machine-name and appending a '$' to it.
This unix account need not have a home directory nor should it have a
shell.  This user should then be added by issuing the command
'createuser MACHINE-NAME$' in a shell from /usr/samba-tng/bin/samedit.
It is also recommended that this machine be added to a unix group,
possibly called winmachines or something similar.  Machine names can
be stored within CCE in a packed array on the object containing the
windows dc settings.
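The naming rule above can be sketched as follows; the samedit
invocation itself is left out because samedit is interactive.

```python
# Sketch of the machine-account naming rule: uppercase the machine
# name and append '$' to form the unix account, then issue
# 'createuser <ACCOUNT>' inside samedit.

def machine_account(machine_name):
    """Unix account name for a domain member machine."""
    return machine_name.upper() + "$"

def createuser_command(machine_name):
    """The command to issue at the samedit prompt."""
    return "createuser %s" % machine_account(machine_name)
```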

Whenever a user is created within CCE, a handler should add this user
to the sam db by issuing the command 'createuser ${username} -p
${password}' to samedit.  Passwords should be updated whenever changed
in CCE, and the user should be removed from the collection when
removed from CCE.

By setting the logon path and logon home, a user's home directory will
auto-magically mount itself upon authentication, and a windows profile
will be stored in his/her ${homedir}/Profile directory.  This allows
our clients to easily set up roaming capabilities within their windows
networks.
A couple of options will be necessary for the unit to be enabled as a
domain controller.  These options are listed below.
- An IP for the domain controlling service to bind to.
- A host/netbios name for the domain controller to attach to.

Not Supported
Login scripts will NOT be supported through any interface whatsoever,
as they can get complicated when being referred to by user, computer,
group or any combination thereof.

Domain Registration  (PRD 3.3.1) -- 3wks engr plus ?wks IT infrastructure...

Having an Internet domain is the first step to establish a web
presence.  Web, email and many other services require a domain to
function well.  However, domain registration is not a trivial task and
requires considerable technical knowledge.  The objective for Cobalt
is to help users to obtain Internet domains painlessly.

There are two solutions to the problem.

First, DNS servers for Cobalt registered high level domains
(e.g. biz.com) can be set up by Cobalt.  A user interface can be
provided to register sub-domains under the high level domains.  After
registration, Cobalt machines become primary DNS servers for the
registered sub-domains and the high level domain servers are
configured to point to them.  The domain settings of Cobalt machines
can be updated as well.  The whole setup can be very quick.  There are
cons to this solution, however.  There is no easy way to ensure that
only Cobalt machines can register with Cobalt hosted DNS servers, and
this leaves the servers vulnerable to various types of attacks.  Also,
users cannot register higher level domains (e.g. their-name.com).

Second, the user interface can refer users to the domain registration
web sites setup by SPs or registrars.  Depending on the SP or
registrar, domain registration can take more than a day.  However,
this solution is easier technically, and users can register various
types of domains the way they want.

Cobalt should partner with SPs or registrars to offer the second
solution.

Virtual Sites  (PRD 3.1.7) -- 2wks

Pescadero will have virtual site support.  In order to
differentiate Pescadero from RaQ products, Pescadero will only
support virtual websites.  The virtual websites will be linked to
the creation of workgroups.  The code for this feature can be
taken, in large part, from Point Lobos.  The problem is getting
rid of all the extra functionality.  The Point Lobos Vsite code
operates with one Vsite object per site and then separate objects
for each service for each site (VirtualHost for HTTP, FtpSite for
FTP, etc.).  We can just take the VirtualHost object and leave the
others out.

There should be a new tab on the "Add a new Group" page called
"Virtual Site" or something similar.  When the admin clicks on
this tab he should see all the fields needed to configure a
virtual website.  These fields are:

fqdn  (fully qualified domain name of virtual site)
ip address (IP address of virtual site)
server admin (email address of virtual site's admin)

IP address should be restricted to only the addresses of eth0 or eth1.

The handlers are rather simple; all they need to do is rewrite
httpd.conf and restart apache whenever a VirtualHost object is
changed.  The VirtualHost block in httpd.conf for any one site
should look something like this:

NameVirtualHost ipAddr
<VirtualHost ipAddr>
ServerName fqdn
ServerAdmin serverAdmin
DocumentRoot siteRoot/web
ErrorDocument 401 /error/401-authorization.html
ErrorDocument 403 /error/403-forbidden.html
ErrorDocument 404 /error/404-file-not-found.html
ErrorDocument 500 /error/500-internal-server-error.html
RewriteEngine on
RewriteCond %{HTTP_HOST}                !^ipAddr(:80)?$
RewriteCond %{HTTP_HOST}                !^fqdn(:80)?$
RewriteRule ^/(.*)                      http://fqdn/$1 [L,R]
RewriteOptions inherit
AliasMatch ^/~([^/]+)(/(.*))? siteRoot/users/$1/web/$3
</VirtualHost>
# end VirtualHost object information

Where ipAddr, fqdn, serverAdmin, and siteRoot are all variables
plugged in by the handler.
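The handler's variable substitution can be sketched as below, using an
abridged version of the block (the ErrorDocument and RewriteOptions
lines are omitted for brevity, and a closing tag is included); the
vhost dict mirrors the VirtualHost object schema.

```python
# Sketch: fill the httpd.conf block for one VirtualHost object.

VHOST_TEMPLATE = """NameVirtualHost {ipAddr}
<VirtualHost {ipAddr}>
ServerName {fqdn}
ServerAdmin {serverAdmin}
DocumentRoot {siteRoot}/web
RewriteEngine on
RewriteCond %{{HTTP_HOST}} !^{ipAddr}(:80)?$
RewriteCond %{{HTTP_HOST}} !^{fqdn}(:80)?$
RewriteRule ^/(.*) http://{fqdn}/$1 [L,R]
AliasMatch ^/~([^/]+)(/(.*))? {siteRoot}/users/$1/web/$3
</VirtualHost>
"""

def render_vhost(vhost):
    """Render the abridged VirtualHost block for one site."""
    return VHOST_TEMPLATE.format(**vhost)
```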

There is one new object here, the VirtualHost object.  It has the
following attributes:

ipAddr  (ip address of the virtual host)
fqdn    (fully qualified domain name of the virtual host)
serverAdmin  (email address of the server admin)
siteRoot (path to the site root (i.e. /home/groups/some_group))

File Manager  (PRD 3.2.4) -- 4wks

Cobalt machines need to be more attractive data repositories. Users
should store and retrieve data from them more intuitively and be
more dependent on Cobalt machines. Qubes have traditionally been armed
with file sharing features, but they are not obvious enough for users.
A PC magazine article described it as hard to post web pages to a Qube
3; it would not be so if users knew file sharing existed.

File manager is a program sitting under the program section on the
server desktop. It is for any user on Cobalt machines to manage their
files. File manager integrates nicely with windows file sharing and
apple file sharing. It also provides a web user interface to manage
files. Native file sharing mechanisms are easier to use than the
web-based file manager; however, the web-based file manager is always
available on different platforms.

In IE, Windows file sharing can be launched using "\\<ip>" and Apple
file sharing using "afp://<ip>/". It would be good to make native file
sharing more accessible and usable by launching these
references. However, because of the complexity of network
environments, these references may not work all the time even with
substantial detection, so they are not used by the web-based file
manager.

The web-based file manager allows users to manage files that they have
access to. This includes files under their home directory as well as
the home directories of groups they belong to. File manager is
positioned to be a personal file management utility instead of a
system file management utility, so files under system directories like
/var and /usr are not accessible, even for admin.

Browse, upload, download, copy, move, rename and delete are supported
by the web-based file manager. To simplify things, there is no notion
of hard links or symbolic links.

Browsing works as in other file managers. Users can go up to parent
directories and down into sub-directories. Each file is represented
by an entry showing its name with extension, its size and its last
modification time. Files with the following extensions are viewable:
Web - asp, htm, html, jhtml, jsp, php, php3
Image - bmp, gif, jpeg, jpg, png
Audio - mp3, wav
Video - mpeg, mpg
Document - doc, pdf, ppt, txt, xls

Internationalization support is non-trivial. File names can be stored
in the file system in an encoding different from the one used to
enter or display them. Encoding conversion is necessary to make sure
the web-based file manager uses the same encoding as Windows and
Apple file sharing. Currently, Windows and Apple file sharing rely on
locale-specific codepages to encode file names. If one user stores a
file under one locale and another user reads it under a different
locale, problems can occur because their codepages differ. A better
solution is to have all file sharing mechanisms encode file names in
UTF-8.
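The conversion step can be sketched as follows; cp1252 is only an
example codepage, the real one depends on the client's locale:

```python
def filename_to_utf8(name_bytes, codepage):
    """Re-encode a file name from a locale codepage to UTF-8 so all
    file sharing mechanisms see the same bytes."""
    return name_bytes.decode(codepage).encode("utf-8")

# A name saved by a Western-European Windows client (codepage 1252):
raw = b"r\xe9sum\xe9.doc"
utf8_name = filename_to_utf8(raw, "cp1252")
```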

Because of the limited flexibility of a web-based user interface, copy
and move are supported through a clipboard-like concept that lets
users copy, cut and paste files. There is always a clipboard
section on the user interface to make the concept explicit. The
clipboard is implemented under /tmp/clipboard, where periodic cleanup
happens. When a copy or cut operation occurs, the following steps
happen:
- If /tmp/clipboard/<CCE session ID> does not exist, create it. This
  is the clipboard directory.
- Delete all files under this directory.
- For each file to copy or cut, create a symbolic link to it. The
  links are named "1", "2", "3", ... to guarantee unique names.
  Symbolic links are used because they take minimal space.
- A new file called "operation" is created under the clipboard
  directory. It contains a single string "copy" or "cut" depending on
  the operation.
When a paste operation occurs, the following steps happen:
- Inspect the "operation" file under the clipboard directory to find
  out what operation it is.
- For each symbolic link under the clipboard directory, copy the file
  it links to into the target directory if it is a copy operation.  If
  it is a cut operation, move the file instead.
- Delete the whole clipboard directory.

Because the clipboard ID is based on the CCE session ID, a user with
multiple web-based file manager windows open will find that they all
share the same clipboard. It is, therefore, a good idea to design the
user interface so that it does not encourage spawning multiple copies
of itself. An alternative is to use a launch-based ID. However, such
an ID cannot easily be stored as a cookie because of namespace
conflicts among browser windows of the same browser process, which
makes it very troublesome to carry the ID around. Hence, using the
CCE session ID is preferred.

At the basic usage level, there is no explicit notion of file
permissions in the web-based file manager user interface, just like
native file sharing. This keeps the file management mechanisms
consistent with each other and as simple and easy to use as possible.
Execute permission is handled automatically based on file extensions:
files with extensions .cgi and .pl are considered executable because
the target usage is web posting. File read/write permissions always
follow the schemes of the directories they are in. For example, files
are world readable if they are under the web directory, but not world
accessible under private directories. To share files, users can move
them to public directories or group directories. File ownership is
handled automatically as well: like read/write permissions, user and
group owners always follow the schemes of the directories the files
are under. Upload, copy, move and rename operations must be careful
to keep permissions and ownership correct.

At the advanced usage level, there is a need to change file
permissions. For example, a user may want to make a .cgi uploaded
through Windows file sharing executable. A user interface for advanced
users is provided to handle these cases.

Because a user can have access to thousands of files, the scalability
of the user interface must be carefully considered. The user
interface must scale to at least 1000 files.

Dynamic DNS  (PRD 3.1.5) -- 2wks

Dynamic DNS is an add-on to the current DNS and DHCP capabilities of
Cobalt machines.  Effectively, it binds a DNS hostname to a MAC
address thereby allowing Qubes that receive their IP addresses from
service provider DHCP servers to have static hostnames.  Qubes can
also use the same mechanism to assign static hostnames to machines
configured by them through DHCP.  Dynamic DNS is accomplished by
adding or removing DNS entries whenever a DHCP lease is obtained or
expires.  In order for the change in DNS names to be propagated
throughout the system, the Time-To-Live (TTL) of the domain with
dynamic hostnames must be low, but not so low as to have data
transfers be too frequent.

The Dynamic DNS UI will be an add-on to the current DNS UI.
Specifically it will consist of a new record type, DYNDNS, which will
only be available if DHCP has been previously enabled. There will also
be a screen which lets the administrator enter information needed for
the Dynamic DNS record. That screen will be in the style of the other,
similar, screens used to collect information for the other record
types.  The DYNDNS screen will collect the intended hostname and the
MAC address of the machine it is being assigned to.  Dynamic DNS
records will show up along with all the other records in a domain.
When one is in use, that is, when the machine to which it has been
assigned is on, the IP address of the machine will show up in the
"Response" column.  When a machine is not on, and therefore does not
have an IP assigned, the "Response" column will display "N/A",
"" or some other symbol showing that the hostname in question
does not currently have an IP assigned.

Each DYNDNS record will be represented in CCE with a DnsRecord object.
To represent a Dynamic DNS record the DnsRecord object needs a "mac"
field added.  The "mac" field will only be defined for Dynamic records
and will contain the MAC address of the machine to which the hostname
in that record is assigned.  The domain to which the Dynamic records
belong will be represented by a normal DnsSOA object.

Handlers & Other Backend Work
The current DNS handlers will need to be modified to recognize a
DnsRecord which has the "mac" field defined as a Dynamic
record. Additionally, whenever the DHCP server grants a lease to a
machine with a Dynamic DNS hostname, or when that lease expires, the
DnsRecord object(s) in CCE must be changed to reflect the new
resolution of that hostname and the DNS configuration files must be
flushed to disk.  This communication should be accomplished via
script(s) running as a CRON job which scan the dhcpd.leases file
and push the details of any changes into CCE.
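The scan step of such a script could look like the following sketch,
assuming the ISC dhcpd.leases format; the function name is
illustrative:

```python
import re

def scan_leases(leases_text):
    """Extract (ip, mac) pairs from ISC dhcpd.leases content.

    Real lease entries also carry start/end times; this sketch only
    pulls what the DnsRecord update needs.
    """
    pairs = []
    for ip, body in re.findall(r"lease\s+(\S+)\s*\{(.*?)\}",
                               leases_text, re.S):
        m = re.search(r"hardware ethernet\s+([0-9a-fA-F:]+);", body)
        if m:
            pairs.append((ip, m.group(1).lower()))
    return pairs
```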

Active Monitor  (PRD 3.2.11) -- 1wk

Active Monitor on Pescadero should have two additional
features: configurable error emails and a configurable system
check interval.  The configurable error message should only allow
the first part of the error email to be edited (see the current
template for a better idea of what this means).

The UI for the system check interval is just a text box or a drop
down list on the current AM configuration page.  The interval data
should be limited to a range between 10 minutes and 2 hours.  The
configurable error message can also be represented as a large text
box on the AM configuration screen. There needs to be a "Return to
default" button in the UI as well which will erase the error 
message in CCE.

Handlers, etc.
When a new error message is set, a handler needs to put that error
message in the errorMessage field of the ActiveMonitor
object.  When swatch needs to send out an error email it should
first check that field; if that field is empty, swatch should
get its default error message via gettext.

The ActiveMonitor object will need two new fields:

interval (the number of minutes between checks)
errorMessage (the error message)
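The message fallback and interval limits can be sketched as follows;
the dict stands in for the CCE ActiveMonitor object and the names are
illustrative:

```python
def error_email_body(active_monitor, default_message):
    """Use the admin-set message if present, else the gettext default.

    active_monitor stands in for the CCE ActiveMonitor object;
    default_message would come from gettext in the real system.
    """
    custom = active_monitor.get("errorMessage", "")
    return custom if custom else default_message

def clamp_interval(minutes):
    """Limit the system check interval to the 10 min - 2 hr range."""
    return max(10, min(minutes, 120))
```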


From the point of view of our software, a wireless device can be
treated as just one more Ethernet connection once it has
completed its data-link level connection.  Before that connection
is completed, however, the user must supply several pieces of
data: the WLAN mode (ad-hoc, Infrastructure, Infrastructure AP),
any mode-dependent information, the kind of channel, if the
network is encrypted and how, etc.  The exact set of data needed
will depend on the card and driver used.  Most drivers let the
kernel treat the wireless card as one more Ethernet device by
converting the 802.11 frames into Ethernet frames.  This is also
dependent on the driver; Ethernet conversion is not part of the
802.11 standard.

All the places in the UI that deal with IP settings and transfer
statistics will need to have the wireless interface added.  A new
page will need to be added under the System section.  It should
contain fields for the user to supply the necessary data for the
wireless card to make its data-link level connection.  What,
exactly, that data will be will not be known until we know what
card and driver we will use.

All handlers dealing with the current Ethernet devices or IP
settings (DHCP handlers, handlers dealing with route and ifconfig,
etc.) should be modified to take the new device into account.  A
new handler will have to be written to match the new page in the
UI.  Each time new settings are entered into that page the driver
for the wireless card will need to be removed and reloaded with
the new parameters and /etc/modules.conf will need to have the
same changes made to it.  The exact syntax of the parameters is,
again, driver dependent.

There will need to be a new CCE object called
WirelessSettings.  It will contain any data necessary to 
initialize the wireless card.  This data will only be known after
a card has been selected.

DHCP Auto-config  (PRD 3.1.5) -- 1.5wks

The DHCP server is not an easy feature to understand. Configuring it
correctly requires in-depth networking knowledge. Network
auto-configuration sets up the DHCP server automatically. However,
subsequent changes in network settings can easily invalidate that
setup. A separate auto-configuration for DHCP is necessary.

If a Cobalt machine is set up by auto-configuring the network, the
DHCP server is in auto-config mode. Under this mode, the DHCP server
is configured automatically based on other settings on the
machine. Because configuration is done under strict rules and dealing
with customization is non-trivial, users cannot change DHCP settings
directly under this mode. All DHCP parameters become read-only
displays, and widgets to add, modify or remove DHCP assignments are
disabled. Text on the parameter modification pages tells users that
modification is not allowed under this mode. Disabling auto-config
mode does not erase or change DHCP settings already generated, so
users can disable this mode and go into the user interface pages to
customize them. When auto-config mode is enabled from the user
interface, all DHCP settings are replaced by freshly generated
ones.  This tremendously simplifies the implementation because the
original settings are disregarded. However, users must be prompted
and asked whether it is alright to lose the old settings.

The configuration behaviours under auto-config mode are as follows:
- The DHCP server always gives its domain name to clients as their
  domain.
- The client DNS setting is always the IP address of the primary
  interface.
- Maximum lease time is always 86400 seconds.
- For the primary interface, the client netmask is always the netmask
  of the primary interface.
- For the primary interface, the client gateway is always the IP
  address of the primary interface.
- For the primary interface, there are always 1 or 2 pools of
  dynamically assigned IP addresses. The pool(s) include all IP
  addresses within the network connected to the primary interface
  except the one used by the interface, the broadcast address and the
  lowest 48 IP addresses on the network. If the IP address of the
  primary interface splits the remaining addresses into 2 ranges,
  there are 2 pools; otherwise, there is 1. The 48 IP addresses are
  reserved for future use and other assignments. If there are fewer
  than 48 IP addresses on the network, there are no dynamically
  assigned IP addresses.
- There are no static IP address assignments.
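The pool computation above can be sketched as follows (a sketch only;
the function name and the (low, high) return format are illustrative):

```python
import ipaddress

def dhcp_pools(if_ip, netmask):
    """Compute the auto-config dynamic pools for the primary interface.

    Rules from the text: skip the interface address, the broadcast
    address and the lowest 48 addresses; split into two pools if the
    interface address falls inside the remaining range.
    """
    net = ipaddress.ip_network(f"{if_ip}/{netmask}", strict=False)
    addrs = list(net)               # network address .. broadcast
    usable = addrs[48:-1]           # drop lowest 48 and broadcast
    if not usable:
        return []                   # fewer than 48 addresses: no pools
    iface = ipaddress.ip_address(if_ip)
    if iface in usable:
        i = usable.index(iface)
        pools = [usable[:i], usable[i + 1:]]
    else:
        pools = [usable]
    return [(str(p[0]), str(p[-1])) for p in pools if p]
```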


DNS Auto-config  (PRD 3.1.5) -- 1.5wks

DNS is a crucial service in many networks. However, without an
in-depth understanding of what DNS is, users can have a hard time
configuring it to work. Network auto-configuration does set up DNS,
but further maintenance still relies on users.  DNS auto-configuration
can help solve the problem.

DNS auto-config is a mode that users can turn on or off. It is enabled
if the system is configured through network auto-config. Otherwise, it
is disabled. This is based on the assumption that if users chose not
to use network auto-config, they are likely deploying Cobalt
machines into existing infrastructure.

Users cannot modify DNS domain or record settings under DNS
auto-config mode because this can introduce conflicts, and conflict
management is not an easy problem. They can browse settings generated
by auto-config, so inputs are changed to display-only and widgets for
setting modification are disabled. Text on the user interface tells
users they cannot change settings under auto-config mode. When
switching DNS auto-config from enabled to disabled, all the domain
and record settings remain, so users can customize them. When
switching DNS auto-config from disabled to enabled, users are
prompted so that they know old settings will be replaced by newly
generated ones in the process. If they choose OK, auto-config mode is
enabled and settings are generated.

The behaviours under auto-config mode are:
- Default SOA administrator email is blank, refresh interval is 10800
  seconds, retry interval is 3600 seconds, expire interval is 604800
  seconds, time-to-live is 86400 seconds.
- There are no forwarding servers.
- There is no zone transfer access.
- Zone transfer format is RFC2317.
- If secondary interface has IP address setup, www.<domain> points to
  the IP address of the secondary interface. Otherwise, it points to
  the IP address of the primary interface. <domain> is the domain name
  of the Cobalt machine.
- <host>.<domain> points to the IP address of the primary
  interface. <host> is the host name of the Cobalt machine.
- If SMTP, POP or IMAP is enabled, mail.<domain> is an alias to
  <host>.<domain>.
- If SMTP is enabled, smtp.<domain> is an alias to <host>.<domain>.
- If POP is enabled, pop.<domain> is an alias to <host>.<domain>.
- If IMAP is enabled, imap.<domain> is an alias to <host>.<domain>.
- If SMTP is enabled, smtp.<domain> is set to be a high priority mail
  server for <domain>.
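The record generation can be sketched like this; the MX priority
value 10 is an assumption (the text only says "high priority"), and
the function name and (name, type, value) tuples are illustrative:

```python
def autoconfig_records(host, domain, primary_ip, secondary_ip=None,
                       smtp=False, pop=False, imap=False):
    """Generate the record set described above as (name, type, value)."""
    fqdn = f"{host}.{domain}"
    recs = [
        # www points at the secondary interface if it has an address
        ("www." + domain, "A", secondary_ip or primary_ip),
        (fqdn, "A", primary_ip),
    ]
    if smtp or pop or imap:
        recs.append(("mail." + domain, "CNAME", fqdn))
    if smtp:
        recs.append(("smtp." + domain, "CNAME", fqdn))
        recs.append((domain, "MX", "10 smtp." + domain))
    if pop:
        recs.append(("pop." + domain, "CNAME", fqdn))
    if imap:
        recs.append(("imap." + domain, "CNAME", fqdn))
    return recs
```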


RFC 2052 defines an experimental DNS RR which specifies the location
of the server(s) for a specific protocol and domain (like a more
general form of MX).  Microsoft has taken the initiative to use these
records for services such as Active Directory.

The SRV RR allows administrators to use several servers for a single
domain, to move services from host to host with little fuss, and to
designate some hosts as primary servers for a service and others as
backups.
Creation and manipulation of SRV records works like that of MX
records.  The fields the user must specify when creating or modifying
an SRV record are:
- The symbolic name of the desired service, as defined in Assigned
  Numbers or locally (case insensitive).
- The protocol the service uses (tcp or udp).
- The domain the RR refers to.
- A priority similar to that used in MX records; the lowest priority
  wins.
- A weight used when pseudorandomly choosing between records of the
  same priority.
- The port to be used when contacting the target machine.
- The hostname of the machine on which the service is located.

When records are created in our zone files, two records must be made
for each entry.  The reason is that MS has introduced the underscore
as a part of their SRV records, and this is the way IE assumes they
will be found.

telnet.tcp	SRV 0 1 23 old-slow-box.asdf.com.
		SRV 0 3 23 new-fast-box.asdf.com.

_telnet._tcp	SRV 0 1 23 old-slow-box.asdf.com.
		SRV 0 3 23 new-fast-box.asdf.com.

See RFC2052 for an example zone file.  See
http://www.nominum.com/resources/faqs/bind-faq.html#w2k for directions
on allowing underscores in BIND.

LDAP Directory  (PRD 3.1.8)

*** We believe we still do not have a full understanding of the real
customer needs/requirements for LDAP...

The LDAP directory currently allows only read-only access to
user and group information.  By using the back-perl back-end,
we may expand our LDAP directory to a read and write state by
fully binding ourselves to CCE.  Doing this in perl gives us a
faster turnaround time for improvements.
Classes that will be exported in the directory will be:
    ObjectClasses: Top, Person, Account, PosixAccount, CobaltAccount
    ObjectClasses: Top, PosixAccount, GroupAccount, GroupOfNames

The format of these objects will stay as close as possible to that of
LDAP results from Carmel.

Raw access to manipulating CCE Objects can be implemented, and be
configured by default off with no UI access.  This would be simple to
implement and may prove useful later.

This interface would export raw classes such that the distinguished
name would be in the format:

dn: oid=<oid>, o=cce, <basedn>
  <basedn> is the base distinguished name as set by the administrator
  <oid> is the object ID of the object

The distinguished name used in making additions would be in the format:

dn: class=<classname>, o=cce, <basedn>
  <classname> is the name of the class they are creating.

This would allow the user to define the class which he is creating.

In this raw export of the CCE database, namespaces will be referenced
by using a period ('.') as the separator between namespace and
fieldname.  This LDAP directory will also return the same error codes
and messages as returned by CCE.  Access will be granted as is done in
CCE, relying on it for authentication.
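The DN and field naming conventions above can be sketched as small
helpers (function names illustrative):

```python
def raw_dn(oid, basedn):
    """DN for addressing an existing CCE object by its oid."""
    return "oid=%s, o=cce, %s" % (oid, basedn)

def add_dn(classname, basedn):
    """DN used when creating a new object of a given CCE class."""
    return "class=%s, o=cce, %s" % (classname, basedn)

def exported_field(namespace, fieldname):
    """Namespaced CCE field name: '<namespace>.<fieldname>'."""
    return "%s.%s" % (namespace, fieldname)
```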

The UI of the LDAP directory will stay the same as before.


NNTP  (PRD 3.2.3) -- 4wks

A news server should be setup to allow maillist entries to be
accessible over NNTP.  Aliases for groups and mailling lists will also
be setup to properly send messages to this NNTP server.  Cyrus IMAP
can also house any news listings in public dropboxes for imap viewing.
Doing this would allow for an easier listing of mailling list archives
within webmail.  Many ISPs such as @home do network sweeps on their
clients to check if they are running NNTP servers.  The reason for
this is not only to limit the massive traffic associated with news
servers, but the fact that spammers and malicious user can use these
news servers to bounce messages off them.  For this reason, the
newsserver should be setup such that it is by default firewalled off
from the internet connection.

Cookie Cutter  (PRD 3.1.15, 3.1.16) -- 3wks

Cookie cutter (CC) is a system designed to let SPs massively
install, uninstall and configure software on multiple Cobalt machines
when they are deployed at customer sites. It is like cutting a lot of
cookies using the same cookie cutter so that they all look the
same. This allows SPs to bring all the Cobalt machines they deploy up
to a certain specification. SPs can, therefore, add value for their
customers by providing SP-specific features.

There are several considerations that govern the design of CC:
- CC does not allow SPs to perform customization that is not
  compatible with Cobalt updates. In other words, Cobalt updates
  should not overwrite SP customizations.
- CC supports installation and uninstallation of PKGs, the standard
  software package format for Cobalt machines.
- CC only works during first boot when users have not yet had a chance
  to customize their machines. This makes sure SPs can impose access
  control on features before users try to use them. Also, this
  simplifies engineering effort tremendously because Cobalt machines
  are still relatively homogenous.
- CC supports key installation. This is crucial to network security
  later on.
- CC supports popular network configuration assignment methods used by
  broadband SPs. It supports static IP address and dynamic assignment
  through DHCP and PPPoE.
- CC is a client/server system. Servers are located at SPs'
  network. Cobalt machines are clients.

Throughout this text, we will refer to CC servers as cookie cutters
and to CC clients as cookie dough.

Cookie dough works in 3 stages - cookie dough, cookie cut and
clean-up. Each of them has sub-phases.

For debugging purposes, the whole process is logged in a log file on
cookie dough.

The CC process runs during the first boot, and the first boot
only. It occurs before the LCD-based configuration.

The CC process begins with the cookie dough stage. During this stage,
the cookie dough attempts to discover cookie cutters. On the Qube,
discovery is done on the secondary interfaces. However, it is
recommended to make the discovery interface(s) configurable for future
extensibility. Cookie cutters are implemented as PPPoE access
concentrators (see RFC2516). The reason is that cookie dough does not
have an IP address when it talks to cookie cutters. An alternative is
to use DHCP, but many SPs already have DHCP servers on their
network. Multiple PPPoE access concentrators can be differentiated by
names and services. PPPoE servers on cookie cutters provide the
"cookieCutter" service. To discover cookie cutters, cookie dough
starts the program pppoe to connect to PPPoE access concentrators with
the "cookieCutter" service. PPPoE Active Discovery Offer packets
should respond with this same service name, and use the
vendor-specific tag of the frame to list the BlueLinQ server URL of
the cookie cutter process. This URL can only be relative because the
cookie cutter is the only machine the cookie dough can talk
to. Private Enterprise Code 5548 should be used to identify this as
Sun Cobalt specific. Because cookie dough does not have a username
and password set up initially, access concentrators do not require
authentication. If no cookie cutters can be found, the cookie dough
waits for 15 seconds and tries to discover a cookie cutter
again. This implies that machines not distributed through SPs cannot
have the cookie cutting software installed, but this is necessary to
ensure machines are up to SP specifications, even in the case where
network cables are not plugged in during first boot. If the
connection is successful, the cookie dough enters the cookie cut
stage.

There are multiple steps in the cookie cut stage. This is the stage
when the customization work occurs. Atomicity needs to be carefully
considered in this stage.  If the operation is aborted in the middle
due to unexpected conditions like power failure, it should continue
seamlessly when it resumes. Each cookie dough keeps the current phase
in a file. When a phase finishes, the current phase is changed to the
new phase. When the system fails in the middle of a phase, the
system will restart the current phase.
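The checkpointing described above can be sketched as follows; the
state file path and function names are illustrative:

```python
import os

STATE_FILE = "/var/run/cookie_phase"   # illustrative path

def run_phases(phases, state_file=STATE_FILE):
    """Run (name, action) phases in order, persisting the current
    phase name so a crash mid-phase restarts that same phase."""
    names = [name for name, _ in phases]
    start = 0
    # Resume point: the phase recorded before an interruption, if any.
    if os.path.exists(state_file):
        with open(state_file) as f:
            recorded = f.read().strip()
        if recorded in names:
            start = names.index(recorded)
    for name, action in phases[start:]:
        with open(state_file, "w") as f:
            f.write(name)              # checkpoint before the phase runs
        action()
    os.remove(state_file)              # all phases done
```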

The first phase in the cookie cut stage is to download the BlueLinQ
package list from the BlueLinQ server to the cookie dough. This is
done like a normal BlueLinQ query, through an SSL web download. This
is important to prevent tampering.

After the package list download phase, the next phase in the cookie
cut stage is to download PKG files. All the PKGs specified in the
package list are inspected. The system only downloads the ones that
are for the product and not already installed. PKGs that do not meet
the criteria are silently ignored and not downloaded. The next phase
is to install PKGs. The package list is read again from top to
bottom. If a PKG is in the download directory, it is installed. After
the installation, the PKG is removed immediately to prevent it from
being installed again if this phase should repeat after a
disruption. After installation, the reboot phase occurs and the
system reboots.

The last stage is clean-up. During this stage, the package list and
temporary files are removed, and cookie cutting is disabled for
future boots.
Security needs to be tightened. On cookie cutters, PPPoE access
concentrators must not route any traffic out of the PPP interface. If
there is more than one PPPoE access concentrator on the network
providing the "cookieCutter" service, cookie dough must not perform
any cookie cutting.



In order for 3rd party developers to develop software for Cobalt
machines, there needs to be support for software licensing. Most
developers do not develop software for free. Many software packages,
therefore, have places to input keys before they can be installed or
activated. It makes sense to have support for licensing in BlueLinQ so
that developers can leverage this feature instead of building their
own independently. The objective is to provide tools that help
developers solve licensing issues more easily, rather than offering
complete solutions. Bullet-proof software licensing protection is out
of scope. Software should check the validity of licenses itself.

Free, try, rent, lease and buy are the target licensing schemes. Their
properties are as follows:

| Scheme | Cost | Duration  |
| Free   | 0    | Unlimited |
| Try    | 0    | Limited   |
| Rent   | >0   | Limited   |
| Buy    | >0   | Unlimited |

Licenses are implemented as XML files. They contain information about
what software the license is for, when it should expire and who
issued it. For example:

<license
  id="987105600"
  vendor="Sun Cobalt"
  name="WebMail"
  version="2.0"
  issuer="Sun Microsystems"
  effective="12 Feb 2001 12:00:00 -0800"
  expire="12 Mar 2001 12:00:00 -0800">

  <displayVendor locale="en" value="Sun Cobalt"/>
  <displayName locale="en" value="WebMail"/>
  <displayVersion locale="en" value="2.0 Beta"/>
  <renew locale="en" value="Visit www.cobalt.com to renew license."/>
</license>
Only the id, vendor, name, version and issuer attributes are
required. More package-specific attributes can be added.

Vendor, name and version identify the software the license is
for. Serial identifies the machine the license is for. The issuer
attribute specifies who issued the license; it is of the same type as
the issuer field in X.509 certificates.

The id attribute is a string generated by the issuer of the
license. The issuer needs to make sure it is unique among licenses for
the same software. One way to do it is to use the timestamp of the
license generation machine.

The version attribute can start with "<=" or ">=" to indicate that
the license applies to software versions less than or equal to, or
greater than or equal to, that version, respectively. Without a
comparison symbol it means exactly equal. Versions are composed of
strings separated by dots (e.g. 1.2.2b). Comparing versions means
splitting the strings at the dots and comparing segment by
segment. If either segment contains a non-numeric character,
alphabetic comparison is used.  Otherwise, numeric comparison is
used.
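The comparison rules can be sketched as follows (function names
illustrative):

```python
def compare_versions(a, b):
    """Compare dotted versions segment by segment: numeric comparison
    when both segments are numeric, alphabetic otherwise.
    Returns -1, 0 or 1."""
    sa, sb = a.split("."), b.split(".")
    for x, y in zip(sa, sb):
        if x.isdigit() and y.isdigit():
            x, y = int(x), int(y)      # numeric segment comparison
        if x != y:
            return -1 if x < y else 1
    # All shared segments equal: the longer version is the larger one.
    return (len(sa) > len(sb)) - (len(sa) < len(sb))

def license_matches(license_version, installed_version):
    """Apply the optional '<=' / '>=' prefix of the version attribute."""
    if license_version.startswith("<="):
        return compare_versions(installed_version, license_version[2:]) <= 0
    if license_version.startswith(">="):
        return compare_versions(installed_version, license_version[2:]) >= 0
    return compare_versions(installed_version, license_version) == 0
```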

The effective and expire attributes specify the effective and
expiration time of the license. If the effective attribute is not
specified, the license is effective immediately. If the expire
attribute is not specified, the license never expires. These
attributes are in date-time type specified in RFC822.

There are optional displayVendor, displayName and displayVersion
elements.  These are locale-specific strings displayed to users when
they view the license. If they are not specified, vendor, name and
version attributes are displayed instead.

Renew is an optional element. It contains instructions on how to
renew the license. The remind attribute specifies how many days
before the expiration day users should be reminded to renew. If not
specified, it defaults to 7 days.

To keep the license secure and intact, encrypted attributes can be
added. For example, a secret attribute can be introduced. It contains
encrypted license information that can contain those specified in
other attributes. Developers can choose how this secret is encrypted
and what exactly its content is. For example, a developer can encrypt
the vendor, name, version, issuer, effective time, expire time and
the maximum number of users into this secret. Their software then
uses a pre-installed X.509 certificate, signed by a trusted
certificate authority, to decrypt this secret. Licenses without
similar encrypted attributes are completely vulnerable to
attack. With this free-style encryption scheme, software can use
virtually any encryption algorithm and even perform network-based
verification.

Licenses are stored as files named
<vendor>-<name>-<version>-<id>. They are located in the repository
under /usr/license. This naming scheme makes licenses easily
locatable by the software that uses them. All licenses must be owned
by root and be readable and writable only by root.

Licenses are managed through the web user interface. The user
interface supports import, browsing, removal and export of licenses.
Export is necessary so that licenses can be redistributed. When an
import would overwrite an existing license, users are asked whether
they want to do so.

Expired licenses are garbage collected. A daily cron job inspects
every license in the repository and removes all those that have
expired. The cron job also reads the renew elements and compares them
with the expire attributes. If a renewal reminder needs to be sent on
that day, an email with the renew message is sent to admin. On the
day of expiration, another email is sent to admin to notify him/her
about the expiration.
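The cron job's per-license decision can be sketched as follows (the
function name and return values are illustrative):

```python
from email.utils import parsedate_to_datetime
from datetime import datetime, timedelta, timezone

def license_state(expire_attr, now, remind_days=7):
    """Classify one license for the daily cron job.

    expire_attr: the RFC822 'expire' attribute, or None (never expires).
    Returns 'ok', 'remind' (within remind_days of expiry) or 'expired'.
    """
    if expire_attr is None:
        return "ok"
    expire = parsedate_to_datetime(expire_attr)
    if now >= expire:
        return "expired"               # garbage collect, notify admin
    if now >= expire - timedelta(days=remind_days):
        return "remind"                # email the renew message
    return "ok"
```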

PUSH  (PRD 3.1.16) -- 2wks

Overall, the system works as follows: once every set period of time,
the client initiates an HTTPS connection with each of the servers
specified in its list of Package Push servers and downloads a list of
available packages. The client then checks that list for any new
packages that need to be installed. If it finds any, the client
downloads those packages via HTTPS and adds them to the list of
packages to be installed. After all the Package Push servers have
been contacted, the packages are installed. Any problems during an
install should be reported to the administrator of the client machine
and (if possible) the administrator of the Package Push server.
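One polling cycle can be sketched as follows; the callables stand in
for the HTTPS downloads and the BlueLinQ install step, and all names
are illustrative:

```python
def push_cycle(servers, installed, fetch_list, fetch_pkg, install):
    """Poll every Package Push server, then install what was queued.

    fetch_list(server) -> package names on that server (HTTPS list);
    fetch_pkg(server, pkg) downloads one package (HTTPS);
    install(pkg) is the BlueLinQ install step.
    """
    queued = []
    for server in servers:
        for pkg in fetch_list(server):
            if pkg not in installed and pkg not in queued:
                fetch_pkg(server, pkg)     # download now...
                queued.append(pkg)
    for pkg in queued:                     # ...install after polling all
        install(pkg)
```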

The interval at which the system checks with the Package Push
server is configurable in CCE to any value.  Care should be taken
when this value is set not to overload the Package Push server or
to make the interval so large as to make push useless.

Package Push should ship disabled by default and should not come with
any web-accessible UI.  The only way to turn it on should be by
installing some kind of package (signed by a trusted certificate) or
via the Cookie Cutter system. Once Package Push is enabled, there may
be a way to disable it via the web-based UI.

This system will be implemented as an add-on to BlueLinQ.  The only
appreciable changes are the increased emphasis on data integrity
(required HTTPS connections, etc.) and the automatic install of any
packages found to be available. The reader should be familiar with
the Carmel specification, section 3.14 (Software update), the
workings of the current version of BlueLinQ and the section of this
specification dealing with BlueLinQ security, as only the new
features are presented here.

Since Package Push is being implemented as an add-on to the current
BlueLinQ system, we can take advantage of the work already done on
BlueLinQ servers.  In fact, the differences between a Package Push
server and a BlueLinQ server are entirely administrative.  The
Package Push list of packages must be separate from the BlueLinQ
list. Each Package Push server must run on a webserver with SSL
available and must only be reachable through SSL.  The SSL
certificate for the Package Push server cannot be self-signed, but
must be signed by one of the trusted certificates (see the BlueLinQ
security section for more on SSL certificates and the BlueLinQ trust
model).

Data Integrity
To help prevent malicious packages from being installed, all data
transferred at the direction of a Package Push server must be sent
over SSL.  This verifies the identity of the server and maintains the
integrity of the package in transit.
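On the client side, the "no self-signed certificates" rule amounts to
building an SSL context that only accepts servers chaining to the
trusted CA list. A minimal modern-Python sketch (the ca_bundle path
and function names are illustrative, not from the spec):

```python
import ssl
import urllib.request

def make_push_context(ca_bundle=None):
    """SSL context for Package Push downloads.

    `ca_bundle` is a PEM file of trusted certificates (the BlueLinQ
    trust model's CA list); None falls back to the system store.
    """
    ctx = ssl.create_default_context(cafile=ca_bundle)
    ctx.verify_mode = ssl.CERT_REQUIRED  # reject self-signed servers
    ctx.check_hostname = True            # cert must match the hostname
    return ctx

def fetch_over_ssl(url, ca_bundle=None):
    """Fetch a package list or package body, verifying the server."""
    with urllib.request.urlopen(url, context=make_push_context(ca_bundle)) as resp:
        return resp.read()
```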

Automatic Install
After a package has been downloaded and verified, it is automatically
unpacked and installed.  The unpacking and installation are identical
to the BlueLinQ processes.  The actual installs occur in descending
order of the number of "Obsoletes" entries in a package.  After each
package is installed, the list of packages to be installed should be
reviewed with the newly installed package in mind.  Any errors
encountered during the unpacking and installation phases should be
reported to the administrator of the server from which the package
was downloaded, as well as the administrator of the client machine.

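The install ordering rule reduces to a one-line sort. The dict key
"obsoletes" is an assumed representation of a package's "Obsoletes"
entries, not a field name from the spec.

```python
def install_order(packages):
    """Order pending packages: most "Obsoletes" entries install first.

    Python's sort is stable, so packages with equal counts keep their
    download order.
    """
    return sorted(packages,
                  key=lambda pkg: len(pkg.get("obsoletes", [])),
                  reverse=True)
```

Since the pending list is re-reviewed after each install, the sort
would be re-applied to whatever remains in the queue.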

WebDAV stands for "Web-based Distributed Authoring and Versioning". It
is a set of extensions to the HTTP protocol that allows users to
collaboratively edit and manage files on remote web servers.

Several companies have incorporated WebDAV support into their
products:
- Oracle announced DAV support in IFS.
- HyperWave added WebDAV into its e-learning suite.
- Macromedia Dreamweaver includes WebDAV.
- Adobe GoLive, Photoshop, InScope, InDesign and InCopy all support
  WebDAV.

WebDAV is a growing technology, with adoption growth of about 25% in
December 2000 and 8% in January 2001. However, overall adoption is
still small compared to FrontPage (0.59% vs. 19.29%).

To conclude, Cobalt machines do not need to support WebDAV in the
2001 timeframe, but things can change in 2002.



The Linux support for L2TP is l2tpd. This software was released in
November 1998 as an alpha release but has not been updated since.
Even the web site for this software has not been updated. Also, L2TP
does not support data encryption; it only provides tunneling.



Although Carmel had a SCSI port, there was no support for it in the
user interface.  While there is a wide variety of SCSI devices out
there, the most common is a SCSI hard disk.

Given the low actual SCSI usage, the cost associated with supporting
it, and the fact that mass storage devices can be supported through
USB, no new user interface is provided for Pescadero.
