Production Configuration Guide

This guide is a catch-all for configuration, performance, and enterprise integration issues that are not a routine part of an initial installation but are likely concerns for any institution working toward production.

Table of Contents

    Optional Configurations
  1. Running Tomcat as non-root user
  2. HTTP proxy
  3. File-based Content Hosting
  4. Apache with Sakai
  5. Sakai and SSL
    Integrating Sakai
  1. Provider Overview
  2. Provider Registration
  3. Changing The Provider Module's Components
  4. Replacing the Provider Module
  5. Providers, Single Sign-on, and WebDAV
  6. CASifying Sakai
    Clustering Sakai
  1. Configuring your Cluster
  2. Starting and Stopping your Cluster
  3. Monitoring your Cluster
  4. Changing the Cluster

Optional Configurations

  1. Running Tomcat as non-root user

    *nix systems can use some iptables magic to allow Tomcat to run as a non-root user.  The iptables rules must first redirect ports 80, 443, and 25 to 8080, 8443, and 8025 respectively (one PREROUTING rule per port).

    Linux example:

    *nat
    :PREROUTING ACCEPT [510:80231]
    :POSTROUTING ACCEPT [12:2548]
    :OUTPUT ACCEPT [12:2548]
    -A PREROUTING -i eth0 -p tcp -m tcp --dport 80 -j REDIRECT --to-ports 8080
    -A PREROUTING -i eth0 -p tcp -m tcp --dport 443 -j REDIRECT --to-ports 8443
    -A PREROUTING -i eth0 -p tcp -m tcp --dport 25 -j REDIRECT --to-ports 8025
    COMMIT
    *filter
    [snip]
    -A RH-Firewall-1-INPUT -m state --state NEW -m tcp -p tcp --dport 8080 -j ACCEPT
    -A RH-Firewall-1-INPUT -m state --state NEW -m tcp -p tcp --dport 8443 -j ACCEPT
    -A RH-Firewall-1-INPUT -m state --state NEW -m tcp -p tcp --dport 8025 -j ACCEPT

    Then run everything under a dedicated user with /sbin/nologin as its shell (you will need to "su -s /bin/bash - sakai" from root to get in as this user when necessary), as in the sketch below.
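
    A minimal sketch of the user setup, assuming the Tomcat installation lives in /usr/local/sakai/tomcat and the account is named "sakai" (both are placeholders - adjust paths and names to your site):

    # create the service account with no login shell, and give it the Tomcat tree
    useradd -r -d /usr/local/sakai -s /sbin/nologin sakai
    chown -R sakai:sakai /usr/local/sakai/tomcat

    # become the sakai user (override its nologin shell), then start Tomcat
    su -s /bin/bash - sakai
    /usr/local/sakai/tomcat/bin/startup.sh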

  2. HTTP proxy

    In environments where local network policy or firewalls require the use of an upstream HTTP proxy / cache, Sakai needs to be configured to use it. Otherwise, components or services that make HTTP requests, such as the BasicNewsService behind the News tool's RSS feeds, cannot retrieve data from their target URLs: RSS feeds do not display in the News tool, and it is not possible to add new news channels.

    This can be fixed by adding these lines to the Sakai local startup script (e.g. /etc/rc.d/sakai), or to a Tomcat startup script such as startup.sh or catalina.sh:

    JAVA_OPTS="-DproxySet=true -DproxyHost=cache.some.domain -DproxyPort=8080"
    export JAVA_OPTS
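
    The -DproxySet/-DproxyHost/-DproxyPort flags are the older-style JVM proxy settings. On more recent JVMs you may need (or prefer) the standard http.proxyHost, http.proxyPort, and http.nonProxyHosts system properties instead; a sketch, with cache.some.domain and the bypass list as placeholders for your own values:

    JAVA_OPTS="-Dhttp.proxyHost=cache.some.domain -Dhttp.proxyPort=8080 -Dhttp.nonProxyHosts=localhost|*.some.domain"
    export JAVA_OPTS
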
  3. File-based Content Hosting

    By default Sakai stores all its files - including uploaded binaries - in the database.  If you wish to use file-based content hosting instead, you must:

    1. Configure your Sakai to do this, and
    2. Run a conversion to bring your files out of your database and into the file system.

    This latter conversion is needed even for a brand new database, since some resources ship with the starting Sakai DB.
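
    A sketch of the sakai.properties setting involved, assuming the Sakai 2.x-era property name - the exact service name after the "@" varies between Sakai versions, so check the commented examples in your own sakai.properties before copying this, and substitute your own storage path:

    # store resource bodies on the file system instead of in the database
    bodyPath@org.sakaiproject.service.legacy.content.ContentHostingService = /usr/local/sakai/content/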

  4. Apache with Sakai

    For many reasons, production systems will likely have an Apache web server on the front end to handle HTTP and HTTPS requests to Sakai. Apache is generally more trusted than Tomcat to run on the protected ports (80, 443). It is also more efficient at processing the SSL part of the requests.

    When Apache is handling the requests, it must send them to Tomcat for Sakai processing. This is done by using an Apache-Tomcat connector, which has to be properly configured on both the Apache side and the Tomcat side. The necessary Apache configuration is outside the scope of this document (see the Apache-Tomcat connector documentation from apache.org for details), but we can say something about the Tomcat side of the configuration. It involves turning off the HTTP connector and turning on the AJP connector. Simply comment out the parts of the configuration file ($CATALINA_HOME/conf/server.xml) that you don't want, and remove the comments from those that you do. Also make sure the port numbers set in Tomcat and Apache match, and add the URIEncoding option for proper character handling in Sakai, as in:

    <!-- Define an AJP 1.3 Connector on port 8009 -->
    <Connector port="8009" enableLookups="false" redirectPort="8443" protocol="AJP/1.3" URIEncoding="UTF-8" />
  5. Sakai and SSL

    Most Sakai installations will want to run under HTTPS so that connections between the browser and the server are SSL-secured. This protects user passwords as well as the data that goes in and out of Sakai.

    We do not recommend using Tomcat's SSL support: it is implemented in Java, and is much slower than a native SSL implementation such as the one found in Apache's SSL module. Better still, put a hardware SSL handler / load balancer in front of your app servers to take care of the SSL processing.

    See the Apache documentation for information on how to install an SSL plugin to Apache.
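
    As a rough illustration only (not a substitute for the Apache documentation), an SSL virtual host built on mod_ssl typically looks something like the sketch below; the ServerName and certificate paths are placeholders for your own values:

    <VirtualHost *:443>
      ServerName sakai.example.edu
      SSLEngine on
      SSLCertificateFile /etc/pki/tls/certs/sakai.example.edu.crt
      SSLCertificateKeyFile /etc/pki/tls/private/sakai.example.edu.key
      # requests are then passed on to Tomcat over the AJP connector
      # configured in the previous section
    </VirtualHost>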


Integrating Sakai

  1. Provider Overview

    Providers are the means by which Sakai can draw on external sources for its data.  Sakai has basically 3 such providers:

      - a user directory provider, which supplies user authentication and user information
      - a realm (group) provider, which supplies external group and role memberships
      - a course management provider, which supplies course lists and enrollment information

  2. Provider Registration

    Providers are found by the Sakai components that use them by looking for provider components registered under the names:

    org.sakaiproject.service.legacy.user.UserDirectoryProvider (the user directory provider)
    org.sakaiproject.service.legacy.realm.RealmProvider (the realm provider)
    org.sakaiproject.service.legacy.coursemanagement.CourseManagementProvider (the course management provider)

    The trick is to make sure that exactly one component gets registered in your Sakai setup under each of these names - no more.

    If you do not want a provider, set things up so that there is no component registered with the provider name. The provider client code will properly handle not having one defined.

  3. Changing the Provider Module's Components

    One way to change the provider is to modify the sakai source code file that controls the provider registration in the Provider module. The file, a Spring bean definition file, is found here:

    sakai-src/providers/component/src/webapp/WEB-INF/components.xml

    See this file for more details - examples of LDAP and Kerberos providers are included in comments. Edit this file to pick which Provider module provider you want to use, and re-deploy Sakai.
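
    As a minimal sketch, registering a provider in components.xml is just a matter of declaring a Spring bean whose id is the provider name Sakai looks up. The implementing class below (edu.myschool.sakai.MyUserDirectoryProvider) is a hypothetical placeholder for your own implementation or one of the bundled examples:

    <bean id="org.sakaiproject.service.legacy.user.UserDirectoryProvider"
          class="edu.myschool.sakai.MyUserDirectoryProvider" />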

  4. Replacing the Provider Module

    The other way to change the provider configuration is to completely remove the Provider module from your local sakai-src folder. Then you need to create a new module / project with the providers you write to satisfy the three provider APIs. You register your new components with the proper provider names, and include your module in the Sakai build and deploy.

    At runtime, the Sakai provider module will be missing, your new module will be present, and when the provider client components get started up they will be wired to your new provider components.

  5. Providers, Single Sign-On, and WebDAV

    If you integrate Sakai in some sort of single sign-on environment you will need to also make your provider for users work with the same environment. Most requests come in from browsers and will trigger the single sign-on for authentication. But our WebDAV support relies on the internal authentication system in Sakai, and the WebDAV protocol itself cannot handle the re-directs that single sign-on systems often require, so your provider must manage this burden.

    Sakai also has a direct login path (/portal/xlogin) to bypass the single sign-on and invoke internal authentication directly.

  6. CASifying Sakai (see also the page on Confluence devoted to CASifying Sakai)

    This can be done in one of two ways: by either an Apache module or a servlet filter.

      With an Apache module:
    1. Install mod_cas (or its equivalent for another SSO) under Apache.
    2. Edit Apache's httpd.conf and add this:

      AuthType CAS
      Require valid-user
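
      These directives normally live inside a <Location> block covering the URL that should trigger CAS authentication; the exact path depends on how your portal invokes container login, so the one below is only an assumed example:

      <Location /sakai-login-tool/container>
        AuthType CAS
        Require valid-user
      </Location>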
    3. Edit Tomcat's server.xml

      Disable Tomcat's container authentication by adding the following parameter to the JK2 connector configuration:

      tomcatAuthentication="false"

      When you're done, the connector declaration should look something like this:

      <Connector port="8009"
      enableLookups="false" redirectPort="8443" debug="0"
      tomcatAuthentication="false" URIEncoding="UTF-8"
      protocol="AJP/1.3" />
    4. Edit sakai.properties with:

      top.login = false
      container.auth = true

      With a servlet filter:
    1. Obtain a filter and install the appropriate jar into $TOMCAT/webapps/sakai-login/WEB-INF/lib/.
    2. Configure sakai-login's web.xml ($TOMCAT/webapps/sakai-login/WEB-INF/web.xml):

      First, add your filter configuration, usually after any others:

      [...]
      <filter-mapping>
        <filter-name>sakai.request.container</filter-name>
        <servlet-name>sakai.login.container</servlet-name>
        <dispatcher>REQUEST</dispatcher>
      </filter-mapping>

      <!-- begin servlet filter -->
      <filter>
        [...params...]
      </filter>

      <filter-mapping>
        <filter-name>my-filter</filter-name>
        <url-pattern>/container</url-pattern>
      </filter-mapping>
      <!-- end servlet filter -->

      <servlet>
        <servlet-name>sakai.login</servlet-name>
        <servlet-class>org.sakaiproject.tool.login.LoginTool</servlet-class>
      [...]

      Next, add another filter to force requests for /container through Sakai's RequestFilter. This must be placed close to the top of web.xml, near:

      [...]
        <filter-class>org.sakaiproject.util.RequestFilter</filter-class>
      </filter>

      <!-- Force request for /container through the request filter -->
      <filter-mapping>
        <filter-name>sakai.request</filter-name>
        <url-pattern>/container</url-pattern>
        <dispatcher>REQUEST</dispatcher>
        <dispatcher>FORWARD</dispatcher>
        <dispatcher>INCLUDE</dispatcher>
      </filter-mapping>

      <filter>
        <filter-name>sakai.request.container</filter-name>
        <filter-class>org.sakaiproject.util.RequestFilter</filter-class>
      [...]

      If your tests (see below) fail, try substituting "/*" for "/container" in the above stanza.

    3. Restart Sakai and test: clicking on the "Login" link should redirect you for authentication and then log you into Sakai.

Clustering Sakai

You may need to run multiple Sakai application servers in a cluster to support your user load. At this time, we don't have a good feel for how many users each application server can support, but it's likely that an application server class machine should be able to handle a load of 100 concurrent users or more. You must experiment with loads and your environment to see what you will need.

Note: load testing of Sakai is scheduled to occur around release time - look for information about load testing and results on Sakai Collab in the Sakai Development site.

  1. Configuring your Cluster

    Sakai clusters by running a number of application servers, each running the same version of Sakai. The only difference between them is the configuration value "serverId", usually set in a file in sakai.home called "local.properties". This file is an optional extension to sakai.properties; if present, it will be used. The advantage of using this file is that it is the only file that needs to differ between the clustered app servers - sakai.properties and the rest of Sakai stay the same.
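
    For example, each app server's local.properties can contain nothing more than its own identifier (the value shown is an arbitrary example - use whatever naming scheme suits your site):

    # local.properties on the first app server
    serverId=app01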

    You need a front end "sprayer" or load balancer to take requests and distribute them among the machines in the cluster. It must preserve "session stickiness": once a user establishes a Sakai session, they remain on the same app server until they log out. Sakai does not support session sharing or serialization.

    The machines in the cluster must also share the same back end database. MySQL or Oracle are acceptable for this.

  2. Starting and Stopping your Cluster

    Simply start and stop each app server in the normal way. It registers with the cluster on startup, and unregisters on shutdown. If an app server terminates without a proper shutdown, the other app servers will notice this and clean up the sessions left open by the missing server.

  3. Monitoring your Cluster

    The Admin's OnLine tool shows who's on and what app server they are connected to. This is one way to see that your cluster is working. Note that if an app server is running but has no active sessions, it will not show up in the list.

  4. Changing the Cluster

    You can add machines to the cluster, and remove machines from the cluster, without bringing your entire service down. This is useful if you have a load increase and want to bring more machines on-line temporarily to handle it. It can also be used to rotate machines out for maintenance, and back in, without service interruption.

    When removing a machine from the cluster, first configure your front end load balancer to stop sending new requests to that app server. Then watch for users to drain off of the app server. The Admin's OnLine tool or the access logs can be monitored to confirm that everyone is off the machine. At that point it is safe to shut it down.