Deploying with Docker

The Curity Identity Server is straightforward to dockerize. There is currently no pre-built Docker image delivered, but creating one is a simple task.

Docker is also a suitable distribution method, since nodes can be pre-provisioned in Curity ahead of deployment, so spinning up new containers scales the cluster with ease. The only requirement is that each node is given a unique name to operate in the environment.

Building a Docker Container

This section describes how to build a self-installing Docker container from a Curity installation package. Curity ships with an auto-installer that can be used for this purpose.

The steps needed to build a container are:

  1. Prepare the workspace
  2. Add a Dockerfile
  3. Optionally add database drivers or plugins
  4. Build the container
  5. Run the container

1. Prepare the workspace

Place the Curity Linux installation in an empty workspace.

Listing 68 Prepare a workspace
$ mkdir curity
$ cd curity
$ cp ~/downloads/idsvr-4.1.0-linux-release.tar.gz .

2. Add a Dockerfile to the workspace

Now it’s time to create the Dockerfile. There are no real restrictions on how it should look; Curity only requires that libcrypto is available. We recommend an Ubuntu or CentOS base image, but that is not required.

The Dockerfile in this example runs the Curity installation during the image build. This is important, because the cluster keys are generated at that time and will thus be the same in all instances of the image.

Important

For more advanced deployments, it is advisable to generate one admin image and one runtime image, and to copy the cluster runtime keys from the admin image into the build of the runtime image, as sketched below.
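A minimal sketch of that approach using a multi-stage build is shown here. The base image names and the location of the cluster keys (shown as /opt/idsvr/etc) are assumptions for illustration only; verify the actual files against your installation.

# Sketch only: the image names and the key location are assumptions
FROM your-repo/curity-admin:4.1.0 AS admin

FROM your-repo/curity-runtime-base:4.1.0
# Reuse the cluster configuration generated during the admin image build,
# so that admin and runtime nodes share the same cluster keys
COPY --from=admin /opt/idsvr/etc/ /opt/idsvr/etc/
ENV SERVICE_ROLE runtime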

Listing 69 Dockerfile for a basic image
FROM ubuntu:16.04

EXPOSE 2024
EXPOSE 6789
EXPOSE 8443
EXPOSE 6749

WORKDIR /root

ENV IDSVR_HOME /opt/idsvr
ENV JAVA_HOME $IDSVR_HOME/lib/java/jre
ENV PATH $IDSVR_HOME/bin:$JAVA_HOME/bin:$PATH
ENV SERVICE_ROLE admin

ARG RELEASE_VERSION
ARG PASSWORD

COPY idsvr-$RELEASE_VERSION-linux-release.tar.gz /tmp/idsvr-install.tar.gz
RUN cd /tmp && tar xvzf idsvr-install.tar.gz

# Note: Remember to set $PASSWORD argument when running this builder
RUN /tmp/idsvr-$RELEASE_VERSION/idsvr/bin/unattendedinstall

RUN mv /tmp/idsvr-$RELEASE_VERSION/idsvr $IDSVR_HOME

RUN rm -rf /tmp/idsvr-install*

WORKDIR $IDSVR_HOME

CMD ["sh", "-c", "$IDSVR_HOME/bin/idsvr -s ${SERVICE_ROLE}"]

The ports exposed are the following:

  • 2024 = the SSH port for the admin CLI
  • 6789 = the cluster communication port (only needs to be exposed on the admin node)
  • 8443 = the default runtime service port for the node
  • 6749 = the default admin Web UI and REST API port

3. Add Drivers and Resources

To add drivers to the installation, simply add them before the CMD line in the Dockerfile above.

Adding the MySQL driver, for example, would look as follows:

...
WORKDIR $IDSVR_HOME
ADD mysql-connector-java-5.1.45-bin.jar $IDSVR_HOME/lib/plugins/data.access.jdbc/mysql-connector-java-5.1.45-bin.jar
...

Drivers and resources can of course also be added by mounting volumes at a later stage, as sketched below; which approach to use is up to the company’s Docker strategy.
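As a sketch, the MySQL driver from the example above could instead be mounted at run time (paths as in the Dockerfile example):

$ docker run -d -p 8443:8443 -p 6749:6749 \
    -v $PWD/mysql-connector-java-5.1.45-bin.jar:/opt/idsvr/lib/plugins/data.access.jdbc/mysql-connector-java-5.1.45-bin.jar \
    your-repo/curity:latest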

4. Build the container

Listing 70 Building the Docker container
docker build --build-arg RELEASE_VERSION=4.1.0 --build-arg PASSWORD=SomeRandomPassword \
    -t your-repo/curity:4.1.0 -t your-repo/curity:latest .

This will produce an image with the tags your-repo/curity:4.1.0 and your-repo/curity:latest.
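If other Docker hosts need the image, push both tags to the registry (this assumes your-repo refers to a registry you control):

$ docker push your-repo/curity:4.1.0
$ docker push your-repo/curity:latest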

Note

Place the image in a company-local namespace, since official Curity Docker images may become available in the future and could collide with an overly generic namespace.

5. Run the container

Listing 71 Running the container
$ docker run -it -p 8443:8443 -p 6749:6749 your-repo/curity:latest

If the node should be clustered across more than one Docker host, the cluster port 6789 needs to be published as well, as in the sketch below.
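For example, a sketch of starting an admin node with the cluster port published, using the SERVICE_ROLE variable from the Dockerfile (the container name is illustrative):

$ docker run -d --name curity-admin \
    -p 8443:8443 -p 6749:6749 -p 6789:6789 \
    -e SERVICE_ROLE=admin \
    your-repo/curity:latest

The admin Web UI should then answer on https://localhost:6749/admin (use curl -k to ignore the self-signed certificate).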

Important

Docker images for some Linux distributions have a very large default value for the “open files” limit. On some systems this may cause Curity to attempt to allocate a large amount of memory during startup, eventually causing the startup to fail with an error similar to sys_alloc: Cannot allocate 34359738368 bytes of memory (of type "db_tabs"). In such cases it is recommended to set the open-files limit to a reasonable value (e.g. 1024) in the container definition, as shown below.
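For example, the limit can be lowered with Docker’s --ulimit flag, using the 1024 value suggested above (in docker-compose, the corresponding setting is the ulimits key on the service):

$ docker run --ulimit nofile=1024:1024 -p 8443:8443 -p 6749:6749 your-repo/curity:latest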

Running with docker-compose

Since Curity depends on a data source, it can be convenient to run the entire setup in the same environment. How far to take this depends on how the company’s data sources are set up and what is desired from an architectural perspective.

To set up a docker-compose system, we can use the Docker image built in the previous section.

docker-compose also has the benefit of making the mounting of resources into containers simple and uniform. Even if only Curity nodes are run, it can still add value to the deployment.

Creating a docker-compose.yml file

First we need to define the docker-compose file. The example below uses a MySQL database and a single-node Curity cluster. It assumes that the image has been built elsewhere; it is also possible to have docker-compose perform the build defined in the previous section, as sketched below. Please consult the docker-compose documentation for details.
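As a sketch, the admin service in the listing below could declare a build section instead of an image, reusing the build arguments from Listing 70 (embedding PASSWORD in the file like this is for illustration only):

admin:
    build:
      context: .
      args:
        RELEASE_VERSION: "4.1.0"
        PASSWORD: SomeRandomPassword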

Note

To communicate with MySQL, the Curity image must have been created with the MySQL JDBC driver, or the driver must be mounted in via the docker-compose file.

Listing 72 A docker-compose example
version: '2'
services:
  admin:
    image: your-repo/curity:4.1.0
    ports:
      - 8443:8443
      - 6749:6749
      - 6789:6789
    volumes:
      - ./usr/share/templates/overrides:/opt/idsvr/usr/share/templates/overrides
      - ./usr/share/templates/template-areas:/opt/idsvr/usr/share/templates/template-areas
      - ./usr/share/messages/overrides:/opt/idsvr/usr/share/messages/overrides
      - ./usr/share/webroot/custom:/opt/idsvr/usr/share/webroot/custom
    environment:
      - SERVICE_ROLE=admin
      - ADMIN=true
    depends_on:
      - db
    links:
      - db:database

  runtime:
    image: your-repo/curity:4.1.0
    ports:
      # Map to different host ports than the admin node, to avoid
      # collisions when both services run on the same Docker host
      - 8444:8443
      - 6750:6749
    volumes:
      - ./usr/share/templates/overrides:/opt/idsvr/usr/share/templates/overrides
      - ./usr/share/templates/template-areas:/opt/idsvr/usr/share/templates/template-areas
      - ./usr/share/messages/overrides:/opt/idsvr/usr/share/messages/overrides
      - ./usr/share/webroot/custom:/opt/idsvr/usr/share/webroot/custom
    environment:
      - SERVICE_ROLE=runtime
      - ADMIN=false
    depends_on:
      - db
    links:
      - db:database

  db:
    image: mysql:latest
    restart: always
    environment:
      MYSQL_ROOT_PASSWORD: rootroot
    volumes:
      - ./mysql/mysql-create_database.sql:/docker-entrypoint-initdb.d/create-tables.sql

The Database section

The MySQL image was bootstrapped with a create-table script: the script found in $IDSVR_HOME/etc/mysql-create_database.sql. This script assumes that a database exists and is active. If this is not the case, it can be created automatically by adding the following to the beginning of the script:

Listing 73 Add create database to the MySQL script
CREATE database se_curity_store;
USE se_curity_store;
...

Depending on how the database is defined, the db section of the docker-compose.yml file should be updated to match (e.g., by adding MYSQL_DATABASE: se_curity_store to the environment, as sketched below). In this example the official MySQL image is used, and a new service named db is created that has the Curity database created and initialized.
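A sketch of the db section updated this way, using the official MySQL image’s MYSQL_DATABASE variable to create the database on startup:

db:
    image: mysql:latest
    restart: always
    environment:
      MYSQL_ROOT_PASSWORD: rootroot
      MYSQL_DATABASE: se_curity_store
    volumes:
      - ./mysql/mysql-create_database.sql:/docker-entrypoint-initdb.d/create-tables.sql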

Volumes

The docker-compose file above mounts a few volumes. This is of course optional. Common volumes to mount are:

  • Template overrides
  • Template areas (per client overrides)
  • Localization overrides
  • Static content

Other things that can be mounted include drivers and plugins, if these are not part of the original image; see the sketch below.
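For example, a hypothetical volume entry that mounts a custom plugin group into a service (the my-plugin-group name is illustrative; Curity loads plugins from subdirectories of $IDSVR_HOME/usr/share/plugins):

volumes:
  - ./plugins/my-plugin-group:/opt/idsvr/usr/share/plugins/my-plugin-group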

Starting the containers

To run the containers, simply start docker-compose:

Listing 74 Running with docker-compose
$ docker-compose up -d
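To follow the logs or take the environment down again:

$ docker-compose logs -f admin
$ docker-compose down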