This page provides instructions for deploying the SCM application in a production environment with high availability. Running services in production requires handling scale and related concerns. This guide details the minimum setup required for a production environment.

Prerequisites

Setup Instructions

  • Copy the contents of the docker-compose folder to the /etc directory of the VMs

  • Run the following command to start MariaDB

    $ docker-compose up -d mariadb
    

Zookeeper setup

  • Update the ZOO_SERVERS variable in the docker-compose.yml file
  • Start ZooKeeper with the following command
    $ docker-compose up -d zookeeper
    
  • If ZooKeeper is deployed on multiple servers to form a cluster, start the service on each host individually, e.g.:
    $ docker-compose up -d zoo1
    
  • zoo1 runs on host1, zoo2 on host2, and so on
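As a sketch, for a three-node ensemble the ZOO_SERVERS variable in docker-compose.yml could look like the excerpt below. The hostnames host1..host3 are placeholders, and the value format follows the official zookeeper Docker image convention, which may differ from the compose file shipped with this package:

```yaml
# docker-compose.yml (excerpt) -- hypothetical three-node ensemble
zoo1:
  environment:
    # Each node needs a unique id matching its server.N entry
    - ZOO_MY_ID=1
    - ZOO_SERVERS=server.1=host1:2888:3888 server.2=host2:2888:3888 server.3=host3:2888:3888
```

Repeat with ZOO_MY_ID=2 on host2 and ZOO_MY_ID=3 on host3.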

ActiveMQ setup

  • If ActiveMQ is deployed on multiple hosts, edit the /etc/docker-compose/activemq/conf/activemq.xml file, uncomment the following block, and provide the respective ZooKeeper hostnames
    <!--
          <persistenceAdapter>
              <replicatedLevelDB directory="${activemq.data}/"
                                 replicas="2"
                                 bind="tcp://0.0.0.0:0"
                                 zkAddress="host1:2181,host2:2181,host3:2181"
                                 zkPath="/activemq/leveldb-stores"
                                 zkSessionTimeout="20s"
                                 hostname="host1"/>
          </persistenceAdapter> -->
    
  • Start the ActiveMQ service as follows
    $ docker-compose up -d activemq
    

Redis setup

  • Start Redis and Sentinel on all servers
    $ docker-compose up -d redis
    $ docker-compose up -d sentinel
    
  • When using a Redis cluster, only one Redis instance should be the master and the rest should be slaves. To make a Redis instance a slave, run these commands (SLAVEOF is entered at the redis-cli prompt)
    $ docker exec -it redis bash
    $ redis-cli
    > SLAVEOF <master host> 6379
    > exit
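
Sentinel discovers the master from its configuration file. A minimal sketch of the relevant directives is shown below; the master name mymaster, the quorum of 2, and the timeouts are assumptions to adjust for your cluster:

```
# sentinel.conf (excerpt) -- hypothetical values, adjust for your deployment
sentinel monitor mymaster <master host> 6379 2
sentinel down-after-milliseconds mymaster 5000
sentinel failover-timeout mymaster 60000
```

With a quorum of 2, at least two Sentinels must agree that the master is down before a failover is triggered.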
    

Hadoop setup

  • Start Hadoop by running this command
    $ docker-compose up -d hadoop
    
  • Format the Hadoop NameNode
    $ docker exec -it hadoop bash
    $ /usr/local/hadoop-2.3.0-cdh5.1.3/bin/hadoop namenode -format
    $ exit
    
  • Restart Hadoop and make sure the NameNode has started
    $ docker restart hadoop
    $ docker exec -it hadoop jps
    

Callisto setup

Callisto requires the schema file to be loaded into Mongo. Follow the instructions below.

$ cd scm/schema
  • Install Node.js; if it is already installed, skip this step.
    $ yum install nodejs
    
  • Install the Mongo client
    $ npm install mongodb@2.2.33
    
  • Execute the callisto-schema Node.js application
    $ nodejs callisto-schema.js -u
    

Nginx setup

  • Edit the /etc/docker-compose/nginx/nginx.conf file and change mydomain.com to the respective domain/hostname in multiple places in the file
  • Update each proxy_pass property to point to the correct service name and port
    proxy_pass http://<hostname>:3000;             - mobile api service
    proxy_pass http://<hostname>:<port>;           - asset monitoring service
    proxy_pass http://<hostname>:<port>;           - logistimo service
    proxy_pass http://<hostname>:14000/webhdfs/v1; - hadoop service
  • Go to /etc/docker-compose directory
  • Change the domain name in the docker-compose.yml file for the following variable
    - LOGI_HOST=mydomain.com
  • Add the following variable after - ORIGINS=* in the docker-compose.yml file
    - ROOT_HOST_NAME=mydomain.com
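
Taken together, the environment block might look like the sketch below. The variable names and the ORIGINS value come from this page; that they live under a service named logistimo is an assumption, and mydomain.com is a placeholder:

```yaml
# docker-compose.yml (excerpt) -- a sketch; replace mydomain.com with your domain
logistimo:
  environment:
    - LOGI_HOST=mydomain.com
    - ORIGINS=*
    - ROOT_HOST_NAME=mydomain.com
```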

Deployment using Docker Compose

  • Update the environment variables in the docker-compose.yml file to point to the right values.
  • Deploy the microservices with the below command
    docker-compose up -d <service-name>
    
  • Use the following service names when deploying the microservices
    • locations
    • approvals
    • mapi
    • collaboration
    • communication
    • mongo
    • callisto
    • ams
    • logistimo
    • nginx
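
The per-service commands above can be scripted. The sketch below is a dry run that only prints the commands; the ordering, with the data stores first and nginx last, is an assumed dependency order, not one stated by this guide:

```shell
# Dry-run sketch: print one docker-compose command per microservice.
# The order (mongo first, nginx last) is an assumed dependency order.
SERVICES="mongo locations approvals mapi collaboration communication callisto ams logistimo nginx"
for svc in $SERVICES; do
  echo docker-compose up -d "$svc"    # remove 'echo' to actually deploy
done
```

Remove the echo once you have confirmed the printed commands match your environment.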

Deployment using Kubernetes Helm

  • Download the Logistimo production helm package (helm-prod.zip), decompress it, and go to the helm folder. Update the values.yaml file (~/helm-prod/logistimo/values.yaml) with the Docker image versions and the respective values for all the variables mentioned in the file.

  • Install Logistimo

$ cd  helm-prod
$ helm install logistimo --name logistimo

The above command installs the complete SCM package with all the dependent services. Verify the release with the following command

$ helm ls

The installation will take a few minutes. You can check the status of the pods using the following command

$ kubectl get pods

Once all the services are up and running, get the NodePort of the nginx service to access the Logistimo UI.

To get the NodePort, run the following command

$ kubectl get svc

Nginx exposes port 80 inside the container, which is mapped to NodePort 30006 on the host. Use port 30006 to connect to the UI. If you use a local Kubernetes setup, do the following:

Add the entry below to the /etc/hosts file to access the webapp

127.0.0.1 localhost prod.logistimo.com prod-ams.logistimo.com prod-mapi.logistimo.com

If you use a remote Kubernetes cluster, get the public IP or load balancer IP of the cluster and replace localhost with the respective IP in the /etc/hosts file.

Now you can access the web app using the following URL in the browser: http://prod.logistimo.com:30006

Default accounts

Username        Password  Role
superuser       admin     Super user
country_admin   admin     Administrator
state_admin     admin     Administrator
state_mngr      admin     Manager
state_opr       admin     Operator