Tuesday, July 23, 2019

How to dockerize a spring boot application


Spring Boot helps us create applications very quickly. Docker provides a way to "build, ship and run" our applications. In a world of microservices, combining the two gives us a powerful way to create and distribute our Java applications.

I assume you have basic Spring Boot and Docker knowledge and want to study how to dockerize Spring Boot applications.


Let's create a simple rest service and dockerize it.

Step 1: Create Spring boot application

Go to https://start.spring.io and fill in the project details as you like. Then add Spring Web Starter as a dependency. This is enough for us to create a simple REST service. Download and extract the application, then add the controller below (or anything you like).
package com.slmanju.springbootdocker;

import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class HomeController {

    @GetMapping(value = { "", "/" })
    public String index() {
        return "Welcome to spring boot docker integration";
    }

    @GetMapping(value = "/hello")
    public String hello() {
        return "Hello world";
    }

}

Update the application.properties with port number.
server.port = 8081
Create another property file named application-container.properties and update it as below. This demonstrates how to use Spring profiles inside a Docker container.
server.port = 8080

You can test the application by running:
mvn clean spring-boot:run
curl http://localhost:8081/

Similarly, we can build and run the jar file directly. This is what we need in our Dockerfile.
mvn clean package
java -jar target/spring-boot-docker-0.0.1-SNAPSHOT.jar

Step 2: Create Dockerfile

I have used Maven as my build tool. Maven puts the build output under target/; for Gradle it is build/libs.
FROM openjdk:8-jdk-alpine
ADD target/spring-boot-docker-0.0.1-SNAPSHOT.jar app.jar
EXPOSE 8080
ENTRYPOINT ["java", "-jar", "-Dspring.profiles.active=container", "app.jar"]

What is this file doing?

  • Get the openjdk alpine base image.
  • Add our Spring Boot application as app.jar.
  • Expose port 8080 (the port we used in application-container.properties).
  • Set the entrypoint for the application. I'm passing the Spring profile to match the exposed port.

Step 3: Create a docker image using our Dockerfile

To create the docker image we need to create jar file first.
mvn clean package
Now we can create the docker image using that jar file.
docker build -f Dockerfile -t hello-docker .

Step 4: Verify image

Verify whether your image is created.
docker image ls

Step 5: Create a container

Our docker image is created. We can create a container using it now.
docker container run -p 8082:8080 hello-docker

Step 6: Test it

Open your web browser and go to localhost, or use cURL:
curl http://localhost:8082/
curl http://localhost:8082/hello

Great! We have containerized our spring boot application.

There is more. We can improve this.

Unpack the generated fat jar into target/extracted (or your desired location).
mkdir target/extracted
cd target/extracted
jar -xf ../*.jar

As you can see, the Spring Boot fat jar is packaged in layers that separate external dependencies from our classes. As we know, Docker images are also layered. We can use this to improve our containers.

External dependencies will not change often, so we can add them as the first layer. We can add META-INF into another layer. We add our classes as the last layer, since they will change over time. Why is this important?

Docker caches layers. Since our lib (and META-INF) layers do not change very often, our builds will be faster. Also, when we push and pull our image with Docker Hub, that will be faster too, since only the changed classes layer needs to be transferred.
Also, in the Dockerfile I have hard-coded the main class to speed up the startup.
FROM openjdk:8-jdk-alpine
ARG APP=target/extracted
COPY ${APP}/BOOT-INF/lib /app/lib
COPY ${APP}/BOOT-INF/classes /app
ENTRYPOINT ["java", "-cp", "app:app/lib/*", "-Dspring.profiles.active=container", "com.slmanju.springbootdocker.SpringBootDockerApplication"]

Now rebuild the image, create a container (docker container run -p 8082:8080 hello-docker), and test it.


In this article I have discussed how to containerize a Spring Boot application. There are Maven and Gradle plugins to help you with this, but creating containers this way is better for learning. If, like me, you want to explore Docker (and Spring Boot), this will help you get started. So what are you waiting for? Go and create your container!



Monday, June 24, 2019

How to use Spring Boot with MySQL database


Spring Framework simplifies working with databases by auto-configuring connections, handling transactions, integrating ORM tools like Hibernate, and abstracting SQL away with Spring Data repositories. Here we are going to focus on how to connect a Spring Boot application to a MySQL database.

Spring Boot has many sensible defaults. For databases, the H2 in-memory database is the default; Spring Boot auto-configures it even without a connection URL. That is good for simple testing, but for production use we need a database like MySQL.

Spring Boot selects the HikariCP DataSource due to its performance. When the spring-boot-starter-data-jpa dependency is on the classpath, it automatically picks HikariCP.

The complete source code for this blog post is here.

How to configure a database

Obviously, to use a database in our application we need:
  • A database driver to connect to the database
  • A connection URL
  • A database username and password
In a Spring Boot application we need to provide at least the connection URL; otherwise it will try to configure an in-memory database. From the connection URL Spring Boot can deduce the database driver to use, so we do not need to configure the driver explicitly.

To configure the above properties, Spring Boot externalizes configuration under spring.datasource.*.
spring.datasource.url = jdbc:mysql://localhost/test
spring.datasource.username = dbuser
spring.datasource.password = dbpassword
# spring.datasource.driver-class-name = com.mysql.jdbc.Driver (not needed; deduced from the URL)

If we need more fine tuning we can use other configuration properties like spring.datasource.hikari.*.

How to auto-create a database

If we want the application to create the database schema for us, we can use the spring.jpa.hibernate.ddl-auto property. The default value is none for MySQL and create-drop for embedded databases.
spring.jpa.hibernate.ddl-auto = create

Additionally, if schema.sql (DDL) and data.sql (DML) files are in the resources folder, Spring Boot can pick them up and populate the database. We can change the default locations using the schema and data properties.
spring.datasource.initialization-mode = always
spring.datasource.schema = classpath:/database/schema.sql # Schema (DDL) script resource references.
spring.datasource.data = classpath:/database/data.sql # Data (DML) script resource references.

Using above knowledge let's create a simple application which connects to MySQL database.

Create our database

I like to create the database separately. Connect to MySQL and create our database.
> mysql -uroot -proot
> create database book_store;
> use book_store;
-- use schema.sql and data.sql to populate the database
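The schema.sql and data.sql contents are not shown here; as an illustrative sketch (the column names follow the Book entity used later, adjust them to your own model), they might look like:

```sql
-- schema.sql: DDL for the book table (illustrative)
CREATE TABLE IF NOT EXISTS book (
    id INT AUTO_INCREMENT PRIMARY KEY,
    title VARCHAR(255) NOT NULL,
    author VARCHAR(255) NOT NULL
);

-- data.sql: a seed row (illustrative)
INSERT INTO book (title, author) VALUES ('Java Persistence with Hibernate', 'Gavin King');
```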

Create the project

Go to https://start.spring.io and select spring-boot-starter-data-jpa, spring-boot-starter-web, mysql-connector-java and lombok dependencies.
Your pom.xml will look like this.




Configure the database

Create application.yml and add the configuration properties below.
server:
  port: 7070

spring:
  jpa:
    hibernate:
      ddl-auto: validate
  datasource:
    url: jdbc:mysql://localhost:3306/book_store
    username: root
    password: root

This is all we need to connect to MySQL database. Let's create our domain object and repository.
@Entity
@Data
public class Book implements Serializable {

    @Id
    @GeneratedValue(strategy = GenerationType.IDENTITY)
    private Integer id;
    private String title;
    private String author;

}

public interface BookRepository extends JpaRepository<Book, Integer> {
}

Service layer

@Service
public class BookServiceImpl implements BookService {

    private final BookRepository bookRepository;

    public BookServiceImpl(BookRepository bookRepository) {
        this.bookRepository = bookRepository;
    }

    @Override
    public List<Book> findAll() {
        return bookRepository.findAll();
    }

    @Override
    public Book findById(Integer id) {
        return bookRepository.findById(id).orElse(null);
    }

    @Override
    public Book save(Book book) {
        return bookRepository.save(book);
    }

    @Override
    public void delete(Integer id) {
        bookRepository.deleteById(id);
    }

    @Override
    public Book update(Book book) {
        return bookRepository.save(book);
    }

}


Rest controller layer

@RestController
public class BookController {

    private final BookService bookService;

    public BookController(BookService bookService) {
        this.bookService = bookService;
    }

    @GetMapping(value = "")
    public List<Book> findAll() {
        return bookService.findAll();
    }

    @GetMapping(value = "/{id}")
    public Book findById(@PathVariable Integer id) {
        return bookService.findById(id);
    }

    @PostMapping
    public Book save(@RequestBody Book book) {
        return bookService.save(book);
    }

    @DeleteMapping(value = "/{id}")
    public void delete(@PathVariable Integer id) {
        bookService.delete(id);
    }

    @PutMapping
    public Book update(@RequestBody Book book) {
        return bookService.save(book);
    }

}


Now start the application and try below cURL commands.
curl -X GET http://localhost:7070/

curl -X POST \
  http://localhost:7070/ \
  -H 'Content-Type: application/json' \
  -d '{
    "title": "Java Persistence with Hibernate",
    "author": "Gavin King"
  }'

curl -X GET http://localhost:7070/1

Those are the basics you need to know when working with a relational database in Spring Boot.



Sunday, June 23, 2019

Microservices - Distributed tracing with Spring Cloud Sleuth and Zipkin


Microservices are very flexible. We can have multiple microservices, one per domain, that interact as necessary. But this comes with a price: it becomes very complex when the number of microservices grows.
Imagine a situation where you have found a bug or slowness in the system. How do you find the root cause by examining logs?

  • Collect all the logs from related microservices.
  • Pick the starting microservice and find a clue there using some id (userid, businessid, etc).
  • Pick the next microservice and check whether the previous information is there.
  • Keep going until you find which microservice has the bug.

I have followed that practice in one of my previous projects. It is very difficult and takes a lot of time to track down an issue.
This is why we need distributed tracing in microservices: one place where we can go and see the entire trace.

It helps us by:
  • Assigning a unique id (correlation id) to every request.
  • Passing that unique id across all the microservices automatically.
  • Recording timing information.
  • Logging the service name, unique id, and span id.
  • Aggregating log data from multiple microservices into a single source.

Spring Cloud Sleuth

Spring Cloud Sleuth implements a distributed tracing solution for Spring Cloud. We can capture data simply using logs or send data to a collector service like Zipkin. 
Just by adding the library to our project, Spring Cloud Sleuth can:
  • Add a correlation id to every request if one doesn't exist.
  • Pass the id along with outbound calls.
  • Add correlation information to Spring's Mapped Diagnostic Context (MDC), which internally uses the SLF4J and Logback implementations.
  • Pass the log information to a collector service, if one is configured.

Adding Spring Cloud Sleuth

This is very simple. We need to update our pom.xml files to include the Sleuth dependency. Let's update api-gateway, service-a and service-b pom files with this.
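A typical Sleuth starter entry (the version is managed by the Spring Cloud BOM, so none is specified here) looks like:

```xml
<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-starter-sleuth</artifactId>
</dependency>
```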

Now re-start applications and visit http://localhost:7060/api/service-a/

Look at the logs. In service-a you will be able to see;
2019-06-23 14:45:41.024  INFO [service-a,50937d2183890546,fc6079712896add8,false] 15445 --- [io-7000-exec-10] c.s.s.controller.MessageController       : get message

In service-b
2019-06-23 14:45:41.033  INFO [service-b,50937d2183890546,260506a7161eca33,false] 15654 --- [io-7005-exec-10] c.s.s.controller.MessageController       : serving message from b

You can see the logs follow the [service_name, traceId, spanId, exportable] format. Both logs have the same correlation id (traceId) printed. Exportable is false because we haven't added our tracing server yet.
Let's add it.

Zipkin Server

Zipkin is a distributed tracing system. It helps gather timing data needed to troubleshoot latency problems in microservice architectures.

We used Sleuth to add tracing information in our logs. Now we are going to use Zipkin to visualize it.

Zipkin server as a Docker container

We can find the docker command from the official site to run Zipkin server as a docker container.
docker container run -d -p 9411:9411 openzipkin/zipkin

Now our Zipkin server is available at http://localhost:9411/zipkin/

Let's add Zipkin dependency to api-gateway, service-a and service-b.
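A typical Zipkin client starter entry for the 2019-era Spring Cloud release trains (version managed by the Spring Cloud BOM) looks like:

```xml
<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-starter-zipkin</artifactId>
</dependency>
```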


Then we need to specify where to send our tracing data. Update application.yml in api-gateway, service-a and service-b as below (shown here for service-a).
spring:
  application:
    name: service-a
  zipkin:
    baseUrl: http://localhost:9411/
  sleuth:
    sampler:
      probability: 1.0

Re-start applications and visit http://localhost:7060/api/service-a/
You will be able to see exportable as true this time.

Now visit http://localhost:9411/zipkin/ and click on 'Find Traces'. You will be able to see the tracing information; click on a trace to see its details.

Now if you click on a service name it will give you more information like below.

That is it for now. You have your base tracing module to play with.



Saturday, June 22, 2019

Cracking Java8 Stream Interview Question


I have faced many interviews (too many, to be frank) in my career as a software developer. In most of those interviews I have been asked to write pseudo code for a given problem and then implement it in Java; these days, that implementation should mostly be in Java 8.
When I look back, all those questions can be simplified as below (the difficulty may vary, though).

  • Iterate over a given collection (stream)
  • Filter the given data (filter)
  • Transform into another format (map, reduce)
  • Collect data into a collection (collect)
  • or End the stream (forEach, min, etc)
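The five steps above can be sketched as a single pipeline; the data here is made up for illustration:

```java
import java.util.Arrays;
import java.util.List;
import java.util.stream.Collectors;

public class StreamPipelineDemo {

    public static void main(String[] args) {
        List<String> words = Arrays.asList("spring", "docker", "java", "kafka");

        List<String> result = words.stream()              // iterate over the collection
                .filter(word -> word.length() > 5)        // keep only matching elements
                .map(String::toUpperCase)                 // transform each element
                .collect(Collectors.toList());            // collect into a new list

        System.out.println(result); // [SPRING, DOCKER]
    }
}
```

Swap the filter, the map, and the terminal operation and you have the template for most of the questions below.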

Is this familiar to you? It should be; this is what we do in our daily work. But if you are using it blindly, you will find such questions difficult to answer.
You need a good understanding of intermediate and terminal operations, and you need to really practise them in your daily work. It is meaningless to pass an interview without knowing these.

Let's dive into some real questions.

Find the youngest male student in a given list

Let's divide this into smaller parts.
- youngest -> min
- male -> filter
- list -> iterate
Student youngestMale = students.stream()
        .filter(student -> student.gender.equals("male"))
        .min((s1, s2) -> s1.age - s2.age) // or .min(Comparator.comparingInt(s -> s.age))
        .orElse(null);

Find all numbers divisible by 3

This is an easy question, but you need to remember IntStream. Also, if you are asked to collect the result into a list, you need to remember to convert int to Integer; for this, IntStream has the boxed operation. Let's break it down.
- divisible -> filter
IntStream.rangeClosed(0, 25)
        .filter(number -> number % 3 == 0)
        .forEach(System.out::println);

// collect the result
List<Integer> result = IntStream.rangeClosed(0, 25)
        .filter(number -> number % 3 == 0)
        .boxed()
        .collect(Collectors.toList());

Find the sum of even number's power of two

This one has multiple answers. It gets a little tricky when you are asked not to use the sum operation, but we still have the same format.
- even numbers -> filter
- power of two -> map
- sum -> sum (doesn't exist in Stream but in IntStream)
- power of two and sum -> reduce
// method 1
int sum = IntStream.rangeClosed(0, 5)
        .filter(number -> number % 2 == 0)
        .map(number -> number * number)
        .sum();

List<Integer> numbers = Arrays.asList(1, 2, 3, 4, 5);

// method 2
int sum2 = numbers.stream()
        .filter(number -> number % 2 == 0)
        .mapToInt(number -> number * number) // convert to IntStream
        .sum();

// method 3
int sum3 = numbers.stream()
        .filter(i -> i % 2 == 0)
        .reduce(0, (result, number) -> result + number * number);

Find the average marks of a student

This looks difficult at first but is very simple, because IntStream has an average operation. We need to convert the Stream into an IntStream because:
- autoboxing has a performance impact, and
- the sum and average operations do not exist on the normal Stream.
OptionalDouble average = subjects.stream()
        .mapToInt(subject -> subject.marks) // assuming each subject has an int marks field
        .average();

What do you think? Do you have any interesting interview questions?

Friday, June 21, 2019

Angular - Let's create a starter project


Angular has many things to learn, but we can skip most of them and directly create a simple project; trying to study all the features first is the harder path. So here, I'm going to create a simple starter project directly.

This is the end result of this tutorial.

Install Angular CLI

npm install -g @angular/cli
ng version

Create a project

ng new angular-store --routing --style=scss
cd angular-store
npm install

This will create a basic app structure.

Run the project

ng serve --open

# short form
ng s -o

With zero code, we have a running template. That's some power.

Angular Material

Let's add material UI design into our project. For more information visit https://material.angular.io/
The command below will add Angular Material to our project and update the necessary files.
ng add @angular/material

We want to use Material Design components in our project. To keep the Material components in a single place, let's create a separate module named material and update it with the necessary Material modules.
ng generate module material --flat

Now, we need to update app.module.ts to import our module.
import { MaterialModule } from './material.module';

  imports: [
    // ... existing imports
    MaterialModule
  ],

Making responsive UI

I'm going to use the Bootstrap CSS grid to create a responsive user interface.
npm install --save bootstrap

Update the styles section of angular.json to include the Bootstrap grid.
"styles": [
  "node_modules/bootstrap/dist/css/bootstrap-grid.min.css",
  "src/styles.scss"
],

Create app components

ng generate component home --module=app --spec=false

# short form
ng g c home --module=app --spec=false
ng g c about --module=app --spec=false

Adding a menu

Update app.component.html with below code. You can visit https://material.angular.io/components/toolbar/overview for more information.
<mat-toolbar color="primary">
  <span>Angular Store</span>
  <span class="spacer"></span>
  <button mat-button>Home</button>
  <button mat-button>About</button>
</mat-toolbar>


We need to import related modules in our material module.
import { MatButtonModule } from '@angular/material/button';
import { MatToolbarModule } from '@angular/material/toolbar';

Adding routes

When we create the project with --routing option, app-routing.module.ts is already created. What we need to do is specify our routes.
import { HomeComponent } from './home/home.component';
import { AboutComponent } from './about/about.component';

const routes: Routes = [
  {
    path: '',
    component: HomeComponent
  },
  {
    path: 'about',
    component: AboutComponent
  }
];
Then we need to update our menu bar to use our routes.
  <button mat-button [routerLink]="['/']">Home</button>
  <button mat-button [routerLink]="['/about']">About</button>

Showing products

This will just be a hard-coded loop. We are going to use a Material card to show our content: https://material.angular.io/components/card/overview
Update home.component.ts with the variable below.
items = new Array(10);

Update home.component.html with below content.
<div class="row">
  <div class="col-xs-12 col-sm-6 col-md-4 col-lg-3 mt-10" *ngFor="let item of items">
    <mat-card>
      <img mat-card-image src="/assets/icon.png" alt="" />
      <mat-card-title>Lorem, ipsum dolor.</mat-card-title>
      <p>Lorem ipsum, dolor sit amet consectetur adipisicing elit. Aliquid, sapiente?</p>
      <button mat-button color="primary">Add To Cart</button>
      <button mat-button color="primary">Read More</button>
    </mat-card>
  </div>
</div>

Update styles.scss with mt-10 class.
.mt-10 {
    margin-top: 10px;
}

That's the end of it. Even though we didn't dig deeper into Angular, this will give you something to play with.

Wednesday, June 12, 2019

Microservices - Api Gateway


I was too busy over the past couple of days and lost track of continuing the microservices series. So I decided to add simple dummy services, service-a and service-b. Other than that, there is no change except the GitHub repo.


In a microservices architecture we deal with many APIs, which work alone or together with other services. Imagine that we are going to create a mobile client on top of them: it will be very difficult for that client to manage all the services. Or imagine we expose our APIs so that anyone can create their own client.
This is a good place to introduce another service which acts as a gateway to all the other services. Third-party clients will only know about this API.
Not only that, we can solve several other problems using an API gateway.

  • A single entry point to all the services.
  • A common place to log requests and responses.
  • Authenticate users in a single place.
  • Apply rate limits.
  • Add common filters.
  • Hide internal services.

To create our API gateway, we are going to use Netflix Zuul.
Just like with other Spring Boot libraries, this is just a matter of adding the library and the related configuration.

First of all let's generate the project by adding necessary dependencies.

You can add this dependency block to the Maven pom.
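A typical Zuul starter entry (the version is managed by the spring-cloud-dependencies BOM, so none is specified here) looks like:

```xml
<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-starter-netflix-zuul</artifactId>
</dependency>
```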

Then we need to mark this service as our API gateway by adding @EnableZuulProxy.
@EnableZuulProxy
@SpringBootApplication
public class ApiGatewayApplication {

    public static void main(String[] args) {
        SpringApplication.run(ApiGatewayApplication.class, args);
    }

}

Now our gateway is ready. Let's add the port and the service name in the application yaml file.
server:
  port: 7060

spring:
  application:
    name: api-gateway

Then we need to tell it where the discovery service is by adding the Eureka properties.
eureka:
  client:
    registerWithEureka: true
    fetchRegistry: true
    serviceUrl:
      defaultZone: http://localhost:7050/eureka/

If we start our services now, other services are available by their service names.

Let's override this and add our own name.
zuul:
  ignored-services: "*"
  routes:
    service-a:
      path: /api/service-a/*
      serviceId: service-a

Now our service is available at http://localhost:7060/api/service-a/

Wednesday, May 1, 2019

Kafka - Spring Boot


In this blog post I'm going to explain how to integrate Kafka with Spring Boot. We use Spring Boot configuration to send a Kafka message in String format and consume it. Let's begin.
(The complete example can be found here.)

Starting up Kafka

First of all we need a running Kafka cluster. For this I'm using the Landoop docker image.
Here is the docker command to run the Landoop docker container.
docker container run --rm -it \
-p 2181:2181 -p 3030:3030 -p 8081:8081 \
-p 8082:8082 -p 8083:8083 -p 9092:9092 \
-e ADV_HOST= \
landoop/fast-data-dev

Generate the application

I'm using the IntelliJ IDEA IDE to generate the Spring Boot application. I have selected the web, lombok and kafka dependencies.
Let's rename application.properties to application.yml to use the YAML format.
Here are my application.yml configuration values.
server:
  port: 9000

spring:
  kafka:
    producer:
      bootstrap-servers: localhost:9092
      key-serializer: org.apache.kafka.common.serialization.StringSerializer
      value-serializer: org.apache.kafka.common.serialization.StringSerializer
    consumer:
      bootstrap-servers: localhost:9092
      group-id: test-id
      auto-offset-reset: earliest
      key-deserializer: org.apache.kafka.common.serialization.StringDeserializer
      value-deserializer: org.apache.kafka.common.serialization.StringDeserializer
It is pretty simple. I have specified the running Kafka instance URL in bootstrap-servers. For the key and value serializers I use the built-in StringSerializer, and to deserialize messages I use the StringDeserializer provided by Kafka.
  • bootstrap-servers - the Kafka server instance.
  • kafka.consumer.group-id - the consumer group id which will be used by consumers.
  • kafka.consumer.auto-offset-reset - consumers will start reading from the earliest available message when there is no existing committed offset for that group.

Kafka Configuration

We have already configured the basic properties. In addition, we are going to create our topic.
@Configuration
public class KafkaConfiguration {

    public static final String TOPIC_NAME = "kafka-spring";

    @Bean
    public NewTopic topic() {
        return new NewTopic(TOPIC_NAME, 3, (short) 1);
    }

}
Kafka's AdminClient bean is already in the context. It will create a topic from the NewTopic instance, to which we have given kafka-spring as the topic name, 3 partitions, and a replication factor of 1.

Produce Messages

Spring provides the easy-to-use KafkaTemplate to send messages to Kafka. We only need to provide the topic name and our message.

@Service
public class KafkaMessageProducer {

    private static final Logger LOGGER = LoggerFactory.getLogger(KafkaMessageProducer.class);

    private final KafkaTemplate<String, String> kafkaTemplate;

    public KafkaMessageProducer(KafkaTemplate<String, String> kafkaTemplate) {
        this.kafkaTemplate = kafkaTemplate;
    }

    public void send(String message) {
        LOGGER.info(String.format(":: Produce Message :: %s", message));
        kafkaTemplate.send(TOPIC_NAME, message);
    }

}

Consume Messages

With Spring's @KafkaListener, we can easily consume messages by specifying the topic name and group id.

@Service
public class KafkaMessageConsumer {

    private static final Logger LOGGER = LoggerFactory.getLogger(KafkaMessageConsumer.class);

    @KafkaListener(topics = TOPIC_NAME, groupId = "test-id")
    public void consume(String message) {
        LOGGER.info(String.format(":: Consume Message :: %s", message));
    }

}

Test it

Now we're all set. Let's create a simple endpoint to send a few messages.

@RestController
public class MessageController {

    private final MessageService messageService;

    public MessageController(MessageService messageService) {
        this.messageService = messageService;
    }

    @PostMapping("/send")
    public void sendMessage(@RequestBody Message message) {
        messageService.send(message.getText()); // the service delegates to KafkaMessageProducer
    }

}


curl -X POST \
  http://localhost:9000/send \
  -H 'Content-Type: application/json' \
  -d '{ "text": "Hello Kafka" }'

Look at the console. You should see something like this:

2019-05-01 22:09:28.062  INFO 6045 --- [nio-9000-exec-4] c.s.k.message.KafkaMessageProducer       : :: Produce Message :: Hello World
2019-05-01 22:09:28.069  INFO 6045 --- [ntainer#0-0-C-1] c.s.k.message.KafkaMessageConsumer       : :: Consume Message :: Hello World

Navigate to the Landoop web UI and select Topics, where you can see our topic.



Monday, April 22, 2019

Docker - Dockerfile


We have used Docker images to create containers multiple times, using images from Docker Hub. Ever wondered how to create a Docker image yourself? Docker can build images automatically by reading the instructions from a Dockerfile.


A Dockerfile is a text document that contains all the commands a user could call on the command line to assemble an image. Think of it as a shell script: it gathers multiple commands into a single document to fulfill a single task.
The build command is used to create an image from a Dockerfile.

$ docker build .
You can also name (tag) your image:
$ docker build -t my-image .

Let's first look at a Dockerfile and discuss those commands.
This one is extracted from the official MySQL Dockerfile.

FROM debian:stretch-slim

# add our user and group first to make sure their IDs get assigned consistently, regardless of whatever dependencies get added
RUN groupadd -r mysql && useradd -r -g mysql mysql

RUN apt-get update && apt-get install -y --no-install-recommends gnupg dirmngr && rm -rf /var/lib/apt/lists/*

RUN mkdir /docker-entrypoint-initdb.d

ENV MYSQL_VERSION 8.0.15-1debian9

VOLUME /var/lib/mysql
# Config files
COPY config/ /etc/mysql/
COPY docker-entrypoint.sh /usr/local/bin/
RUN ln -s usr/local/bin/docker-entrypoint.sh /entrypoint.sh # backwards compat
ENTRYPOINT ["docker-entrypoint.sh"]

EXPOSE 3306 33060
CMD ["mysqld"]

As you can see, this mirrors how you would install MySQL on a Linux machine. First we select our OS and install the necessary software, then configure the environment. All those instructions are added to the Dockerfile using Docker-specific commands.

Dockerfile Commands

  • FROM - specifies the base (parent) image.
  • RUN - runs a Linux command. Used to install packages into the image, create folders, etc.
  • ENV - sets an environment variable.
  • COPY - copies files and directories into the image.
  • EXPOSE - documents the ports the application listens on.
  • ENTRYPOINT - provides the command and arguments for an executing container.
  • CMD - provides default command and arguments for an executing container. There can be only one CMD.
  • VOLUME - creates a directory mount point to access and store persistent data.
  • WORKDIR - sets the working directory for the instructions that follow.
  • LABEL - provides metadata, like the maintainer.
  • ADD - copies files and directories into the image. Can also unpack compressed files.
  • ARG - defines a build-time variable.


COPY vs ADD

Both commands serve a similar purpose: they copy files into the image.
COPY lets you copy files and directories from the host build context.
ADD does the same; additionally, it can fetch from a URL and auto-extract local compressed archives into the image.
The Docker documentation recommends using COPY.
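A quick sketch of the difference (the file names are made up):

```dockerfile
# COPY: a plain copy from the build context
COPY config/ /etc/app/

# ADD: like COPY, but a local tar archive is extracted automatically
ADD app.tar.gz /opt/app/
```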


CMD vs ENTRYPOINT

CMD - allows you to set a default command which will be executed only when you run the container without specifying a command. If the container is run with a command, the default is ignored.
ENTRYPOINT - allows you to configure a container that will run as an executable. The ENTRYPOINT command and parameters are not ignored when the container is run with command-line arguments; those arguments are appended instead.
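A tiny example makes the interaction concrete (the image tag demo is hypothetical):

```dockerfile
FROM alpine
ENTRYPOINT ["echo", "Hello"]
CMD ["world"]
```

Running docker run demo prints "Hello world". Running docker run demo docker prints "Hello docker": the extra argument replaced CMD, while ENTRYPOINT stayed in place.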


VOLUME

You declare VOLUME in your Dockerfile to denote where your container will write application data. When you run the container with -v, you can specify its mount point.

Monday, April 8, 2019

Docker - Networking


When we talked about Docker, we said that containers are isolated. Then how do we communicate with our containers? Say we are running a MySQL database; it is not useful if we can't access it.

Docker has a network concept with several network drivers to work with. Depending on how we want our container to behave, we can select a network. This lets a container communicate with other containers or with the host.

Network commands summary

  • docker network ls - list available networks
  • docker network create - create a network
  • docker network rm - remove a network
  • docker network inspect - inspect a network
  • docker network connect - connect container to a network
  • docker network disconnect - disconnect container from a network

Docker network drivers

  • bridge - the default network driver. When the Docker daemon starts, it configures a virtual bridge named docker0. When we don't specify a network, this is the one Docker uses: it creates a private network inside the host which allows containers to communicate with each other.
  • host - tells Docker to use the host computer's network directly.
  • none - disables networking for the container.

Network commands

Just like other Docker commands, network commands follow the same pattern.
docker network

Let's list the available network commands.
docker network help

Inspecting a network

Use the inspect command to inspect a Docker network.
docker network inspect bridge

Create a network

We can create our own network using create command.
docker network create mynetwork

Docker prints the id of the created network. Use the inspect command to see its properties. You will see that it used bridge as the driver, since we didn't specify one; we can specify a driver with the -d option.

Remove a network

We can use rm command to remove a network.
docker network rm mynetwork

Connect to a network

By default our containers connect to bridge network. To use another network, we can use --net option when we create the container.
docker container run -it --net=mynetwork nginx

Connect with the world

Now we need to use our containers from the host. There is no point in isolating a container if we can't reach it at all.

We can get the exposed port of an image by inspecting it. Run the inspect command and look for the ExposedPorts entry.

$ docker image inspect nginx
            "ExposedPorts": {
                "80/tcp": {}
            },

We can use -p or --publish option to bind this port to the host's port when running an image.
$ docker container run -it -p 81:80 nginx
$ docker container run -it --net=mynetwork -p 81:80 nginx

Now we can access nginx at http://localhost:81 in our browser.
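We can also verify the mapping from the command line. A quick check, assuming an nginx container is running with -p 81:80 as above:

```shell
# Request the page through the published host port
curl -s http://localhost:81 | head -n 5
```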

We can get the container's port mappings using the port command.
docker port <container_name/id>

Now when we inspect the container, we can see that it has attached to host's port.
            "Ports": {
                "80/tcp": [
                    {
                        "HostIp": "",
                        "HostPort": "81"
                    }
                ]
            },

Docker - Volumes


Sharing is caring. Whether a container is running, stopped, or removed, we may still need to access the data within it. Be it a database or web application logs, a container often needs to share some form of data with the host or with other containers. Docker provides volumes to achieve this.

Volumes are the preferred mechanism for persisting data generated by and used by Docker containers.

Volume commands

  • docker volume create - create a volume
  • docker volume ls - list available volumes
  • docker volume rm - remove a volume
  • docker volume prune - remove all unused volumes
  • docker volume inspect - inspect a volume

Create a volume

docker volume create my-volume

List volumes

docker volume ls

Inspect a volume

docker volume inspect my-volume

Remove a volume

docker volume rm my-volume

Remove all unused volumes

docker volume prune

Start a container with a volume

We can start a container with a volume using the --mount or -v flag. As the docs note, new users should try the --mount syntax, which is simpler than --volume.
If the volume does not exist, Docker creates it for us.

docker container run -d \
  --name my-nginx \
  --mount source=my-volume,target=/app \
  nginx
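To see that the data outlives the container, we can write a file into the mounted path, remove the container, and mount the same volume into a new one. A sketch, assuming the my-nginx container above is running:

```shell
# Write a file into the volume through the first container
docker container exec my-nginx sh -c 'echo hello > /app/data.txt'

# Remove the container; the volume and its data remain
docker container rm -f my-nginx

# Mount the same volume into a fresh container and read the file back
docker container run --rm --mount source=my-volume,target=/app nginx cat /app/data.txt
```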



Docker - Images and Containers


Docker image to container(s)


An image is a read-only template with instructions for creating a Docker container. It is a combination of file system and parameters. Often, an image is based on another image with some additional customization.
We can use existing images or create our own images.


A container is a runnable instance of an image. We can create as many containers as we want from an image. A container is isolated from the host by default. We can modify its behavior using networks, volumes, etc.
Once a container is created, we can stop, restart, or remove it.
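For example, nothing stops us from running several independent containers from the same image side by side (container names and ports here are illustrative):

```shell
# Two containers from one image, each isolated from the other
docker container run -d --name web1 -p 8081:80 nginx
docker container run -d --name web2 -p 8082:80 nginx

# Both show up as separate running containers
docker container ls
```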

Download an Image

We can download a Docker image in two ways.
  • pull - we can use pull command to get an image
$ docker image pull nginx
  • create a container - when we create a container from an image, it downloads the image from the repository if it is not available in the host.
$ docker container run nginx

Docker command structure

There are many commands, so Docker groups them by object, following the pattern docker <object> <command>. For example:
docker container ls

Useful command summary

  • docker image ls - list available images
  • docker image rm - remove an image
  • docker image inspect - inspect an image
  • docker container run - run a container
  • docker container ls - list containers
  • docker container stop - stop a running container

Dry run

Let's test these commands with nginx.
$ docker container run --name my-nginx -p 80:80 nginx
$ docker image ls
$ docker container ls
Visit localhost in your browser. You should see nginx is running.
$ docker container inspect my-nginx
$ docker image inspect nginx

Now let's stop our container
$ docker container stop my-nginx
$ docker container ls
$ docker container ls -a

Let's start it again
$ docker container start my-nginx

Let's remove our container completely.
$ docker container stop my-nginx
$ docker container remove my-nginx

Let's remove image as well.
$ docker image remove nginx

Docker - Introduction


Let's talk about Docker container ;)

The Problem

Software packaging, distribution, and installation are not that easy. It is true that there are easy-to-use software packages, but normally software depends on other libraries. To install a piece of software we need to install those dependencies first. What if those libraries have other dependencies? What if there are version conflicts?

Let's see a picture of a software installation.

It is a web of libraries. Now imagine we need to uninstall our software. Will it remove its dependencies properly? Will that have an impact on other software? How do we install another version? What if we need to maintain multiple computers with this same setup?

There are many questions, and though we can somehow solve them, imagine the time and energy we spend on these. Is it worth it?

Let's say we want to install MySQL as the database. Why do we need to spend a lot of time on that when our main task is something else? These are the reasons we need to find another way to distribute software.

  • Difficult to install.
  • Hard to maintain.
  • Difficult to uninstall.
  • Difficult to test other versions.
  • Difficult to distribute.

What are the solutions

  • Virtualization: People use virtual machines to ship their software. While this solves most of the above problems, it has its own issues. It's a machine inside a machine, which wastes resources.
  • Containers: While containers look like virtual machines, they are not. Containers are isolated from the host system like virtual machines, but they share the host's resources, reducing duplication and giving a performance boost.

Software installation with Docker container


Docker is a command line program backed by a background daemon. Docker simplifies container creation: we only need to give a few instructions, and the Docker daemon handles all the heavy work for us.

Docker is a platform for developers and sysadmins to develop, deploy, and run applications with containers. The use of Linux containers to deploy applications is called containerization. Containers are not new, but their use for easily deploying applications is.


  • Flexible - Even the most complex applications can be containerized.
  • Lightweight - Containers leverage and share the host kernel.
  • Interchangeable - You can deploy updates and upgrades on-the-fly.
  • Portable - You can build locally, deploy to the cloud, and run anywhere.
  • Scalable - You can increase and automatically distribute container replicas.
  • Stackable - You can stack services vertically and on-the-fly.

Hello World

You can refer to the official guide for the Docker installation. For example, for Ubuntu: https://docs.docker.com/install/linux/docker-ce/ubuntu/.
Make sure Docker is running.

$ docker version
$ docker info
Docker officially provides a hello-world image. Let's run it.
$ docker container run hello-world

Monday, April 1, 2019

Kotlin - Control Flow - when


Just like if, when is also an expression. It has two forms.
  • With a value - behaves like a switch statement.
  • Without a value - behaves like an if-else-if chain.

when as a switch


private void dayOfWeek(int dayOfWeek) {
    switch (dayOfWeek) {
        case 1: System.out.println("Sunday"); break;
        case 2: System.out.println("Monday"); break;
        case 3: System.out.println("Tuesday"); break;
        case 4: System.out.println("Wednesday"); break;
        case 5: System.out.println("Thursday"); break;
        case 6: System.out.println("Friday"); break;
        case 7: System.out.println("Saturday"); break;
        default: System.out.println("Invalid day");
    }
}


private fun dayOfWeek(dayOfWeek: Int) {
    when (dayOfWeek) {
        1 -> println("Sunday")
        2 -> println("Monday")
        3 -> println("Tuesday")
        4 -> println("Wednesday")
        5 -> println("Thursday")
        6 -> println("Friday")
        7 -> println("Saturday")
        else -> println("Invalid Day")
    }
}

How pretty is that?

Combine multiple branches

private fun whatDay(dayOfWeek: Int) {
    when (dayOfWeek) {
        2, 3, 4, 5, 6 -> println("Weekday")
        1, 7 -> println("Weekend")
        else -> println("Invalid Day")
    }
}

Using in operator

private fun examResult(marks: Int) {
    when (marks) {
        in 0..59 -> println("You failed")
        in 60..100 -> println("You passed")
        else -> println("Invalid number")
    }
}

when as an if-else-if


private static void printType(int number) {
    if (number < 0) {
        System.out.println("Negative number");
    } else if (number % 2 == 0) {
        System.out.println("Even number");
    } else {
        System.out.println("Positive odd number");
    }
}


private fun printType(number: Int) {
    when {
        number < 0 -> println("Negative number")
        number % 2 == 0 -> println("Even number")
        else -> println("Positive odd number")
    }
}

when as an expression

We can return or assign the value of a when expression.
private fun racer(speed: Int): String {
    return when {
        speed in 0..24 -> "Beginner"
        speed in 25..30 -> "Intermediate"
        speed in 31..41 -> "Average"
        speed > 41 -> "Pro"
        else -> "Invalid speed"
    }
}
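As a further sketch, a when expression can also be the entire body of a function, returned directly (the function and labels here are illustrative, not from the original post):

```kotlin
// The value of the `when` expression is the function's return value.
fun grade(marks: Int): String = when {
    marks in 75..100 -> "A"
    marks in 50..74 -> "B"
    marks in 0..49 -> "F"
    else -> "Invalid marks"
}

fun main() {
    println(grade(80)) // A
    println(grade(40)) // F
}
```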

Kotlin - Control Flow - If


If is the most basic way to control flow in Kotlin. Unlike Java, in Kotlin if is an expression. That is, it returns a value.
  • Statement - a program instruction that returns no value. It can't be on the right side of the equals sign.
  • Expression - a program instruction that returns a value. It can be assigned to a variable.


private void findMax(int a, int b) {
    int max;
    if (a > b) {
        max = a;
    } else {
        max = b;
    }
    System.out.println("Max value is " + max);
}

Kotlin (traditional statement)

private fun findMax(a: Int, b: Int) {
    val max: Int
    if (a > b) {
        max = a
    } else {
        max = b
    }
    println("Max value is $max")
}

Kotlin (as an expression)

private fun findMax(a: Int, b: Int) {
    val max: Int = if (a > b) {
        a
    } else {
        b
    }
    println("Max value is $max")
}

Ternary Operator

Kotlin doesn't have a ternary operator, because the result of if-else can be assigned to a variable.

private void findMax2(int a, int b) {
    String result = (a > b) ? a + " is greater than " + b : b + " is greater than " + a;
    System.out.println(result);
}

private fun findMax2(a: Int, b: Int) {
    val result = if (a > b) "$a is greater than $b" else "$b is greater than $a"
    println(result)
}

Return it

private fun directReturn(age: Int): String {
    return if (age < 21) "You are a kid" else "You are an adult"
}

Sunday, March 31, 2019

Kotlin - Variables and Type Inference


Unlike Java, Kotlin uses special keywords to declare variables.
  • var - for the values that change, mutable.
  • val - for the values that do not change, immutable.

String name = "Java";
int age = 20;

val name = "Kotlin"
val age = 4

It is best practice to use val because immutability guarantees safety.

We can specify the variable's type when we declare it. If we initialize the variable, the type can be omitted. But if we do not initialize it, we must declare the type.
val name = "Jon Snow"

val role: String
role = "King in the north"


If the value is truly constant, we can use the const keyword when declaring it. Note that inside a class this requires a companion object.
const val THE_END = "All men must die"
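A short sketch of both forms - a top-level const and one inside a companion object (the class and property names are illustrative):

```kotlin
// Top-level compile-time constant
const val THE_END = "All men must die"

// Class-level constants must live inside a companion object
class Quotes {
    companion object {
        const val MOTTO = "Winter is coming"
    }
}

fun main() {
    println(THE_END)
    println(Quotes.MOTTO)
}
```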

Type inference

Kotlin is a strongly typed language. The compiler can deduce the type from the context, eliminating boilerplate code. It is smart enough to identify the variable type.
val helpUs = "Save Wilpattu"
val pi = 3.14
println(helpUs::class.simpleName)
println(pi::class.simpleName)

It will print String and Double.

Kotlin - Introduction


I'm going to talk about Kotlin as a Java developer. There are many languages emerging, and Kotlin is one of them. Since it is another JVM-based language, it is easier to grasp.

Why another language?

Java is a mature, widely used language that is more than 20 years old. The problem with being old is that it lacks modern techniques. Even though Java adopted functional programming with Java 8, I believe it was somewhat late to the game.
Considering modern language features and the difficulties they faced using Java, JetBrains created Kotlin. Being one of the best IDE providers, they know what they are doing. With IntelliJ IDEA, Kotlin has the best IDE support.

What is Kotlin

Kotlin is a cross-platform, statically typed, general-purpose programming language with type inference. Kotlin is designed to interoperate fully with Java, and the JVM version of its standard library depends on the Java Class Library, but type inference allows its syntax to be more concise.

Java disadvantages

  • Lacks modern programming features.
  • Forced (checked) exception handling.
  • Boilerplate code.
  • Unnecessary getters and setters.

Kotlin advantages

  • Modern features.
  • 100% compatible with Java.
  • Interoperable, leverage existing libraries for the JVM, Android and browser.
  • Better exception handling (especially null pointers).
  • Concise, clean easy to read code.

Kotlin = Java + modern features

Hello World

package com.slmanju.blog;

public class Hello {

    public static void main(String[] args) {
        System.out.println("Hello World");
    }
}

package com.slmanju.blog

fun main(args: Array<String>) {
    println("Hello World")
}

As you can see, Kotlin is a beauty.