

Showing posts from 2020

Kafka: Introduction to core concepts

Apache Kafka was developed at LinkedIn and later donated to Apache. Apache Kafka is a distributed streaming platform that can handle high volumes of data.
Pull or Push?
I initially misunderstood Kafka as a push-based messaging system. However, Kafka has chosen the traditional pull approach: data is pushed to the broker by producers and pulled from the broker by consumers.
Why Kafka?
Kafka is a reliable messaging system that is fast and durable. We can list its benefits as:
Scalable - Kafka's partition model allows data to be distributed across multiple servers, making it highly scalable.
Durable - Kafka's data is written to disk, making it highly durable against server failures.
Multiple producers - Kafka can handle multiple producers publishing to the same topic.
Multiple consumers - Kafka is designed so that multiple consumers can read messages without interfering with each other.
High performance - together, these features make for a high-performance distributed messaging system.
What…

Getting started with Kafka

This is a quick guide to setting up a Kafka environment for local development and learning Kafka. I used a Kafka Udemy course and the official documentation for this. Setting up and configuring the Kafka ecosystem can be a boring task. However, with Docker and Landoop (now Lenses) it is as easy as running a docker command.
Note: You need Docker installed to follow this post.

Get the Landoop Docker image
Once you have Docker set up in your environment you can pull the Landoop image.
$ docker pull landoop/fast-data-dev
Start the Kafka broker
I'm going to run the Docker container in interactive mode.
$ docker container run --rm --name my-kafka-broker -it \
    -p 2181:2181 -p 3030:3030 -p 8081:8081 \
    -p 8082:8082 -p 8083:8083 -p 9092:9092 \
    -e ADV_HOST=127.0.0.1 \
    landoop/fast-data-dev
This will bring up all the necessary tools to work with Kafka. After about one minute you can access Landoop's UI console at http://127.0.0.1:3030/.

If you scroll down, you can see running serv…

Hexagonal Architecture

Invented by Alistair Cockburn, Hexagonal Architecture is one of many ways to design loosely coupled applications. It is also known as the Ports and Adapters pattern.
Since the domain is the king and every other layer should work around it, this pattern solves the layer-dependency problem by inverting the dependencies.

Intent
Allow an application to equally be driven by users, programs, automated test or batch scripts, and to be developed and tested in isolation from its eventual run-time devices and databases. (Alistair Cockburn)
Principle
Hexagonal architecture divides the system into manageable, loosely coupled components centered on the application's core domain. As the domain sits at the center, all other layers, such as web and database, point toward it. This is achieved with ports and adapters (hence the name): each outer layer connects through ports by implementing them as adapters. Thus the core domain is fully independent of outside changes. This approach is an alternative to the traditional layered architecture.
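To illustrate the port-and-adapter relationship, here is a minimal Java sketch (the names OrderRepository, OrderService and InMemoryOrderRepository are my own, not from the post):

```java
import java.util.ArrayList;
import java.util.List;

// Port: an interface defined by the core domain, describing what it needs
interface OrderRepository {
    void save(String orderId);
}

// Core domain: depends only on the port, never on infrastructure
class OrderService {
    private final OrderRepository repository;
    OrderService(OrderRepository repository) { this.repository = repository; }
    String placeOrder(String orderId) {
        repository.save(orderId);
        return "placed:" + orderId;
    }
}

// Adapter: an infrastructure implementation plugged into the port
class InMemoryOrderRepository implements OrderRepository {
    final List<String> saved = new ArrayList<>();
    public void save(String orderId) { saved.add(orderId); }
}

public class HexagonalSketch {
    public static void main(String[] args) {
        // Swapping the adapter (e.g. for a database-backed one) never touches OrderService
        OrderService core = new OrderService(new InMemoryOrderRepository());
        System.out.println(core.placeOrder("42")); // placed:42
    }
}
```

A database or REST adapter would implement the same OrderRepository port, leaving the domain untouched.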
Domain - sit center of the lay…

Domain Driven Design (part 1)

We have all heard of Domain Driven Design and we all (including myself) struggle to use it. Let's try to understand a few things together.
Domain Driven Design was first coined by Eric Evans in his book; it is a concept for structuring projects around the business domain.
Domain is the King.
While we can arrange our structure in various ways, there is a common factor in every project: the business domain. No matter what, the domain is the reason the project exists. It is clear that the domain is the most valuable thing. In his book, Eric Evans describes how we can build our projects centered on the domain.
Fair enough, for small projects this may be over-engineering.
In his book, he mainly focuses on the domain layer of our system.

Domain - the real-world problem we are going to solve in our application. Domain refers to the specific subject area the project is being developed for: the subject area to which the user applies a program is the domain of the software. Domain M…

Message Broker

When we are building systems with multiple components, we need a way for the components to communicate. One failed attempt is direct communication between components: in that solution, components are highly dependent on each other.


A better solution is to use a centralized middleman. All the necessary components register with the middleman, which receives every request and finds the corresponding service to forward it to.

A broker can:
Register and unregister services.
Locate services.
Receive and send messages.
Handle errors.
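These responsibilities can be sketched as a toy in-process broker in Java (the Broker class and the "billing" service are my own illustration; real brokers work across processes and networks):

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Consumer;

// A toy broker: services register under a name, the broker routes messages
class Broker {
    private final Map<String, Consumer<String>> services = new HashMap<>();

    // Register and unregister services
    void register(String name, Consumer<String> handler) { services.put(name, handler); }
    void unregister(String name) { services.remove(name); }

    // Locate the target service and deliver the message, with basic error handling
    void send(String target, String message) {
        Consumer<String> handler = services.get(target);
        if (handler == null) {
            System.out.println("error: no service named " + target);
        } else {
            handler.accept(message);
        }
    }
}

public class BrokerDemo {
    public static void main(String[] args) {
        Broker broker = new Broker();
        broker.register("billing", msg -> System.out.println("billing got: " + msg));
        broker.send("billing", "invoice-1");  // routed to the billing service
        broker.send("shipping", "order-9");   // unknown service: error path
    }
}
```

The sender only knows the broker and a target name, never the receiving component itself, which is exactly the loose coupling described below.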
Products
There are many broker systems available; these are just a few of them: Apache ActiveMQ, Apache Kafka, RabbitMQ, WebSphere ESB, JBoss ESB, Amazon MQ.
Advantages
Loose coupling between components.
Scalable and maintainable as long as the interface remains the same.
Components are reusable.
Disadvantages
Introduces a single point of failure.
Degrades performance due to the additional routing.
https://en.wikipedia.org/wiki/Message_broker

Pipes and Filters Architecture

In Linux, when we want to combine the results of various commands to filter out our desired result, we use a pipe: we feed the output of one program as the input of the next. This is a fine example of the Pipes and Filters pattern.
$ ps aux | grep java
Pipes and Filters is a very helpful architectural pattern for streams of data. It is also helpful when the data goes through a sequence of transformations.
Source code for the example application.


It consists of a number of components called filters that transform data before handing it over to the next filter via connectors called pipes.
Filter - transforms or filters the data it receives via the pipe.
Pipe - the connector that passes data between filters.
Pump - the data producer.
Sink - the end consumer of the target data.
A pipeline consists of a chain of processing elements, arranged so that the output of each element is the input of the next (Wikipedia).
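The pump, filters and sink above can be sketched with plain function composition in Java (the trim/uppercase filters are my own example, not from the post):

```java
import java.util.function.Function;
import java.util.stream.Stream;

public class PipesAndFilters {
    public static void main(String[] args) {
        // Filters: each one transforms the data it receives
        Function<String, String> trim = String::trim;
        Function<String, String> upper = String::toUpperCase;

        // Pipe: composition connects one filter's output to the next one's input
        Function<String, String> pipeline = trim.andThen(upper);

        // Pump: the data producer; Sink: the final consumer of the result
        Stream.of("  kafka ", " broker ")
              .map(pipeline)
              .forEach(System.out::println); // KAFKA then BROKER
    }
}
```

Each filter stays independent and reusable; reordering or inserting filters only changes how the pipeline is composed.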
Filter can be a small class as well as a big component. Input, output can be decided by the projec…

Plugin Architecture

If you have used an IDE like Eclipse you may have noticed that you can extend its functionality by adding plugins. Eclipse is an extensible platform: we can add new tools using pluggable components called Eclipse plug-ins, and we can create our own plugin using the Eclipse plug-in model.
When we develop software we also want our applications to be extensible and modular. We can use the plugin architecture to fulfill this need.

What is a Plug-in?A plug-in (or plugin, add-in, addin, add-on, or addon) is a software component that adds a specific feature to an existing computer program. (Wikipedia)
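A minimal Java sketch of the idea (the Plugin interface and the two toy plugins are my own illustration; a real host might discover implementations with java.util.ServiceLoader instead of a hard-coded list):

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// The core program only knows this contract, not the concrete plugins
interface Plugin {
    String name();
    String run(String input);
}

// Plugins developed separately from the core program
class UpperCasePlugin implements Plugin {
    public String name() { return "upper"; }
    public String run(String input) { return input.toUpperCase(); }
}

class ReversePlugin implements Plugin {
    public String name() { return "reverse"; }
    public String run(String input) { return new StringBuilder(input).reverse().toString(); }
}

public class PluginHost {
    public static void main(String[] args) {
        // At loading time the host registers whatever plugins are available
        Map<String, Plugin> registry = new HashMap<>();
        for (Plugin p : List.of(new UpperCasePlugin(), new ReversePlugin())) {
            registry.put(p.name(), p);
        }
        System.out.println(registry.get("upper").run("eclipse"));   // ECLIPSE
        System.out.println(registry.get("reverse").run("eclipse")); // espilce
    }
}
```

Adding a third feature means writing another Plugin implementation; the host's code never changes.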
In a plugin architecture the core module does not need to know all existing features; instead we plug them in at loading time, taking advantage of separation of concerns. Plugins are a fine example of the open/closed principle: we extend existing functionality without changing the main logic.
Advantages
Extensible - a plugin can be developed outside the main program and extend its behaviour without touching it.
In…

Tale of list.remove()

When I was checking some code I saw this suspicious line,
list.remove(object)
However, I'm on a different team, so this is not really my call. Anyway, I informed the developer, who is a lead and probably senior to me. Since their team is OK with it and it works for their use case (though it is not that safe), there is nothing left for me to do but share the knowledge on my blog.

If you google how to remove a value from a list you will surely hit List.remove(E element). The question is: are you using it correctly?

Working scenario
Let's check this out with a String first.
  @Test
  public void testStringListRemove() {
    List<String> list = new ArrayList<>();
    list.add("AB");
    list.add("CD");
    list.add("EF");
    list.remove("CD");
    Assertions.assertEquals(2, list.size());
  }
The test passes. Does that mean I'm wrong? No. List.remove(Object) removes the first element that is equals() to the argument, so "CD" is found and removed (and since Java interns string literals, it is even the same object from the string pool). This is completely correct.

Failing Scenario
Now let's see this …
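The excerpt is cut off here, but the classic failing scenario with this method involves a List<Integer> (my own sketch, assuming that is where the post is heading): List has both remove(int index) and remove(Object element), and with a primitive int argument the index overload wins.

```java
import java.util.ArrayList;
import java.util.List;

public class RemoveOverload {
    public static void main(String[] args) {
        List<Integer> list = new ArrayList<>();
        list.add(10);
        list.add(20);
        list.add(30);

        // remove(int) wins overload resolution: this removes the element at INDEX 2
        list.remove(2);
        System.out.println(list); // [10, 20]

        // To remove the VALUE 20, box the argument to force remove(Object)
        list.remove(Integer.valueOf(20));
        System.out.println(list); // [10]
    }
}
```

With Strings there is no such ambiguity, which is why the earlier test passed; with boxed numbers the wrong overload silently removes the wrong element.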

Dependency Inversion Principle (DIP)

The D in SOLID stands for the Dependency Inversion Principle, also known as DIP.

This is one of those very powerful principles for building highly decoupled, well-structured software.



What is the Dependency Inversion Principle?
The dependency inversion principle is a specific form of decoupling software modules. When following this principle, the conventional dependency relationships established from high-level, policy-setting modules to low-level, dependency modules are reversed, thus rendering high-level modules independent of the low-level module implementation details. (Wikipedia)
Depend on abstractions, not on concretions.
Robert C. Martin’s definition of the Dependency Inversion Principle consists of two parts:
High-level modules should not depend on low-level modules. Both should depend on abstractions.
Abstractions should not depend on details. Details should depend on abstractions.
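Both parts of the definition can be shown in a short Java sketch (MessageSender, EmailSender and NotificationService are my own illustrative names):

```java
// Abstraction: both the high-level and low-level modules depend on this
interface MessageSender {
    String send(String message);
}

// Low-level module: an implementation detail behind the abstraction
class EmailSender implements MessageSender {
    public String send(String message) { return "email: " + message; }
}

// High-level module: depends only on the abstraction, never on EmailSender
class NotificationService {
    private final MessageSender sender;
    NotificationService(MessageSender sender) { this.sender = sender; }
    String notifyUser(String message) { return sender.send(message); }
}

public class DipDemo {
    public static void main(String[] args) {
        // Any other MessageSender (SMS, push, a test fake) can be swapped in
        // without touching NotificationService
        NotificationService service = new NotificationService(new EmailSender());
        System.out.println(service.notifyUser("hello")); // email: hello
    }
}
```

Without the interface, NotificationService would name EmailSender directly and the dependency would point from policy down to detail; the interface reverses that arrow.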



The idea here is that high-level modules should not depend on low-level modules. Both should only depend on the a…

Layered Architecture

When our project starts growing, we eventually end up with the "where do I put my code" dilemma. It is quite hard to organize the code if we are not careful.

This is when we can use the well-known three-tier architecture. It helps us divide the codebase into three separate layers with clear, distinct responsibilities.


But why?
Before taking any architectural decision we had better ask ourselves: why do we need to do this? It is all about simplification. At the end of the day we need not only a working system, but a highly maintainable one.
Layer isolation gives us the possibility to separate concerns with clear responsibilities. We can make our changes without impacting other layers, so any new business requirement results in a very small change and few, if any, breaking changes.
For example, we should never do any SQL operation in the presentation layer.

We can build, deploy our layers separately.
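The separation can be sketched in a few lines of Java (UserRepository, UserService and UserController are my own illustrative names, with storage faked in memory):

```java
// Data access layer: only this layer would talk to the database
class UserRepository {
    String findName(int id) { return "user-" + id; }
}

// Business logic layer: rules live here, with no UI or SQL concerns
class UserService {
    private final UserRepository repository = new UserRepository();
    String greeting(int id) { return "Hello, " + repository.findName(id); }
}

// Presentation layer: formats output for the user, never touches storage
public class UserController {
    public static void main(String[] args) {
        UserService service = new UserService();
        System.out.println(service.greeting(7)); // Hello, user-7
    }
}
```

Each layer only calls the one directly below it, so swapping the storage implementation or the UI leaves the business rules untouched.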

Introducing our layers
Presentation layer - represents the user interface.
Business logic layer - all …