🛠️ Prerequisites

Before getting started, ensure your system has the following:

Podman and Podman Compose installed (Installation Guide)
Confluent CLI (optional) – useful for managing Confluent services via the command line
Java 17 or 21 – required for running Confluent Platform services locally
At least 6 GB of RAM allocated for containerized services

📦 OS-specific Installation Tips

macOS users are encouraged to install Podman directly from the official site
Windows users can follow this guide to set up Podman

⚙️ Simplest Way to Spin Up Kafka with Podman Compose

The quickest way to launch Kafka and its ecosystem locally is by using Podman Compose with a Compose file.


📝 Step 1: Create the Compose File

Create a file named docker-compose.yaml and paste the following configuration:


# docker-compose.yaml
services:
  broker:
    image: confluentinc/cp-kafka:7.8.0
    hostname: broker
    container_name: broker
    ports:
      - "9092:9092"
      - "9101:9101"
    environment:
      KAFKA_NODE_ID: 1
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: 'CONTROLLER:PLAINTEXT,PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT'
      KAFKA_ADVERTISED_LISTENERS: 'PLAINTEXT://broker:29092,PLAINTEXT_HOST://localhost:9092'
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
      KAFKA_GROUP_INITIAL_REBALANCE_DELAY_MS: 0
      KAFKA_TRANSACTION_STATE_LOG_MIN_ISR: 1
      KAFKA_TRANSACTION_STATE_LOG_REPLICATION_FACTOR: 1
      KAFKA_JMX_PORT: 9101
      KAFKA_JMX_HOSTNAME: localhost
      KAFKA_PROCESS_ROLES: 'broker,controller'
      KAFKA_CONTROLLER_QUORUM_VOTERS: '1@broker:29093'
      KAFKA_LISTENERS: 'PLAINTEXT://broker:29092,CONTROLLER://broker:29093,PLAINTEXT_HOST://0.0.0.0:9092'
      KAFKA_INTER_BROKER_LISTENER_NAME: 'PLAINTEXT'
      KAFKA_CONTROLLER_LISTENER_NAMES: 'CONTROLLER'
      KAFKA_LOG_DIRS: '/tmp/kraft-combined-logs'
      CLUSTER_ID: 'MkU3OEVBNTcwNTJENDM2Qk'

  schema-registry:
    image: confluentinc/cp-schema-registry:7.8.0
    hostname: schema-registry
    container_name: schema-registry
    depends_on:
      - broker
    ports:
      - "8081:8081"
    environment:
      SCHEMA_REGISTRY_HOST_NAME: schema-registry
      SCHEMA_REGISTRY_KAFKASTORE_BOOTSTRAP_SERVERS: 'broker:29092'
      SCHEMA_REGISTRY_LISTENERS: http://0.0.0.0:8081

  connect:
    image: cnfldemos/cp-server-connect-datagen:0.6.4-7.6.0
    hostname: connect
    container_name: connect
    depends_on:
      - broker
      - schema-registry
    ports:
      - "8083:8083"
    environment:
      CONNECT_BOOTSTRAP_SERVERS: 'broker:29092'
      CONNECT_REST_ADVERTISED_HOST_NAME: connect
      CONNECT_GROUP_ID: compose-connect-group
      CONNECT_CONFIG_STORAGE_TOPIC: docker-connect-configs
      CONNECT_CONFIG_STORAGE_REPLICATION_FACTOR: 1
      CONNECT_OFFSET_FLUSH_INTERVAL_MS: 10000
      CONNECT_OFFSET_STORAGE_TOPIC: docker-connect-offsets
      CONNECT_OFFSET_STORAGE_REPLICATION_FACTOR: 1
      CONNECT_STATUS_STORAGE_TOPIC: docker-connect-status
      CONNECT_STATUS_STORAGE_REPLICATION_FACTOR: 1
      CONNECT_KEY_CONVERTER: org.apache.kafka.connect.storage.StringConverter
      CONNECT_VALUE_CONVERTER: io.confluent.connect.avro.AvroConverter
      CONNECT_VALUE_CONVERTER_SCHEMA_REGISTRY_URL: http://schema-registry:8081
      CLASSPATH: /usr/share/java/monitoring-interceptors/monitoring-interceptors-7.8.0.jar
      CONNECT_PRODUCER_INTERCEPTOR_CLASSES: "io.confluent.monitoring.clients.interceptor.MonitoringProducerInterceptor"
      CONNECT_CONSUMER_INTERCEPTOR_CLASSES: "io.confluent.monitoring.clients.interceptor.MonitoringConsumerInterceptor"
      CONNECT_PLUGIN_PATH: "/usr/share/java,/usr/share/confluent-hub-components"

  control-center:
    image: confluentinc/cp-enterprise-control-center:7.8.0
    hostname: control-center
    container_name: control-center
    depends_on:
      - broker
      - schema-registry
      - connect
      - ksqldb-server
    ports:
      - "9021:9021"
    environment:
      CONTROL_CENTER_BOOTSTRAP_SERVERS: 'broker:29092'
      CONTROL_CENTER_CONNECT_CONNECT-DEFAULT_CLUSTER: 'connect:8083'
      CONTROL_CENTER_CONNECT_HEALTHCHECK_ENDPOINT: '/connectors'
      CONTROL_CENTER_KSQL_KSQLDB1_URL: "http://ksqldb-server:8088"
      CONTROL_CENTER_KSQL_KSQLDB1_ADVERTISED_URL: "http://localhost:8088"
      CONTROL_CENTER_SCHEMA_REGISTRY_URL: "http://schema-registry:8081"
      CONTROL_CENTER_REPLICATION_FACTOR: 1
      CONTROL_CENTER_INTERNAL_TOPICS_PARTITIONS: 1
      CONTROL_CENTER_MONITORING_INTERCEPTOR_TOPIC_PARTITIONS: 1
      CONFLUENT_METRICS_TOPIC_REPLICATION: 1
      PORT: 9021

  ksqldb-server:
    image: confluentinc/cp-ksqldb-server:7.8.0
    hostname: ksqldb-server
    container_name: ksqldb-server
    depends_on:
      - broker
      - connect
    ports:
      - "8088:8088"
    environment:
      KSQL_CONFIG_DIR: "/etc/ksql"
      KSQL_BOOTSTRAP_SERVERS: "broker:29092"
      KSQL_HOST_NAME: ksqldb-server
      KSQL_LISTENERS: "http://0.0.0.0:8088"
      KSQL_CACHE_MAX_BYTES_BUFFERING: 0
      KSQL_KSQL_SCHEMA_REGISTRY_URL: "http://schema-registry:8081"
      KSQL_PRODUCER_INTERCEPTOR_CLASSES: "io.confluent.monitoring.clients.interceptor.MonitoringProducerInterceptor"
      KSQL_CONSUMER_INTERCEPTOR_CLASSES: "io.confluent.monitoring.clients.interceptor.MonitoringConsumerInterceptor"
      KSQL_KSQL_CONNECT_URL: "http://connect:8083"
      KSQL_KSQL_LOGGING_PROCESSING_TOPIC_REPLICATION_FACTOR: 1
      KSQL_KSQL_LOGGING_PROCESSING_TOPIC_AUTO_CREATE: 'true'
      KSQL_KSQL_LOGGING_PROCESSING_STREAM_AUTO_CREATE: 'true'

  ksqldb-cli:
    image: confluentinc/cp-ksqldb-cli:7.8.0
    container_name: ksqldb-cli
    depends_on:
      - broker
      - connect
      - ksqldb-server
    entrypoint: /bin/sh
    tty: true

  ksql-datagen:
    image: confluentinc/ksqldb-examples:7.8.0
    hostname: ksql-datagen
    container_name: ksql-datagen
    depends_on:
      - ksqldb-server
      - broker
      - schema-registry
      - connect
    command: "bash -c 'echo Waiting for Kafka to be ready... && \
                       cub kafka-ready -b broker:29092 1 40 && \
                       echo Waiting for Confluent Schema Registry to be ready... && \
                       cub sr-ready schema-registry 8081 40 && \
                       echo Waiting a few seconds for topic creation to finish... && \
                       sleep 11 && \
                       tail -f /dev/null'"
    environment:
      KSQL_CONFIG_DIR: "/etc/ksql"
      STREAMS_BOOTSTRAP_SERVERS: broker:29092
      STREAMS_SCHEMA_REGISTRY_HOST: schema-registry
      STREAMS_SCHEMA_REGISTRY_PORT: 8081

  rest-proxy:
    image: confluentinc/cp-kafka-rest:7.8.0
    depends_on:
      - broker
      - schema-registry
    ports:
      - 8082:8082
    hostname: rest-proxy
    container_name: rest-proxy
    environment:
      KAFKA_REST_HOST_NAME: rest-proxy
      KAFKA_REST_BOOTSTRAP_SERVERS: 'broker:29092'
      KAFKA_REST_LISTENERS: "http://0.0.0.0:8082"
      KAFKA_REST_SCHEMA_REGISTRY_URL: 'http://schema-registry:8081'
 

Explanation of the docker-compose.yaml (Podman Compose Compatible)

This file defines a complete Confluent Platform local development environment, service by service.
1. broker

Image: confluentinc/cp-kafka:7.8.0
Role: Kafka broker using KRaft (no ZooKeeper).

Key Env Config:

KAFKA_ADVERTISED_LISTENERS: Makes broker accessible both inside and outside the container.
CLUSTER_ID: Must be a valid base64 UUID, used for KRaft mode.
KAFKA_PROCESS_ROLES: Set to both broker and controller.

Ports:

9092: For clients
9101: JMX metrics (optional)

2. schema-registry

Image: confluentinc/cp-schema-registry:7.8.0
Role: Manages Avro/Protobuf/JSON schemas.
Depends on: broker
Port: 8081
Connects to Kafka via SCHEMA_REGISTRY_KAFKASTORE_BOOTSTRAP_SERVERS.

3. connect


Image: cnfldemos/cp-server-connect-datagen:0.6.4-7.6.0
Role: Kafka Connect service pre-packaged with Datagen connector for demo data.
Depends on: broker, schema-registry
Port: 8083
Converts values using AvroConverter linked to the schema registry.

4. control-center


Image: confluentinc/cp-enterprise-control-center:7.8.0
Role: Web-based UI for managing Kafka clusters.
Depends on: broker, schema-registry, connect, ksqldb-server
Port: 9021
Monitors Kafka, Connect, KSQLDB, Schema Registry

5. ksqldb-server


Image: confluentinc/cp-ksqldb-server:7.8.0
Role: Stream processing engine using SQL-like syntax for Kafka topics.
Depends on: broker, connect
Port: 8088

6. ksqldb-cli


Image: confluentinc/cp-ksqldb-cli:7.8.0
Role: Command-line interface for interacting with KSQLDB.
This container is interactive (tty: true) and doesn’t expose ports.

7. ksql-datagen


Image: confluentinc/ksqldb-examples:7.8.0
Role: Prepares demo data for KSQLDB use.
Startup delay built in via cub scripts to ensure services are ready.
Runs indefinitely using tail -f /dev/null.

8. rest-proxy


Image: confluentinc/cp-kafka-rest:7.8.0
Role: Exposes a REST API for Kafka (useful for clients that can’t use Kafka protocol).
Port: 8082
Connects to both Kafka and Schema Registry.
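
Because the REST Proxy speaks plain HTTP, any HTTP client can produce and consume. The sketch below is a hypothetical smoke test against a running stack, using the v2 REST API; the topic name test-topic is an assumption, not something the Compose file creates:

```shell
# Produce one JSON record to test-topic through the REST Proxy
curl -s -X POST http://localhost:8082/topics/test-topic \
  -H "Content-Type: application/vnd.kafka.json.v2+json" \
  -d '{"records":[{"value":{"hello":"world"}}]}'

# List the topics visible to the proxy
curl -s http://localhost:8082/topics
```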

🔁 Summary

This compose file sets up a fully local Confluent stack that includes:

  • Kafka broker (KRaft mode)
  • Schema Registry
  • Kafka Connect with Datagen
  • REST Proxy
  • KSQLDB (server and CLI)
  • Control Center UI

✅ Next Steps

Once your docker-compose.yaml file is ready, start the stack:

podman-compose up -d

That's it! You now have a full Confluent stack running locally using Podman.
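
To confirm that everything started, a quick sanity check might look like the following sketch (the container names come from the container_name fields in the Compose file above; the curl checks assume the services have finished booting):

```shell
# List the containers started by podman-compose and their status
podman ps --format "{{.Names}}: {{.Status}}"

# The REST Proxy and Schema Registry answer over plain HTTP once ready
curl -s http://localhost:8082/topics
curl -s http://localhost:8081/subjects
```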


🧪 Optional: Validate with Confluent CLI

If you've installed the Confluent CLI, you can verify that everything is operational:

confluent local services list

The Criteria API is a predefined API used to define queries over entities. It is an alternative to defining queries in JPQL. Criteria queries are type-safe and portable, and they are easy to modify by changing code rather than a query string. Like JPQL, the Criteria API works against the abstract persistence schema (including embedded objects), and the Metamodel API is used alongside it to model persistent entities for criteria queries.
The major advantage of the Criteria API is that errors can be detected earlier, at compile time. String-based JPQL queries and JPA criteria-based queries are equivalent in performance and efficiency.

History of the Criteria API

The Criteria API has been part of JPA since version 2.0, and each of its additions is recorded in the JPA specifications.
  • In JPA 2.0, the criteria query API was introduced, standardizing how queries are constructed.
  • In JPA 2.1, criteria update and delete (bulk update and delete) were added.

Criteria Query Structure

The Criteria API and JPQL are closely related and support similar operators in their queries. Criteria queries are built with the types in the javax.persistence.criteria package.
The following simple criteria query returns all instances of an entity class from the data source.

EntityManager em = ...;
CriteriaBuilder cb = em.getCriteriaBuilder();

CriteriaQuery<Entity> cq = cb.createQuery(Entity.class);
Root<Entity> from = cq.from(Entity.class);

cq.select(from);
TypedQuery<Entity> q = em.createQuery(cq);
List<Entity> allItems = q.getResultList();
The query demonstrates the basic steps of creating a criteria query:
  • An EntityManager instance is used to create a CriteriaBuilder object.
  • The CriteriaBuilder creates a CriteriaQuery instance; this query object's attributes are then populated with the details of the query.
  • CriteriaQuery.from is called to set the query root.
  • CriteriaQuery.select is called to set the result list type.
  • A TypedQuery<T> instance is used to prepare the query for execution and to specify the type of the query result.
  • getResultList is called on the TypedQuery<T> object to execute the query; it returns a collection of entities, and the result is stored in a List.
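
Building on the steps above, a filtered query only needs a Predicate from the CriteriaBuilder. The sketch below is illustrative, not from the original example: the Employee entity and its String "name" attribute are assumptions, and the code requires a JPA provider on the classpath to actually run.

```java
// Illustrative sketch: select Employee rows where name equals the given value.
// Assumes a mapped Employee entity with a String "name" attribute.
import javax.persistence.EntityManager;
import javax.persistence.TypedQuery;
import javax.persistence.criteria.CriteriaBuilder;
import javax.persistence.criteria.CriteriaQuery;
import javax.persistence.criteria.Root;
import java.util.List;

public class CriteriaWhereExample {
    public static List<Employee> findByName(EntityManager em, String name) {
        CriteriaBuilder cb = em.getCriteriaBuilder();
        CriteriaQuery<Employee> cq = cb.createQuery(Employee.class);
        Root<Employee> root = cq.from(Employee.class);

        // WHERE e.name = :name, expressed as a type-safe Predicate
        cq.select(root).where(cb.equal(root.get("name"), name));

        TypedQuery<Employee> query = em.createQuery(cq);
        return query.getResultList();
    }
}
```

Misspelling the attribute in root.get("name") still fails only at runtime; the fully compile-time-safe variant uses the generated static metamodel (root.get(Employee_.name)) instead of the string.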

The video below builds a login system from the very basics using MERN technologies. For security purposes, JWT is used to validate a user's login.



S.O.L.I.D is an acronym for the first five object-oriented design (OOD) principles compiled by Robert C. Martin, popularly known as Uncle Bob.
These principles, when combined, make it easy for a programmer to develop software that is easy to maintain and extend. They also help developers avoid code smells and refactor code easily, and they form part of agile or adaptive software development.
  • S - Single-responsibility principle
  • O - Open-closed principle
  • L - Liskov substitution principle
  • I - Interface segregation principle
  • D - Dependency inversion principle
Single-responsibility principle: A class should have one and only one reason to change, meaning that a class should have only one job.

Open-closed principle: Objects or entities should be open for extension, but closed for modification.

Liskov substitution principle: Let q(x) be a property provable about objects x of type T. Then q(y) should be provable for objects y of type S, where S is a subtype of T.

All this is stating is that every subclass or derived class should be substitutable for its base or parent class.

Interface segregation principle: A client should never be forced to implement an interface that it doesn't use, and clients shouldn't be forced to depend on methods they do not use.

Dependency inversion principle: Entities must depend on abstractions, not on concretions. The high-level module must not depend on the low-level module; both should depend on abstractions.
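
As an illustrative sketch (the class names here are hypothetical, not from the text above), dependency inversion in JavaScript can look like this: the high-level OrderService receives any object exposing a log method, so concrete loggers can be swapped without modifying it.

```javascript
// The high-level OrderService depends on an abstraction (anything with
// a log method), not on a concrete logger implementation.
class ConsoleLogger {
  log(message) {
    console.log(message);
  }
}

class MemoryLogger {
  constructor() {
    this.messages = [];
  }
  log(message) {
    this.messages.push(message);
  }
}

class OrderService {
  constructor(logger) {
    this.logger = logger; // injected abstraction
  }
  placeOrder(id) {
    this.logger.log(`order ${id} placed`);
    return id;
  }
}

// Swapping implementations requires no change to OrderService:
const service = new OrderService(new MemoryLogger());
service.placeOrder(42);
```

Because the dependency points at an abstraction, tests can inject MemoryLogger while production code uses ConsoleLogger.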





Honestly, S.O.L.I.D might seem like a handful at first, but with continuous usage and adherence to its guidelines, it becomes part of you and your code, which can then be easily extended, modified, tested, and refactored without any problems.



Question 01

Create a function as a variable (function expression) that prints ‘Hello World’ to console and another function which accepts a variable. The argument passed to the second function should be executed as a function inside the body. Call the second function passing the first function as the argument. Check the output.
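
One possible answer, sketched below: sayHello is a function expression, and runCallback executes whatever function it is given (both names are my own, not prescribed by the question).

```javascript
// Function expression assigned to a variable:
const sayHello = function () {
  console.log('Hello World');
};

// A function that executes its argument as a function inside the body:
const runCallback = function (fn) {
  fn();
};

runCallback(sayHello); // prints "Hello World"
```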


Question 02

Declare a global variable named "vehicleName" in the window object.

Declare a method named printVehicleName to print out the vehicle name.

Declare an object named Vehicle (using object literal notation) which has a vehicleName property, and declare a function named getVehicleName, assigning it the printVehicleName function. Execute the printVehicleName and getVehicleName functions to see the results.

Correct getVehicleName to print out the global variable vehicleName using the this keyword.
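
A hedged sketch of one possible answer: in a browser the global object is window; in Node.js, globalThis plays the same role, so it is used here. The vehicle name strings are placeholders of my own.

```javascript
// Global variable on the global object (window in a browser):
globalThis.vehicleName = 'Global Car';

// Attached to the global object so the `this` binding is explicit:
globalThis.printVehicleName = function () {
  console.log(this.vehicleName);
};

const Vehicle = {
  vehicleName: 'My Car',
  // same function object, but `this` is Vehicle when called this way
  getVehicleName: globalThis.printVehicleName,
};

globalThis.printVehicleName(); // `this` is the global object -> 'Global Car'
Vehicle.getVehicleName();      // `this` is Vehicle           -> 'My Car'
```

The same function prints different names because `this` is decided by how the function is called, not where it was declared.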

Question 03

Create a separate function using a JavaScript closure which accepts the tax percentage and returns a function which accepts the amount and returns the amount after adding the tax percentage. Try adding the tax percentage to the ‘this’ object and check if it works.
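
A sketch of one possible closure-based answer (function names are my own):

```javascript
// The outer function closes over taxPercentage, so the returned
// function remembers it between calls.
function withTax(taxPercentage) {
  return function (amount) {
    return amount + (amount * taxPercentage) / 100;
  };
}

const addTenPercent = withTax(10);
console.log(addTenPercent(200)); // 220

// Storing taxPercentage on `this` instead would not behave the same:
// for a plain function call, `this` is not a private per-closure object
// (it is undefined in strict mode, or the global object otherwise).
```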

Question 04

Write a function to call the GitHub API (https://api.github.com/users), get the users, and return them to the caller.
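
One possible answer, assuming a fetch implementation is available (built into browsers and Node.js 18+). The fetch function is injectable so callers and tests can substitute a stub instead of hitting the network; the function name is my own.

```javascript
// Fetches the GitHub users list and returns it to the caller.
async function getGitHubUsers(fetchImpl = fetch) {
  const response = await fetchImpl('https://api.github.com/users');
  if (!response.ok) {
    throw new Error(`GitHub API error: ${response.status}`);
  }
  return response.json(); // resolves to an array of user objects
}
```

A caller would then use it as a promise, e.g. getGitHubUsers().then(users => console.log(users.length)).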



JavaScript is one of the 3 languages all web developers must learn:

   1. HTML to define the content of web pages
   2. CSS to specify the layout of web pages
   3. JavaScript to program the behavior of web pages

Web pages are not the only place where JavaScript is used. Many desktop and server programs use JavaScript. Node.js is the best known. Some databases, like MongoDB and CouchDB, also use JavaScript as their programming language.