
Spring Boot Microservices Interview Questions

Test your microservices knowledge with 20 interview questions covering service discovery, API gateway, circuit breaker, distributed tracing, event-driven architecture, and Spring Cloud.

20 Questions · 30 min · Mixed Difficulty

Topics Covered

Architecture · Service Discovery · API Gateway · Circuit Breaker · Inter-Service Communication · Distributed Tracing · Saga Pattern · CQRS · Event-Driven Architecture · Config Server · 12-Factor App · Containerization

Difficulty Breakdown

  • Junior: 5 questions
  • Mid-Level: 8 questions
  • Senior: 7 questions

What to Expect

  • Multiple choice questions with 4 options each
  • Instant score and topic-by-topic breakdown
  • Detailed explanations for every question
  • Personalized course recommendations based on your weak areas

Spring Boot Microservices Interview Questions and Answers

Below are all 20 questions covered in this quiz, grouped by topic. Each question includes the correct answer and a detailed explanation to help you prepare for your next interview.

Architecture (2 questions)

Q: What is the main difference between a monolithic architecture and a microservices architecture?

A: A monolith deploys as a single unit, whereas microservices are independently deployable services, each owning a specific business capability.

A monolithic application is built and deployed as a single unit where all modules share the same process and database. Microservices decompose the application into small, autonomous services that are independently deployable, scalable, and each responsible for a specific business domain. Microservices communicate over the network via APIs or messaging.

Q: What is the Strangler Fig pattern, and how do you use it to migrate from a monolith to microservices?

A: It incrementally replaces parts of a monolith by routing specific functionality to new microservices while the monolith continues to handle the rest, until the monolith is fully replaced.

The Strangler Fig pattern (named after the strangler fig tree that grows around a host tree) is a migration strategy for incrementally decomposing a monolith into microservices. You place a routing layer (often an API Gateway) in front of the monolith, then gradually extract functionality into new microservices and reroute traffic to them. The monolith shrinks over time until it can be decommissioned entirely. This avoids the risk and complexity of a big-bang rewrite and allows teams to deliver value incrementally.

Service Discovery (2 questions)

Q: What is service discovery, and why is it needed in a microservices architecture?

A: It allows services to find each other dynamically at runtime instead of relying on hard-coded network locations.

In a microservices environment, service instances are created and destroyed dynamically (auto-scaling, rolling deployments). Service discovery (e.g., Netflix Eureka, Consul, Kubernetes DNS) maintains a registry of available service instances so that consumers can locate them at runtime without hard-coding URLs.

Q: What role does Netflix Eureka play in a Spring Cloud microservices stack?

A: It acts as a service registry where microservices register themselves and discover other services.

Netflix Eureka is a service registry. Each microservice registers itself with the Eureka server on startup and sends periodic heartbeats. Other services query Eureka to obtain the list of available instances for a given service name. Spring Cloud Netflix provides the @EnableEurekaServer and @EnableEurekaClient annotations to integrate Eureka easily.
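On the client side, registration is mostly configuration. A sketch of what that might look like (the service name and server URL below are placeholders; property names come from Spring Cloud Netflix):

```yaml
# application.yml for a Eureka client (a service instance)
spring:
  application:
    name: order-service          # the name other services use to look this service up
eureka:
  client:
    service-url:
      defaultZone: http://localhost:8761/eureka/   # address of the Eureka server (placeholder)
  instance:
    prefer-ip-address: true      # register with the IP rather than the hostname
```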

API Gateway (2 questions)

Q: What is an API Gateway, and what are its responsibilities in a microservices system?

A: It is a single entry point that routes client requests to the appropriate microservices and handles cross-cutting concerns like authentication, rate limiting, and load balancing.

An API Gateway (e.g., Spring Cloud Gateway, Kong, AWS API Gateway) acts as a reverse proxy sitting between clients and microservices. It centralizes cross-cutting concerns such as authentication, SSL termination, rate limiting, request routing, load balancing, response caching, and request/response transformation. This simplifies client logic and keeps services focused on business logic.
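A sketch of such routing in Spring Cloud Gateway's YAML DSL (the route id, service name, and path are invented for illustration):

```yaml
spring:
  cloud:
    gateway:
      routes:
        - id: order-route
          uri: lb://order-service        # "lb://" resolves the instance via service discovery
          predicates:
            - Path=/api/orders/**        # only requests matching this path take the route
          filters:
            - StripPrefix=1              # remove "/api" before forwarding downstream
```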

Q: How does Spring Cloud Gateway differ from the older Zuul 1.x proxy?

A: Spring Cloud Gateway is built on Project Reactor and Netty, providing a non-blocking, reactive model, whereas Zuul 1.x uses a blocking servlet-based approach.

Spring Cloud Gateway is the recommended replacement for Zuul 1.x in the Spring ecosystem. It is built on Spring WebFlux, Project Reactor, and Netty, providing a fully non-blocking, reactive architecture. This makes it better suited for high-throughput, low-latency scenarios. It supports route predicates, filters, and integrates natively with Spring Boot auto-configuration.

Circuit Breaker (2 questions)

Q: What is the Circuit Breaker pattern, and how does Resilience4j implement it?

A: A resilience pattern that monitors failures and temporarily stops calling a failing service; Resilience4j tracks failure rates and transitions between closed, open, and half-open states.

The Circuit Breaker pattern prevents cascading failures. Resilience4j implements it with three states: Closed (requests flow normally, failures are counted), Open (requests fail immediately without calling the downstream service), and Half-Open (a limited number of test requests are allowed to check whether the service has recovered). You configure failure rate thresholds, wait durations, and the number of permitted calls in the half-open state via the @CircuitBreaker annotation or the programmatic API.
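The state transitions can be illustrated with a toy state machine. This is a deliberately simplified sketch of the idea, not Resilience4j's implementation, which adds sliding windows, configurable wait durations, and metrics:

```java
// Toy sketch of the three circuit-breaker states: CLOSED -> OPEN -> HALF_OPEN.
public class ToyCircuitBreaker {
    public enum State { CLOSED, OPEN, HALF_OPEN }

    private State state = State.CLOSED;
    private int consecutiveFailures = 0;
    private final int failureThreshold;

    public ToyCircuitBreaker(int failureThreshold) {
        this.failureThreshold = failureThreshold;
    }

    public State state() { return state; }

    // Record the outcome of a downstream call.
    public void record(boolean success) {
        if (success) {
            consecutiveFailures = 0;
            state = State.CLOSED;            // recovery closes the circuit again
        } else {
            consecutiveFailures++;
            if (consecutiveFailures >= failureThreshold) {
                state = State.OPEN;          // too many failures: stop calling downstream
            }
        }
    }

    // After the configured wait duration elapses, allow probe requests through.
    public void waitDurationElapsed() {
        if (state == State.OPEN) state = State.HALF_OPEN;
    }

    // In the OPEN state, calls fail fast without touching the downstream service.
    public boolean allowsCall() {
        return state != State.OPEN;
    }
}
```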

Q: What is the Bulkhead pattern, and how does Resilience4j implement it?

A: A resilience pattern that isolates resources so that a failure in one component does not exhaust resources needed by others; Resilience4j provides semaphore-based and thread-pool-based bulkhead implementations.

The Bulkhead pattern is inspired by ship compartments. It isolates different parts of a system so that a failure in one does not consume all available resources (threads, connections) and bring down the entire service. Resilience4j provides two bulkhead types: SemaphoreBulkhead limits the number of concurrent calls to a particular operation, and ThreadPoolBulkhead executes calls in a dedicated thread pool. This prevents a slow downstream service from monopolizing all threads in the calling service.
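The semaphore-based variant can be sketched in plain Java. This is a simplified illustration of the concept, not Resilience4j's actual implementation:

```java
import java.util.concurrent.Semaphore;
import java.util.function.Supplier;

// Sketch of a semaphore-based bulkhead: at most maxConcurrent calls may run at
// once; excess callers are rejected immediately instead of queueing up threads.
public class ToyBulkhead {
    private final Semaphore permits;

    public ToyBulkhead(int maxConcurrent) {
        this.permits = new Semaphore(maxConcurrent);
    }

    // Runs the task if a permit is free; otherwise fails fast.
    public <T> T execute(Supplier<T> task) {
        if (!permits.tryAcquire()) {
            throw new IllegalStateException("Bulkhead full: call rejected");
        }
        try {
            return task.get();
        } finally {
            permits.release();   // always hand the permit back, even on failure
        }
    }

    public int availablePermits() { return permits.availablePermits(); }
}
```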

Inter-Service Communication (2 questions)

Q: What is the difference between synchronous and asynchronous inter-service communication?

A: Synchronous communication (REST/gRPC) blocks the caller until a response is received; asynchronous communication (messaging via Kafka/RabbitMQ) decouples the sender and receiver in time.

Synchronous communication (e.g., REST over HTTP, gRPC) means the calling service waits for the response before proceeding, creating temporal coupling. Asynchronous communication (e.g., Apache Kafka, RabbitMQ) sends messages to a broker; the producer does not wait for the consumer to process the message. Async communication improves resilience and scalability but adds complexity around eventual consistency and message ordering.

Q: How does Spring Boot support gRPC for inter-service communication, and when would you choose gRPC over REST?

A: gRPC uses HTTP/2 and Protocol Buffers for fast, type-safe, binary communication; choose it over REST when you need high-performance, low-latency inter-service calls with strong contracts and bidirectional streaming.

gRPC is a high-performance RPC framework using HTTP/2 (multiplexed streams, header compression) and Protocol Buffers (efficient binary serialization with strong typing). Spring Boot integrates with gRPC via libraries like grpc-spring-boot-starter. Choose gRPC over REST for internal service-to-service communication when you need low latency, bidirectional streaming, or strict API contracts. REST remains a better choice for public-facing APIs due to broader client support and human-readable JSON payloads.

Distributed Tracing (1 question)

Q: What is distributed tracing, and how do Micrometer Tracing and Zipkin help in a microservices system?

A: It tracks the flow of a request across multiple services using trace and span IDs; Micrometer Tracing instruments the code and Zipkin collects, stores, and visualizes the traces.

Distributed tracing assigns a unique trace ID to each incoming request and propagates it across all services involved. Each service creates spans representing its local work. Micrometer Tracing (formerly Spring Cloud Sleuth) auto-instruments Spring Boot applications by injecting trace and span IDs into logs and HTTP headers. Zipkin collects these spans and provides a UI to visualize the full request path, latencies, and bottlenecks across services.
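In a Spring Boot 3 application, enabling this is largely configuration. A sketch, assuming the Micrometer Tracing and Zipkin reporter starters are on the classpath (the endpoint is a placeholder):

```yaml
management:
  tracing:
    sampling:
      probability: 1.0       # trace every request; lower this in production
  zipkin:
    tracing:
      endpoint: http://localhost:9411/api/v2/spans   # Zipkin collector (placeholder)
logging:
  pattern:
    # surface trace and span IDs in every log line for correlation
    level: "%5p [${spring.application.name:},%X{traceId:-},%X{spanId:-}]"
```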

Saga Pattern (2 questions)

Q: What is the Saga pattern, and when would you use it?

A: A pattern for managing distributed transactions across multiple microservices by executing a sequence of local transactions with compensating actions for rollback.

In microservices, traditional two-phase commit (2PC) across services is impractical. The Saga pattern breaks a distributed transaction into a sequence of local transactions, each within a single service. If one step fails, compensating transactions are executed to undo the work of previous steps. Sagas can be implemented via choreography (events) or orchestration (a central coordinator). Use sagas when a business process spans multiple services that each own their data.
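The core mechanic — run local steps in order and, on failure, apply the compensations of completed steps in reverse — can be sketched in plain Java (step names and actions are invented for illustration):

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.List;

// Sketch of an orchestration-style saga: execute each local step, and if one
// fails, run the compensations of the already-completed steps in reverse order.
public class ToySaga {
    public record Step(String name, Runnable action, Runnable compensation) {}

    // Returns true if the whole saga committed, false if it was compensated.
    public static boolean run(List<Step> steps) {
        Deque<Step> completed = new ArrayDeque<>();
        for (Step step : steps) {
            try {
                step.action().run();
                completed.push(step);                      // remember for possible rollback
            } catch (RuntimeException e) {
                while (!completed.isEmpty()) {
                    completed.pop().compensation().run();  // undo in reverse order
                }
                return false;
            }
        }
        return true;
    }
}
```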

Q: What is the difference between choreography-based and orchestration-based sagas?

A: In choreography, each service publishes events and reacts to others autonomously; in orchestration, a central saga orchestrator directs each step explicitly.

Choreography-based sagas have no central controller. Each service listens for events from other services and decides what to do next, publishing its own events. This is simpler but can become hard to follow as the number of steps grows. Orchestration-based sagas use a central orchestrator service that explicitly tells each participant what to do and handles compensating logic. Orchestration is easier to understand and debug for complex workflows but introduces a single point of coordination.

CQRS (1 question)

Q: What is CQRS (Command Query Responsibility Segregation), and why is it useful in microservices?

A: CQRS separates the write model (commands) from the read model (queries), allowing each to be optimized, scaled, and evolved independently.

CQRS splits your application into two sides: the command side handles create, update, and delete operations and may use a normalized relational model, while the query side handles reads and may use denormalized views or a different database optimized for read performance. This separation allows independent scaling (read replicas for queries, write-optimized stores for commands), different consistency models, and cleaner domain logic. In microservices, CQRS is often combined with event sourcing to synchronize the read and write models via events.
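A minimal in-memory sketch of the split (illustrative only; real systems typically put the two sides in separate services backed by different data stores):

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Minimal CQRS sketch: the command side records writes, while the query side
// answers reads from a separately maintained, denormalized projection.
public class ToyCqrs {
    private final List<String> writeLog = new ArrayList<>();             // command-side store
    private final Map<String, Integer> ordersPerCustomer = new HashMap<>(); // read model

    // Command: change state, then update the read projection.
    public void placeOrder(String customerId) {
        writeLog.add("order:" + customerId);
        ordersPerCustomer.merge(customerId, 1, Integer::sum);
    }

    // Query: answered entirely from the denormalized read model.
    public int orderCount(String customerId) {
        return ordersPerCustomer.getOrDefault(customerId, 0);
    }
}
```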

Event-Driven Architecture (3 questions)

Q: How does Apache Kafka enable event-driven architecture in microservices?

A: Kafka acts as a distributed, durable, high-throughput event streaming platform where services publish and subscribe to topic-based event streams, enabling loose coupling and eventual consistency.

Apache Kafka is a distributed event streaming platform. Producers publish events (messages) to topics, and consumers subscribe to topics to process events. Kafka stores events durably on disk with configurable retention, supports partitioning for parallel consumption, and can provide exactly-once processing semantics via idempotent producers and transactions. In microservices, Kafka enables event-driven communication: services react to events instead of being called directly, reducing temporal coupling and improving resilience and scalability.
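A sketch of what this wiring might look like with Spring for Apache Kafka (the topic name, group id, and payload format are invented; the snippet assumes the spring-kafka dependency and is not standalone-runnable):

```java
import org.springframework.kafka.annotation.KafkaListener;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.stereotype.Service;

@Service
public class OrderEvents {
    private final KafkaTemplate<String, String> kafka;

    public OrderEvents(KafkaTemplate<String, String> kafka) {
        this.kafka = kafka;
    }

    // Producer side: publish an event instead of calling a downstream service.
    public void orderPlaced(String orderId) {
        kafka.send("orders.placed", orderId, "{\"orderId\":\"" + orderId + "\"}");
    }

    // Consumer side (would live in another service): reacts whenever an event arrives.
    @KafkaListener(topics = "orders.placed", groupId = "shipping-service")
    public void onOrderPlaced(String payload) {
        // start shipment preparation based on the event payload
    }
}
```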

Q: What is the difference between RabbitMQ and Apache Kafka for inter-service messaging?

A: RabbitMQ is a traditional message broker optimized for routing and per-message acknowledgment; Kafka is a distributed event log optimized for high-throughput streaming and event replay.

RabbitMQ is a message broker following the AMQP protocol. It excels at complex routing (exchanges, bindings, queues), per-message acknowledgment, and transient message delivery. Kafka is a distributed log: messages are appended to partitioned, replicated logs and retained for a configurable period. Kafka excels at high-throughput event streaming, event replay, and serving as a durable event store. Choose RabbitMQ for task queues and complex routing; choose Kafka for event sourcing, stream processing, and high-volume data pipelines.

Q: What is event sourcing, and how does it relate to microservices and CQRS?

A: Event sourcing stores every state change as an immutable event; the current state is derived by replaying events. It complements CQRS by providing the event stream that synchronizes the write and read models.

Event sourcing persists the state of an entity as a sequence of immutable domain events rather than storing only the current state. To get the current state, you replay the events from the beginning. This provides a complete audit trail, enables temporal queries, and allows rebuilding state from scratch. In microservices, event sourcing pairs naturally with CQRS: the command side appends events to the event store, and the query side consumes those events to build optimized read projections. Frameworks like Axon Framework provide event sourcing support for Spring Boot.
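The replay idea fits in a few lines of plain Java. This is a toy aggregate; real event stores add snapshots, versioning, and persistence:

```java
import java.util.List;

// Sketch of event sourcing: state is never stored directly, only derived
// by replaying an immutable event stream from the beginning.
public class ToyAccount {
    public sealed interface Event permits Deposited, Withdrawn {}
    public record Deposited(long amount) implements Event {}
    public record Withdrawn(long amount) implements Event {}

    // Rebuild the current balance by folding over the full event history.
    public static long replay(List<Event> events) {
        long balance = 0;
        for (Event e : events) {
            if (e instanceof Deposited d) balance += d.amount();
            else if (e instanceof Withdrawn w) balance -= w.amount();
        }
        return balance;
    }
}
```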

Config Server (1 question)

Q: What is Spring Cloud Config Server, and what problem does it solve?

A: It provides centralized, externalized configuration management for all microservices, backed by a Git repository or vault, with support for environment-specific overrides and runtime refresh.

Spring Cloud Config Server centralizes configuration for all microservices in a single place, typically a Git repository. Each service fetches its configuration on startup via the Config Server. This solves the problem of managing application.yml files across dozens of services and environments. It supports profiles (dev, staging, prod), encryption of sensitive values, and dynamic refresh via Spring Cloud Bus or the /actuator/refresh endpoint, without redeploying the service.
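A sketch of both sides of the setup (the repository URL, service name, and port are placeholders):

```yaml
# Config Server side: point it at the Git repo that holds every service's config
spring:
  cloud:
    config:
      server:
        git:
          uri: https://github.com/example/config-repo   # placeholder repository
---
# Client side (e.g. order-service): import config from the server at startup
spring:
  application:
    name: order-service
  config:
    import: configserver:http://localhost:8888
```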

12-Factor App (1 question)

Q: What are the 12-Factor App principles, and why are they relevant to microservices?

A: They are a methodology for building cloud-native, scalable SaaS applications covering codebase, dependencies, config, backing services, build/release/run, processes, port binding, concurrency, disposability, dev/prod parity, logs, and admin processes.

The 12-Factor App methodology (by Heroku) defines best practices for building cloud-native applications. Key principles include: store config in environment variables (not code), treat backing services as attached resources, export services via port binding, keep dev/prod environments as similar as possible, and treat logs as event streams. These principles align perfectly with microservices because they promote stateless, disposable processes that can be independently deployed, scaled, and managed in containerized environments.
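Factor III (config in the environment) is the principle most visible in day-to-day Spring Boot work; a sketch, with illustrative variable names:

```yaml
# application.yml: the values come from environment variables, not from code,
# so the same image runs unchanged in dev, staging, and production.
spring:
  datasource:
    url: ${DATABASE_URL}            # e.g. jdbc:postgresql://db:5432/orders
    username: ${DATABASE_USER}
    password: ${DATABASE_PASSWORD}
server:
  port: ${PORT:8080}                # default 8080 if the variable is unset
```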

Containerization (1 question)

Q: How do you containerize a Spring Boot microservice using Docker, and what is a multi-stage build?

A: A multi-stage Docker build uses one stage to build the JAR with Maven/Gradle and a second stage with a slim JRE image to run it, resulting in a smaller, more secure production image.

A multi-stage Dockerfile uses a builder stage (e.g., maven:3-eclipse-temurin-21) to compile the application and produce the JAR, then copies only the JAR into a minimal runtime stage (e.g., eclipse-temurin:21-jre-alpine). This produces a much smaller image (typically 200-300MB vs 800MB+) by excluding build tools, source code, and intermediate artifacts. Spring Boot's layered JARs feature can further optimize Docker builds by separating dependencies into cacheable layers.
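A sketch of such a Dockerfile (image tags and paths are typical choices, not prescriptive):

```dockerfile
# Stage 1: build the JAR with a full JDK + Maven image
FROM maven:3-eclipse-temurin-21 AS build
WORKDIR /app
COPY pom.xml .
RUN mvn -q dependency:go-offline        # cache dependencies in their own layer
COPY src ./src
RUN mvn -q package -DskipTests

# Stage 2: run on a slim JRE-only image; build tools never ship to production
FROM eclipse-temurin:21-jre-alpine
WORKDIR /app
COPY --from=build /app/target/*.jar app.jar
EXPOSE 8080
ENTRYPOINT ["java", "-jar", "app.jar"]
```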

