Master Java Streams API with 20 interview questions covering stream operations, collectors, parallel streams, flatMap, reduce, groupingBy, and performance considerations.
Below are all 20 questions covered in this quiz, grouped by topic. Each question includes the correct answer and a detailed explanation to help you prepare for your next interview.
What is the output of the following code?
List.of(1, 2, 3, 4, 5).stream()
    .filter(n -> n > 2)
    .findFirst()
    .orElse(0);

Answer: 3
The filter keeps elements greater than 2, resulting in a stream of [3, 4, 5]. findFirst() returns an Optional containing the first element, which is 3. orElse(0) unwraps the Optional, returning 3 since it is present.
What is the output of this reduce operation?
int result = List.of(1, 2, 3, 4, 5).stream()
    .reduce(0, (a, b) -> a + b);

Answer: 15
reduce(0, (a, b) -> a + b) starts with an identity value of 0, then accumulates by adding each element. The computation is: 0+1=1, 1+2=3, 3+3=6, 6+4=10, 10+5=15. This is equivalent to Integer::sum or IntStream's sum(). The first argument to reduce is the identity value — the starting point that is also returned if the stream is empty.
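As a quick sketch (the class and variable names are mine), the three equivalent ways to sum mentioned above, side by side:

```java
import java.util.List;

public class ReduceSumDemo {
    public static void main(String[] args) {
        List<Integer> nums = List.of(1, 2, 3, 4, 5);

        // Lambda accumulator, as in the question
        int a = nums.stream().reduce(0, (x, y) -> x + y);

        // Method reference: Integer::sum is the same accumulator
        int b = nums.stream().reduce(0, Integer::sum);

        // Primitive stream: mapToInt avoids boxing and exposes sum()
        int c = nums.stream().mapToInt(Integer::intValue).sum();

        System.out.println(a + " " + b + " " + c); // 15 15 15
    }
}
```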
What does this code return?
boolean result = List.of(2, 4, 6, 8, 9).stream()
    .allMatch(n -> n % 2 == 0);
boolean result2 = List.of(1, 3, 5, 7).stream()
    .anyMatch(n -> n % 2 == 0);
boolean result3 = List.of(1, 3, 5, 7).stream()
    .noneMatch(n -> n % 2 == 0);

Answer: false, false, true
allMatch() returns true only if every element satisfies the predicate — 9 is odd, so result is false. anyMatch() returns true if at least one element matches — no element in [1,3,5,7] is even, so result2 is false. noneMatch() returns true only if no element matches — no element in [1,3,5,7] is even, so result3 is true. All three are short-circuiting terminal operations: they stop processing as soon as the result is determined.
Which of the following correctly creates a Stream in Java?
Answer: All of the above (Stream.of(), Arrays.stream(), and the stream() method on a Collection all create streams)
Streams can be created in multiple ways: Stream.of() for varargs, Arrays.stream() for arrays, and the .stream() method on any Collection. Additionally, you can use Stream.generate(), Stream.iterate(), or IntStream.range() for other creation patterns.
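A short sketch of those creation patterns in one place (identifier names are illustrative):

```java
import java.util.Arrays;
import java.util.stream.IntStream;
import java.util.stream.Stream;

public class StreamCreationDemo {
    public static void main(String[] args) {
        // Varargs
        Stream<String> fromVarargs = Stream.of("a", "b", "c");

        // Array
        int[] arr = {1, 2, 3};
        IntStream fromArray = Arrays.stream(arr);

        // Infinite streams -- limit() makes them finite
        Stream<Integer> iterated = Stream.iterate(1, n -> n * 2).limit(4); // 1, 2, 4, 8
        Stream<Double> generated = Stream.generate(Math::random).limit(2);

        // Numeric range, end exclusive
        IntStream range = IntStream.range(0, 5); // 0, 1, 2, 3, 4

        System.out.println(fromVarargs.count()); // 3
        System.out.println(range.sum());         // 10
    }
}
```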
What is the key difference between intermediate and terminal operations in a Stream pipeline?
Answer: Intermediate operations return a new Stream and are lazy; terminal operations trigger execution and produce a result or side effect
Intermediate operations (map, filter, sorted, etc.) are lazy — they are not executed until a terminal operation (collect, forEach, reduce, count, etc.) is invoked. Intermediate operations return a new Stream, allowing chaining. Streams never modify the source collection.
What will this code print?
List.of(1, 2, 3, 4, 5).stream()
    .filter(n -> {
        System.out.println("filter: " + n);
        return n % 2 == 0;
    })
    .map(n -> {
        System.out.println("map: " + n);
        return n * 10;
    })
    .findFirst();

Answer:
filter: 1
filter: 2
map: 2
Streams process elements one at a time through the entire pipeline (not one operation at a time across all elements). Element 1 enters filter, fails, so it stops. Element 2 enters filter, passes, then enters map, producing 20. Since findFirst() is a short-circuiting terminal operation, the stream stops after finding the first matching element. This demonstrates both lazy evaluation and short-circuit behavior.
What will this code print?
Stream.of("a", "b", "c", "a", "b")
    .distinct()
    .count();

Answer: 3
distinct() removes duplicate elements from the stream based on equals(). The stream ["a", "b", "c", "a", "b"] becomes ["a", "b", "c"] after distinct(). count() returns the number of elements, which is 3.
What does the peek() operation do, and when should you use it?
List.of(1, 2, 3).stream()
    .peek(System.out::println)
    .map(n -> n * 2)
    .collect(Collectors.toList());

Answer: peek() is an intermediate operation that performs an action on each element without modifying it, mainly used for debugging
peek() is an intermediate operation that accepts a Consumer and performs an action on each element as it passes through the stream, without modifying it. It is primarily intended for debugging purposes — for example, logging elements at a certain point in the pipeline. It should not be used for side-effects in production code, as it may not be called for all elements in short-circuiting or parallel pipelines.
What is the result of this code?
List<List<String>> nested = List.of(
    List.of("a", "b"),
    List.of("c", "d"),
    List.of("e")
);
nested.stream()
    .flatMap(Collection::stream)
    .collect(Collectors.toList());

Answer: ["a", "b", "c", "d", "e"]
flatMap() takes each element (which is itself a List<String>), converts it to a stream via Collection::stream, and then flattens all those streams into a single stream. The result is a flat list of all strings: ["a", "b", "c", "d", "e"]. This is the classic use case for flatMap — flattening nested structures into a single level.
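Another common flatMap pattern, sketched below with illustrative names: splitting each string into pieces and flattening all the pieces into one stream.

```java
import java.util.Arrays;
import java.util.List;
import java.util.stream.Collectors;

public class FlatMapWordsDemo {
    public static void main(String[] args) {
        List<String> sentences = List.of("hello world", "java streams");

        // Each sentence maps to a stream of its words; flatMap merges
        // those per-sentence streams into a single stream of words
        List<String> words = sentences.stream()
                .flatMap(s -> Arrays.stream(s.split(" ")))
                .collect(Collectors.toList());

        System.out.println(words); // [hello, world, java, streams]
    }
}
```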
What does this code produce?
Map<Boolean, List<Integer>> result =
    List.of(1, 2, 3, 4, 5, 6).stream()
        .collect(Collectors.partitioningBy(n -> n % 2 == 0));

Answer: {false=[1, 3, 5], true=[2, 4, 6]}
Collectors.partitioningBy() always returns a Map<Boolean, List<T>> with exactly two entries: true for elements matching the predicate and false for those that don't. Unlike groupingBy, partitioningBy always produces both keys even if one group is empty. Here, even numbers go to true and odd numbers go to false.
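A minimal sketch of that difference (the predicate here is mine): even when no element matches, partitioningBy still produces both keys, while groupingBy does not.

```java
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

public class PartitionEmptyDemo {
    public static void main(String[] args) {
        // No element is negative, yet the true key still exists (empty list)
        Map<Boolean, List<Integer>> partitioned = List.of(1, 2, 3).stream()
                .collect(Collectors.partitioningBy(n -> n < 0));
        System.out.println(partitioned); // {false=[1, 2, 3], true=[]}

        // groupingBy over the same predicate only creates keys that occur
        Map<Boolean, List<Integer>> grouped = List.of(1, 2, 3).stream()
                .collect(Collectors.groupingBy(n -> n < 0));
        System.out.println(grouped); // {false=[1, 2, 3]}
    }
}
```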
What is the output?
Map<String, List<String>> result =
    List.of("apple", "avocado", "banana", "blueberry", "cherry").stream()
        .collect(Collectors.groupingBy(
            s -> s.substring(0, 1)
        ));

Answer: {a=[apple, avocado], b=[banana, blueberry], c=[cherry]}
Collectors.groupingBy() groups stream elements by a classifier function. Here, the classifier extracts the first character of each string. Elements sharing the same first letter are grouped into the same list. The result is a Map where keys are first letters and values are lists of matching strings.
What does Collectors.toMap() do when there are duplicate keys?
List.of("apple", "avocado", "banana").stream()
    .collect(Collectors.toMap(
        s -> s.charAt(0),
        s -> s
    ));

Answer: It throws an IllegalStateException for duplicate keys
By default, Collectors.toMap() throws an IllegalStateException when it encounters duplicate keys. To handle duplicates, you must provide a merge function as the third argument, e.g., Collectors.toMap(s -> s.charAt(0), s -> s, (v1, v2) -> v1) to keep the first value, or (v1, v2) -> v2 to keep the last.
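A sketch of both merge strategies on the same data (variable names are illustrative):

```java
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

public class ToMapMergeDemo {
    public static void main(String[] args) {
        List<String> words = List.of("apple", "avocado", "banana");

        // "apple" and "avocado" collide on key 'a'; the merge function decides
        Map<Character, String> firstWins = words.stream()
                .collect(Collectors.toMap(
                        s -> s.charAt(0),
                        s -> s,
                        (v1, v2) -> v1)); // keep the first value

        Map<Character, String> lastWins = words.stream()
                .collect(Collectors.toMap(
                        s -> s.charAt(0),
                        s -> s,
                        (v1, v2) -> v2)); // keep the last value

        System.out.println(firstWins.get('a')); // apple
        System.out.println(lastWins.get('a'));  // avocado
    }
}
```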
What does this code produce?
String result = List.of("Java", "Streams", "API").stream()
    .collect(Collectors.joining(", ", "[", "]"));

Answer: [Java, Streams, API]
Collectors.joining() concatenates stream elements into a single String. The three-argument version takes a delimiter (", "), a prefix ("["), and a suffix ("]"). Elements are joined with the delimiter, then wrapped with the prefix and suffix, producing "[Java, Streams, API]".
What does this downstream collector produce?
Map<String, Long> result =
    List.of("apple", "avocado", "banana", "blueberry", "cherry").stream()
        .collect(Collectors.groupingBy(
            s -> s.substring(0, 1),
            Collectors.counting()
        ));

Answer: {a=2, b=2, c=1}
The two-argument version of Collectors.groupingBy() accepts a classifier function and a downstream collector. Instead of collecting into lists (the default), Collectors.counting() counts the elements in each group. So 'a' has 2 elements (apple, avocado), 'b' has 2 (banana, blueberry), and 'c' has 1 (cherry). Other useful downstream collectors include summingInt(), averagingDouble(), mapping(), and toSet().
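Two of those other downstream collectors, sketched on the same data (the class name is mine):

```java
import java.util.List;
import java.util.Map;
import java.util.Set;
import java.util.stream.Collectors;

public class DownstreamDemo {
    public static void main(String[] args) {
        List<String> words = List.of("apple", "avocado", "banana", "blueberry", "cherry");

        // summingInt: total length of the words in each group
        Map<String, Integer> totalLengths = words.stream()
                .collect(Collectors.groupingBy(
                        s -> s.substring(0, 1),
                        Collectors.summingInt(String::length)));

        // mapping: transform each element before collecting it into a Set
        Map<String, Set<Integer>> lengthSets = words.stream()
                .collect(Collectors.groupingBy(
                        s -> s.substring(0, 1),
                        Collectors.mapping(String::length, Collectors.toSet())));

        System.out.println(totalLengths.get("a")); // 12 (apple=5 + avocado=7)
        System.out.println(lengthSets.get("b"));   // the set of lengths 6 and 9
    }
}
```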
What happens when you try to reuse a stream?
Stream<String> stream = List.of("a", "b").stream();
stream.forEach(System.out::println);
stream.forEach(System.out::println);

Answer: It throws an IllegalStateException on the second forEach
Streams in Java can only be consumed once. After a terminal operation (like forEach) has been invoked, the stream is considered closed. Attempting to use it again throws an IllegalStateException with the message "stream has already been operated upon or closed". To process the data again, you must create a new stream from the source.
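One common workaround, sketched here with illustrative names, is to hold a Supplier instead of a stream, so each terminal operation gets a fresh pipeline:

```java
import java.util.List;
import java.util.function.Supplier;
import java.util.stream.Stream;

public class StreamReuseDemo {
    public static void main(String[] args) {
        // The Supplier hands out a brand-new stream on every call
        Supplier<Stream<String>> streams = () -> List.of("a", "b").stream();

        long count = streams.get().count();          // consumes the first stream
        streams.get().forEach(System.out::println);  // a fresh second stream

        System.out.println(count); // 2
    }
}
```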
Which statement about parallel streams is correct?
List.of(1, 2, 3, 4, 5).parallelStream()
    .map(n -> n * 2)
    .collect(Collectors.toList());

Answer: Parallel streams use the common ForkJoinPool by default and may hurt performance for small datasets or I/O-bound tasks
Parallel streams use the common ForkJoinPool and split work across multiple threads. However, they come with overhead (thread coordination, data splitting, merging). For small datasets, simple operations, or I/O-bound work, sequential streams are often faster. Parallel streams do not guarantee encounter order unless you use forEachOrdered() or the collector preserves it. They are best suited for CPU-intensive operations on large datasets.
What is the problem with this code?
List<Integer> list = new ArrayList<>();
List.of(1, 2, 3, 4, 5).parallelStream()
    .map(n -> n * 2)
    .forEach(list::add);

Answer: ArrayList is not thread-safe, so concurrent modifications may cause lost updates or ArrayIndexOutOfBoundsException
Using forEach to add elements to a non-thread-safe collection from a parallel stream is a classic concurrency bug. Multiple threads may simultaneously call list.add(), causing race conditions, lost updates, or ArrayIndexOutOfBoundsException. The correct approach is to use .collect(Collectors.toList()), which handles thread safety internally using a combiner. Never use side-effects with parallel streams — use collectors instead.
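The safe version of the same pipeline is a one-line change, sketched below:

```java
import java.util.List;
import java.util.stream.Collectors;

public class ParallelCollectDemo {
    public static void main(String[] args) {
        // collect() gives each thread its own container and merges them with
        // a combiner, so no shared mutable state is involved
        List<Integer> doubled = List.of(1, 2, 3, 4, 5).parallelStream()
                .map(n -> n * 2)
                .collect(Collectors.toList());

        System.out.println(doubled); // [2, 4, 6, 8, 10] (encounter order preserved)
    }
}
```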
What is wrong with this reduce operation in a parallel stream?
int result = List.of(1, 2, 3, 4).parallelStream()
    .reduce(10, (a, b) -> a + b);

Answer: The identity value 10 is added in each parallel partition, producing an incorrect result greater than 20
In a parallel stream, reduce() applies the identity value to each partition. With identity 10, if the stream is split into partitions like [1,2] and [3,4], the computation is (10+1+2) + (10+3+4) = 30, not the expected 20. The identity value for addition must be 0, not 10. The identity must satisfy: combiner(identity, x) == x for all x. This is a subtle but critical bug that only manifests in parallel execution.
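The fixed version, as a sketch: use the true identity for addition inside the reduction and apply the offset once, outside it.

```java
import java.util.List;

public class ReduceIdentityDemo {
    public static void main(String[] args) {
        // 0 is the real identity for addition, so the result is the same
        // no matter how the parallel stream partitions the data
        int sum = List.of(1, 2, 3, 4).parallelStream()
                .reduce(0, Integer::sum);

        // Apply the offset exactly once, after the reduction
        int result = 10 + sum;

        System.out.println(result); // 20
    }
}
```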
What does this code do?
Stream.iterate(0, n -> n + 2)
    .limit(5)
    .collect(Collectors.toList());

Answer: [0, 2, 4, 6, 8]
Stream.iterate(seed, unaryOperator) creates an infinite stream starting with 0 and applying n -> n + 2 repeatedly: 0, 2, 4, 6, 8, ... The limit(5) operation short-circuits the infinite stream, taking only the first 5 elements. Without limit(), this would indeed run forever. Stream.iterate() and Stream.generate() are the two primary ways to create infinite streams in Java.
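Worth knowing as a follow-up: since Java 9, iterate() has a three-argument overload with a hasNext predicate, which produces a finite stream without limit(). A sketch:

```java
import java.util.List;
import java.util.stream.Collectors;
import java.util.stream.Stream;

public class BoundedIterateDemo {
    public static void main(String[] args) {
        // Stops as soon as the predicate n < 10 fails, so no limit() is needed
        List<Integer> evens = Stream.iterate(0, n -> n < 10, n -> n + 2)
                .collect(Collectors.toList());

        System.out.println(evens); // [0, 2, 4, 6, 8]
    }
}
```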
What is the output of this code using Optional with streams?
Optional<String> result = List.of("ant", "bear", "cat", "dog").stream()
    .filter(s -> s.length() > 3)
    .findAny();
result.ifPresent(System.out::println);

Answer: It always prints "bear"
The filter keeps elements with length > 3, which is only "bear" (length 4). findAny() returns an Optional containing a matching element. With a sequential stream, findAny() behaves like findFirst() and returns "bear". ifPresent() calls the consumer only if the Optional contains a value. Note: with parallelStream(), findAny() could return any matching element non-deterministically, but here only "bear" matches.