Code-Alongs

Code-Along 1: Stacks and Queues

This code-along will guide you through implementing stacks and queues in Java, focusing on their operations, use cases, and implementation details.

What You'll Learn

  • Practical implementation of Stack and Queue data structures in Java
  • Understanding LIFO (Last-In-First-Out) and FIFO (First-In-First-Out) principles
  • Time and space complexity analysis of stack and queue operations
  • Real-world applications of stacks and queues
  • Common implementation patterns and best practices

Stacks are used when you need to handle data in a Last-In-First-Out manner, making them perfect for scenarios like tracking undo operations, managing function calls, and parsing expressions. Queues shine in First-In-First-Out scenarios like print job management, task scheduling, and breadth-first search algorithms.

During this code-along, you'll implement both data structures and see how their distinct behaviors serve different purposes in application design. You'll also learn how to choose among Java's built-in implementations based on your specific requirements.
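As a sketch of that choice among built-in types: java.util.ArrayDeque implements the Deque interface and can serve as both a stack and a queue, and its documentation recommends it over the legacy Stack class. A minimal example:

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.Queue;

public class DequeDemo {
    public static void main(String[] args) {
        // ArrayDeque as a stack (LIFO): push and pop operate on the head
        Deque<String> stack = new ArrayDeque<>();
        stack.push("a");
        stack.push("b");
        System.out.println(stack.pop()); // b

        // ArrayDeque as a queue (FIFO): offer at the tail, poll from the head
        Queue<String> queue = new ArrayDeque<>();
        queue.offer("a");
        queue.offer("b");
        System.out.println(queue.poll()); // a
    }
}
```

The same class gives amortized O(1) push, pop, offer, and poll, which is why it is usually the default pick unless you need thread safety or list-style access.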

Code Highlights

// Stack example - tracking application state (uses java.util.Stack)
Stack<String> undoStack = new Stack<>();
undoStack.push("action1");
undoStack.push("action2");

// Last action performed can be undone first
String lastAction = undoStack.pop(); // Returns "action2"

// Queue example - processing tasks in order (java.util.Queue backed by a LinkedList)
Queue<PrintJob> printQueue = new LinkedList<>();
printQueue.add(new PrintJob("doc1.pdf"));
printQueue.add(new PrintJob("doc2.pdf"));

// First job added will be processed first
PrintJob nextJob = printQueue.remove(); // Returns doc1.pdf job
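One of the FIFO use cases named above, breadth-first search, follows directly from queue behavior: nodes enter the queue in discovery order, so nearer nodes are always explored before farther ones. A minimal sketch over a hypothetical adjacency-list graph (the graph data and method names here are illustrative, not from the code-along):

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;
import java.util.Map;
import java.util.Queue;
import java.util.Set;

public class BfsDemo {
    // Breadth-first traversal: returns nodes in the order they are visited
    public static List<String> bfs(Map<String, List<String>> graph, String start) {
        List<String> order = new ArrayList<>();
        Set<String> visited = new HashSet<>();
        Queue<String> frontier = new ArrayDeque<>();
        frontier.add(start);
        visited.add(start);
        while (!frontier.isEmpty()) {
            String node = frontier.remove(); // FIFO: closest unvisited nodes come out first
            order.add(node);
            for (String neighbor : graph.getOrDefault(node, List.of())) {
                if (visited.add(neighbor)) { // add() returns false if already visited
                    frontier.add(neighbor);
                }
            }
        }
        return order;
    }

    public static void main(String[] args) {
        Map<String, List<String>> graph = Map.of(
            "A", List.of("B", "C"),
            "B", List.of("D"),
            "C", List.of("D"));
        System.out.println(bfs(graph, "A")); // [A, B, C, D]
    }
}
```

Swapping the queue for a stack here would turn the traversal into depth-first search, which is a quick way to see how the LIFO/FIFO distinction changes algorithm behavior.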

Code-Along 2: In-Memory Caching

This code-along will guide you through implementing an in-memory caching solution to improve application performance, focusing on caching strategies and implementation techniques.

What You'll Learn

  • Designing and implementing a custom in-memory cache in Java
  • Handling cache hits, misses, and evictions effectively
  • Implementing time-based expiration with TTL (Time-To-Live)
  • Choosing appropriate eviction policies based on use cases
  • Testing and optimizing cache performance

In-memory caching is a powerful technique for improving application performance by storing frequently accessed data in memory. This approach significantly reduces the need to perform expensive operations like database queries, API calls, or complex calculations repeatedly.

During this code-along, you'll build a custom caching solution that balances memory usage with performance gains. You'll learn to predict cache behavior and make informed decisions about cache size and TTL settings based on application requirements.

Code Highlights

// Basic cache implementation with TTL
import java.util.HashMap;
import java.util.Map;
import java.util.function.Function;

public class SimpleCache<K, V> {
    private final Map<K, CacheEntry<V>> cache = new HashMap<>();
    private final long defaultTtlMillis;

    public SimpleCache(long defaultTtlMillis) {
        this.defaultTtlMillis = defaultTtlMillis;
    }

    // Cache lookup: returns null on a miss or when the entry has expired
    public V get(K key) {
        CacheEntry<V> entry = cache.get(key);
        if (entry == null) {
            return null; // Cache miss
        }
        if (entry.isExpired()) {
            cache.remove(key);
            return null; // Expired entry
        }
        return entry.getValue(); // Cache hit
    }

    // Store a value, stamping it with the default TTL
    public void put(K key, V value) {
        cache.put(key, new CacheEntry<>(value, System.currentTimeMillis() + defaultTtlMillis));
    }

    // Cache miss handling: compute and store the value on demand
    public V getOrCompute(K key, Function<K, V> computeFunction) {
        V value = get(key);
        if (value == null) {
            value = computeFunction.apply(key);
            put(key, value);
        }
        return value;
    }

    // Entry wrapper holding the cached value and its absolute expiry time
    private static class CacheEntry<V> {
        private final V value;
        private final long expiresAtMillis;

        CacheEntry(V value, long expiresAtMillis) {
            this.value = value;
            this.expiresAtMillis = expiresAtMillis;
        }

        V getValue() {
            return value;
        }

        boolean isExpired() {
            return System.currentTimeMillis() > expiresAtMillis;
        }
    }
}
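TTL expiration is only one of the eviction policies mentioned above. For size-based eviction, a common least-recently-used (LRU) sketch builds on java.util.LinkedHashMap's access-order mode; this is an illustrative alternative, not the cache built in the code-along:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// A size-bounded LRU cache sketch built on LinkedHashMap's access order
public class LruCache<K, V> extends LinkedHashMap<K, V> {
    private final int maxEntries;

    public LruCache(int maxEntries) {
        // accessOrder = true moves each accessed entry to the back of the iteration order
        super(16, 0.75f, true);
        this.maxEntries = maxEntries;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        // Called after each put: returning true evicts the least recently used entry
        return size() > maxEntries;
    }

    public static void main(String[] args) {
        LruCache<String, Integer> cache = new LruCache<>(2);
        cache.put("a", 1);
        cache.put("b", 2);
        cache.get("a");    // touch "a" so "b" becomes least recently used
        cache.put("c", 3); // capacity exceeded: evicts "b"
        System.out.println(cache.keySet()); // [a, c]
    }
}
```

Choosing between TTL and LRU comes down to the access pattern: TTL suits data that goes stale over time, while LRU suits hot sets that fit in a fixed memory budget; production caches often combine both.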