
Frontend & Web Platform Concepts

One-paragraph explanations of key concepts.

Hydration

Hydration is the process where client-side JavaScript attaches event listeners and state to HTML that was rendered on the server, making the page interactive without re-rendering the entire DOM. The goal is to reuse existing markup to reduce load time while enabling full client behavior.

Partial hydration

Partial hydration delays or skips hydrating parts of the DOM that are not immediately interactive, reducing JavaScript execution cost. Only critical or user-interacted components are hydrated, improving startup performance.

Islands architecture

Islands architecture renders most of the page as static HTML while isolating interactive components ("islands") that hydrate independently. This minimizes JavaScript usage and scales interactivity more efficiently.

Streaming SSR

Streaming SSR sends HTML to the browser incrementally as it's generated, allowing the browser to begin parsing and rendering before the full response is complete. This improves time-to-first-paint and perceived performance.
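The idea can be sketched with a plain generator (not a real framework API): the static shell is produced first, so a server can flush it before the slow section has rendered. `renderSlowSection` is a hypothetical placeholder for the expensive part.

```javascript
// Sketch: yield HTML in order so the server can flush each chunk
// as soon as it is produced, instead of buffering the whole page.
function* renderPage(renderSlowSection) {
  yield "<html><body><header>Site</header>"; // shell: flushed immediately
  yield `<main>${renderSlowSection()}</main>`; // slow content: arrives later
  yield "</body></html>";
}

const html = [...renderPage(() => "content")].join("");
console.log(html);
// <html><body><header>Site</header><main>content</main></body></html>
```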

Concurrent rendering

Concurrent rendering allows rendering work to be paused, resumed, or abandoned to keep the UI responsive. It prioritizes user interactions over non-urgent updates.

Time slicing

Time slicing breaks rendering work into small chunks so the browser can handle user input between them. This prevents long blocking renders that cause jank.
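The slicing itself is framework-agnostic; a minimal sketch is splitting work into fixed-size chunks so the caller can yield to the browser (e.g. via `setTimeout` or a scheduler) between them:

```javascript
// Split a work list into slices; the caller processes one slice per
// event-loop turn so input events can run in between.
function* chunked(items, size) {
  for (let i = 0; i < items.length; i += size) {
    yield items.slice(i, i + size);
  }
}

const slices = [...chunked([1, 2, 3, 4, 5, 6, 7], 3)];
console.log(slices); // [[1, 2, 3], [4, 5, 6], [7]]
```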

Reconciliation algorithm

The reconciliation algorithm determines how UI state changes translate into DOM updates by comparing previous and next render trees. Efficient reconciliation minimizes unnecessary DOM mutations.

// Keys tell React which items are stable across re-renders
const list = items.map(item => (
  <li key={item.id}>{item.name}</li>
));
// Without keys, reordering causes unnecessary DOM mutations.

Fiber architecture

Fiber is a reimplementation of the reconciliation engine that represents rendering work as a linked data structure, enabling interruption, prioritization, and concurrency.

Virtual DOM diffing complexity

Virtual DOM diffing typically runs in linear time by comparing trees level-by-level and assuming stable structure. Keys help avoid expensive reordering operations.

Example: the O(n) diff assumes the tree shape stays similar between renders. Reordering a list without keys makes the algorithm match children by position instead of identity, so nodes that merely moved get torn down and rewritten—in the worst case most of the list is mutated even though nothing changed.

Structural sharing

Structural sharing reuses unchanged parts of data structures between versions, reducing memory usage and enabling cheap comparisons. It is common in immutable state systems.
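A minimal sketch of the pattern: replace only the branch that changed, and the untouched branches are shared by reference between versions.

```javascript
const state = { user: { name: "Ada" }, items: [1, 2, 3] };
// Update user.name; leave items alone.
const next = { ...state, user: { ...state.user, name: "Grace" } };

console.log(next.items === state.items); // true: the array is shared, not copied
console.log(next.user === state.user);   // false: this branch was replaced
```

This is why referential equality checks can cheaply detect which branches changed.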

Immutable data patterns

Immutable data patterns avoid mutating existing objects and instead create new versions. This simplifies change detection, undo/redo, and concurrency safety.

// Instead of mutating:
// state.items.push(newItem);
// Do:
setState({ ...state, items: [...state.items, newItem] });

Referential equality

Referential equality checks whether two references point to the same object in memory. Many optimizations rely on stable references to skip work.
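A quick sketch of reference vs. value comparison:

```javascript
const a = { x: 1 };
const b = { x: 1 };

console.log(a === b);             // false: equal contents, different objects
console.log(a === a);             // true: same reference
console.log(Object.is(NaN, NaN)); // true: Object.is treats NaN as equal to itself
console.log(NaN === NaN);         // false
```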

Memoization pitfalls

Memoization can backfire if dependencies are unstable or if memory usage grows unchecked. Incorrect dependency tracking can cause stale or incorrect results.

Example: React compares dependency arrays by reference (Object.is). If you put an object literal in the deps, e.g. { id: b }, a new object is created on every render, so the deps are always "different" and useMemo recomputes every time—giving you no benefit and extra overhead.

// BAD: new object every render → memo never helps
useMemo(() => compute(a), [a, { id: b }]);
// GOOD: stable deps
useMemo(() => compute(a), [a, b]);

Stale closure problem

Stale closures occur when a function captures variables from its surrounding scope at the time it was created, and later runs with those same (now outdated) values instead of the current ones. In JavaScript, a closure "closes over" the variables it references; it keeps the binding to whatever value they had when the function was created. In React, each render gets its own `count`, `props`, etc. If you start a timer or subscription in an effect with an empty dependency array, the callback you pass closes over the state from that first render only. When the timer fires later, it still sees the old value, so updates like setCount(count + 1) repeatedly apply to the same stale number.

Example: Step by step: (1) First render: count is 0. The effect runs once and sets an interval; the callback is "setCount(0 + 1)". (2) One second later the callback runs and sets count to 1. React re-renders with count = 1. (3) The effect has deps [], so it does not run again—the same interval is still active. That interval's callback was created in the first render and still closes over count = 0. (4) Every second the callback runs setCount(0 + 1) again, so you keep setting 1 and the display never goes to 2. The closure is "stale" because it's holding on to the old count. Fix: use the functional form setCount(c => c + 1) so you always use the latest state, or include count in the effect deps so the effect (and the interval callback) are recreated when count changes.

function Counter() {
  const [count, setCount] = useState(0);
  useEffect(() => {
    const id = setInterval(() => {
      setCount(count + 1); // Always sees initial 0 → count stuck at 1
    }, 1000);
    return () => clearInterval(id);
  }, []); // Missing count in deps
  return <span>{count}</span>;
}

// Fix 1: functional update — no closure over count
setCount((c) => c + 1);
// Fix 2: add count to deps — effect and interval recreated when count changes

Event loop (macro vs microtasks)

The event loop processes macrotasks (timers, I/O) and microtasks (Promises) in separate queues, with microtasks always running before rendering and the next macrotask. Understanding it is important because it determines the order in which your async code runs: microtasks (e.g. Promise.then, queueMicrotask) always run to completion before the next macrotask (setTimeout, I/O, UI events), and before the browser paints. That explains why "1, 4, 3, 2" happens in the classic example—and why long synchronous work or a long chain of microtasks can block the main thread, delay input handling, and cause jank. When debugging timing bugs, race conditions, or "why did my callback run after X?", the event loop model tells you what runs when.

Example: Without this model, setTimeout(fn, 0) and Promise.resolve().then(fn) look similar ("run later"), but they are scheduled differently: the microtask runs before the next render and before any pending timeouts. That affects when state updates appear on screen, when network responses are processed relative to user clicks, and why heavy work in a .then() can freeze the UI until it finishes.

console.log(1);
setTimeout(() => console.log(2), 0);
Promise.resolve().then(() => console.log(3));
console.log(4);
// Output: 1, 4, 3, 2

Task starvation

Task starvation occurs when high-priority or frequent tasks prevent lower-priority tasks from ever running, leading to delayed work or frozen UI.

Layout thrashing

Layout thrashing happens when reads and writes to layout-dependent properties are interleaved, forcing repeated synchronous reflows and hurting performance.

// BAD: read, write, read, write → one reflow per element
for (const el of elements) {
  const h = el.offsetHeight;       // read (forces reflow)
  el.style.height = h + 10 + "px"; // write
}

// BETTER: batch all reads, then all writes → minimal reflows
const heights = elements.map((el) => el.offsetHeight);
elements.forEach((el, i) => {
  el.style.height = heights[i] + 10 + "px";
});

Critical rendering path

The critical rendering path is the sequence of steps required to convert HTML, CSS, and JS into pixels. Shortening it improves initial render speed.

Render blocking resources

Render blocking resources, such as synchronous CSS or JS, delay painting until they are loaded and executed. Optimizing or deferring them improves performance.

Tree shaking internals

Tree shaking removes unused exports by analyzing static module graphs at build time. It relies on ES module semantics and side-effect detection.
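A common companion switch, as used by webpack and compatible bundlers: declaring a package side-effect-free in package.json lets the bundler drop whole modules whose exports are never imported (package name here is hypothetical).

```json
{
  "name": "my-lib",
  "sideEffects": false
}
```

Modules with import-time side effects (polyfills, global CSS) must be listed explicitly instead of `false`, or they will be removed.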

Code splitting strategies

Code splitting divides bundles into smaller chunks loaded on demand. Strategies include route-based, component-based, and interaction-based splitting.

Dynamic import chunking

Dynamic imports create separate chunks that load asynchronously. Chunk boundaries affect caching, waterfall behavior, and performance. Advantages: smaller initial bundle and faster time-to-interactive, since heavy or route-specific code is deferred; better long-term caching because changing one feature only invalidates its chunk; and users who never visit a feature never download it. Disadvantages: the first time a chunk is needed there is a network request and a delay before the code runs (waterfall), which can cause a loading flash or delayed interactivity; over-splitting creates many small requests and can increase total latency; and shared dependencies may be duplicated across chunks or require careful configuration to avoid loading the same library multiple times.

Example: Preload or prefetch key chunks (e.g. for the next likely route) to hide latency; keep critical path in the main bundle and chunk by route or heavy component; monitor chunk sizes and request count so you don’t trade one bottleneck for another.

// Chunk loaded when this runs
const Editor = lazy(() => import("./Editor"));

Module federation

Module federation allows multiple builds to share runtime-loaded modules, enabling independent deployment of micro-frontends without duplication.

Shadow DOM

Shadow DOM provides DOM encapsulation, preventing styles and structure from leaking in or out. It enables reusable, isolated components.

Example: Use Shadow DOM when you need style or DOM isolation: (1) Building Web Components or a design system that must look the same regardless of host page CSS. (2) Embedding widgets (e.g. chat, video player, payment form) that should not be affected by global styles or accidentally affect the page. (3) Third-party or user-generated content where you want to scope styles so they don’t clash with the app. (4) Reusable UI that will be dropped into many sites and you can’t control their stylesheets. Avoid it when you’re fully in control of the page and want shared theming, or when you need the inner DOM to be easily styled from outside (e.g. with utility classes or CSS variables).

// Page has: p { color: blue; } globally.
// Without shadow: your component's <p> would turn blue (page CSS wins).
const host = document.getElementById("host");
const shadow = host.attachShadow({ mode: "open" });
shadow.innerHTML = `
  <style>p { color: red; }</style>
  <p>I stay red</p>
`;
// Benefit: the shadow's <p> is only styled by the <style> inside the shadow.
// Page's "p { color: blue }" does not apply here; our "p { color: red }"
// does not affect the rest of the page.

Custom Elements lifecycle

Custom Elements define lifecycle callbacks such as connectedCallback and disconnectedCallback, allowing logic to run when elements are attached or removed.

class MyEl extends HTMLElement {
  connectedCallback() { /* added to DOM */ }
  disconnectedCallback() { /* removed; clean up */ }
  attributeChangedCallback() { /* attr change */ }
}
customElements.define("my-el", MyEl);

Web Components interoperability

Web Components interoperate across frameworks because they rely on browser standards. Frameworks must handle event and property mapping carefully.

Web Workers vs Service Workers

Web Workers run background scripts for computation, while Service Workers act as network proxies and lifecycle-managed offline handlers.

Example: Web Worker: offload CPU-heavy work (parsing, crypto, image processing) so the main thread stays responsive. Service Worker: intercept fetch requests to cache responses, serve offline, or rewrite URLs; runs independently of any tab.

// === Web Worker (computation off main thread) ===
// main.js
const worker = new Worker("/worker.js");
worker.postMessage({ data: bigArray });
worker.onmessage = (e) => console.log("Result:", e.data);

// worker.js
self.onmessage = (e) => {
  const result = heavyComputation(e.data.data);
  self.postMessage(result);
};

// === Service Worker (network proxy / offline) ===
// register
navigator.serviceWorker.register("/sw.js");

// sw.js
self.addEventListener("fetch", (event) => {
  event.respondWith(
    caches.match(event.request).then((cached) => cached || fetch(event.request))
  );
});

SharedArrayBuffer

SharedArrayBuffer enables shared memory between threads, allowing true parallelism. It requires cross-origin isolation for security reasons.
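A minimal sketch of the memory model (single thread here for brevity; in practice the buffer is posted to a worker and both sides operate on the same bytes):

```javascript
const sab = new SharedArrayBuffer(4); // room for one Int32
const view = new Int32Array(sab);

Atomics.store(view, 0, 41);
Atomics.add(view, 0, 1); // atomic read-modify-write, safe across threads
console.log(Atomics.load(view, 0)); // 42
```

In browsers this only works when the page is cross-origin isolated (COOP/COEP headers).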

Transferable objects

Transferable objects move ownership of memory between threads without copying, enabling fast data transfer between workers.
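With workers the usual form is `worker.postMessage(msg, [buffer])`; the detach semantics can be sketched with `structuredClone`, which accepts the same transfer list:

```javascript
const buf = new ArrayBuffer(8);
// Ownership of the bytes moves to the clone; nothing is copied.
const moved = structuredClone(buf, { transfer: [buf] });

console.log(buf.byteLength);   // 0: the original is now detached
console.log(moved.byteLength); // 8: the destination owns the memory
```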

OffscreenCanvas

OffscreenCanvas allows rendering to occur in a worker thread, offloading graphics work from the main thread.

WebAssembly integration

WebAssembly integrates low-level, high-performance code into web apps, often used for compute-heavy tasks while JS handles orchestration.

References: MDN: WebAssembly

Browser compositing layers

Compositing layers separate rendering into independently composited surfaces, enabling GPU-accelerated transforms and animations.

Paint vs composite vs layout

Layout calculates geometry, paint fills pixels, and composite assembles layers. Each stage has different performance costs.

GPU acceleration in CSS

GPU acceleration offloads transforms and opacity changes to the GPU, improving animation smoothness when used correctly.

Example: Promoting a layer: transform: translateZ(0) or will-change: transform. Avoid overusing will-change (memory cost).

CSS containment

CSS containment limits layout, paint, or style scope of an element, reducing rendering work and improving performance.

/* Isolate layout/paint so descendants don’t affect outside */
.card {
  contain: layout paint;
}

Subpixel rendering

Subpixel rendering improves text sharpness by leveraging RGB subpixels, but can be affected by transforms and compositing.

IntersectionObserver internals

IntersectionObserver batches visibility calculations asynchronously, avoiding costly scroll handlers and synchronous layout reads.

const observer = new IntersectionObserver((entries) => {
  entries.forEach((e) => {
    if (e.isIntersecting) loadLazyContent(e.target);
  });
}, { rootMargin: "50px" });

document.querySelectorAll(".lazy").forEach((el) => observer.observe(el));

ResizeObserver loop limits

ResizeObserver detects element size changes but enforces loop limits to prevent infinite resize-triggered feedback cycles.

MutationObserver cost

MutationObservers are asynchronous but can be expensive if observing large subtrees or frequent mutations.

IndexedDB

IndexedDB is a transactional, asynchronous key-value database built into browsers for structured client-side storage.

Service Worker lifecycle traps

Service Worker updates are delayed until old versions are unused, which can cause stale logic if lifecycle rules are misunderstood.

Example: New SW installs but waits in "waiting"; it only activates when all tabs using the old SW are closed, or skipWaiting() is called during install.

Cache invalidation strategies

Cache invalidation ensures users receive fresh content through versioning, revalidation, or explicit eviction.
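A common combination, sketched here with hypothetical filenames: put a content hash in asset URLs so new content gets a new URL and can be cached forever, while the HTML entry point stays on revalidation.

```
# index.html: always revalidated before use
Cache-Control: no-cache
ETag: "abc123"

# app.3f9a2c.js: content hash in the filename; a new build emits a new URL
Cache-Control: public, max-age=31536000, immutable
```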

Stale-while-revalidate

Stale-while-revalidate serves cached content immediately while fetching updates in the background for future requests.

Example: Cache-Control: max-age=60, stale-while-revalidate=300 — serve from cache for 60s, then allow stale for 300s while revalidating.

ETag vs Cache-Control

ETags enable validation via conditional requests, while Cache-Control defines freshness rules and caching behavior.

// Response: ETag: "abc123", Cache-Control: max-age=3600
// Next request: If-None-Match: "abc123" → 304 Not Modified

HTTP/3 and QUIC

HTTP/3 runs over QUIC, reducing latency by eliminating head-of-line blocking and improving connection migration.

References: HTTP/3 (web.dev)

Priority hints

Priority hints let developers signal resource importance to the browser, improving scheduling of network requests.
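The hint is the `fetchpriority` attribute (filenames below are illustrative):

```
<img src="hero.jpg" fetchpriority="high">
<script src="analytics.js" fetchpriority="low" defer></script>
```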

Preload vs Prefetch vs Preconnect

Preload fetches critical resources immediately, prefetch loads likely future resources, and preconnect establishes early connections.

<link rel="preload" href="critical.css" as="style">
<link rel="prefetch" href="next-page.js">
<link rel="preconnect" href="https://api.example.com">

CORS preflight

CORS preflight requests verify cross-origin permissions before sending certain requests, adding latency when misconfigured.

Example: Preflight (OPTIONS) is sent when the request uses non-simple methods/headers (e.g. PUT, custom headers). Server must respond with Access-Control-Allow-* that match the actual request.

References: MDN: CORS

CSRF vs XSS mitigation

CSRF mitigation focuses on request authenticity, while XSS mitigation focuses on preventing script injection and execution.

Content Security Policy (CSP)

CSP restricts allowed sources of scripts, styles, and other resources, reducing XSS attack surface.

Content-Security-Policy: default-src 'self'; script-src 'self' https://trusted.cdn.com;
References: MDN: CSP

Trusted Types

Trusted Types prevent DOM XSS by enforcing safe object creation for dangerous sinks like innerHTML.

// Policy creates TrustedHTML; innerHTML only accepts it
const policy = trustedTypes.createPolicy("default", {
  createHTML: (input) => DOMPurify.sanitize(input)
});
element.innerHTML = policy.createHTML(userInput);

DOM clobbering

DOM clobbering exploits name collisions between DOM elements and JS properties, potentially leading to security issues.

// If HTML has <form id="x"><input name="y"></form>,
// then x.y can refer to the input element, clobbering expected JS.

Prototype pollution

Prototype pollution injects properties into object prototypes, potentially altering application behavior globally.

// Malicious input: __proto__.isAdmin = true
// Merged into object: obj.isAdmin may become true for all objects.

Race conditions in UI state

Race conditions occur when async state updates resolve out of order, causing inconsistent UI.

// User clicks "Load A", then "Load B"; response B arrives first.
// Without cancellation/ignoring stale responses, A overwrites B.
// Fix: AbortController, request ID, or ignore outdated setState.

Tearing in concurrent UI

Tearing happens when different parts of the UI read inconsistent state during concurrent rendering.

Scheduler priorities

Schedulers assign priority levels to tasks, ensuring urgent interactions are handled before background work.

Render waterfalls

Render waterfalls occur when sequential resource dependencies delay rendering. Parallelization reduces them.

Suspense boundaries

Suspense boundaries define loading fallbacks and isolate async rendering delays to specific UI regions.

<Suspense fallback={<Spinner />}>
  <LazyComponent />
</Suspense>
References: React: Suspense

Selective hydration

Selective hydration hydrates components only when needed, often triggered by visibility or interaction.

Server components

Server components render on the server and never ship JS to the client, reducing bundle size and improving performance.

Edge rendering

Edge rendering executes rendering logic close to users geographically, reducing latency.

Micro-frontend orchestration

Micro-frontend orchestration coordinates multiple independently deployed frontend units into a cohesive app.

Finite state modeling

Finite state modeling represents UI behavior as explicit states and transitions, reducing ambiguity and bugs.
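A minimal sketch with a plain transition table (states and events here are hypothetical, modeling a fetch UI):

```javascript
const machine = {
  idle:    { FETCH: "loading" },
  loading: { RESOLVE: "success", REJECT: "error" },
  success: { FETCH: "loading" },
  error:   { FETCH: "loading" },
};

function transition(state, event) {
  return machine[state]?.[event] ?? state; // invalid events are no-ops
}

console.log(transition("idle", "FETCH"));   // "loading"
console.log(transition("idle", "RESOLVE")); // "idle": impossible transition ignored
```

Making invalid transitions impossible (rather than merely unlikely) is the main payoff.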

Event sourcing in frontend

Event sourcing stores state changes as a sequence of events, enabling replay and debugging.
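A sketch of the core mechanic: current state is a pure fold over the event log (event shapes are hypothetical), so any past state can be recovered by replaying a prefix.

```javascript
const events = [
  { type: "added",   item: "milk" },
  { type: "added",   item: "eggs" },
  { type: "removed", item: "milk" },
];

function replay(events) {
  return events.reduce((items, e) => {
    if (e.type === "added") return [...items, e.item];
    if (e.type === "removed") return items.filter((i) => i !== e.item);
    return items; // unknown events are ignored
  }, []);
}

console.log(replay(events)); // ["eggs"]
```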

Optimistic UI rollback strategy

Optimistic UI updates assume success and roll back state if operations fail, improving perceived responsiveness.

Example: On submit: update UI immediately, then revert and show error if the request fails.
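The mechanic can be sketched with pure helpers (state shape is hypothetical): keep a snapshot when applying the optimistic update, and restore it on failure.

```javascript
function applyOptimistic(state, item) {
  return { ...state, items: [...state.items, item], snapshot: state.items };
}
function rollback(state) {
  return { ...state, items: state.snapshot, snapshot: null };
}

const s0 = { items: ["a"], snapshot: null };
const s1 = applyOptimistic(s0, "b"); // UI shows "b" immediately
const s2 = rollback(s1);             // request failed: restore the snapshot
console.log(s2.items); // ["a"]
```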

Offline conflict resolution

Offline conflict resolution reconciles divergent state when connectivity is restored, often using timestamps or merges.

CRDT basics for collaboration

CRDTs are data structures that resolve conflicts automatically, enabling real-time collaboration without central coordination.
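The simplest example is a grow-only counter (G-Counter), sketched here: each replica increments only its own slot, and merge takes the per-replica maximum, so merges commute and converge regardless of order.

```javascript
function increment(counter, replica) {
  return { ...counter, [replica]: (counter[replica] ?? 0) + 1 };
}
function merge(a, b) {
  const out = { ...a };
  for (const [r, n] of Object.entries(b)) out[r] = Math.max(out[r] ?? 0, n);
  return out;
}
const value = (c) => Object.values(c).reduce((sum, n) => sum + n, 0);

const a = increment({}, "A");                 // replica A: one increment
const b = increment(increment({}, "B"), "B"); // replica B: two increments
console.log(value(merge(a, b))); // 3
console.log(value(merge(b, a))); // 3: merge order doesn't matter
```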

References: Yjs, Automerge

WebRTC

WebRTC enables peer-to-peer audio, video, and data communication directly in browsers.

References: MDN: WebRTC

Backpressure in streams API

Backpressure prevents producers from overwhelming consumers by signaling demand through the stream pipeline.

AbortController

AbortController provides a standard way to cancel async operations like fetch requests.

const controller = new AbortController();
fetch(url, { signal: controller.signal })
  .then((r) => r.json())
  .catch((e) => e.name === "AbortError" && "cancelled");

// Later:
controller.abort();

Streaming fetch response handling

Streaming fetch allows incremental consumption of response bodies, reducing memory usage and latency.

const res = await fetch(url);
const reader = res.body.getReader();
while (true) {
  const { done, value } = await reader.read();
  if (done) break;
  processChunk(value);
}

Browser memory leak detection

Memory leak detection involves profiling heap snapshots and tracking retained objects over time.

Detached DOM nodes

Detached DOM nodes remain in memory after removal if references persist, causing leaks.

const node = document.getElementById("old");
document.body.removeChild(node);
// If a closure or global still holds `node`, it stays in memory.

Garbage collection timing

Garbage collection is non-deterministic and can introduce pauses, impacting performance if allocations are excessive.

PerformanceObserver API

PerformanceObserver collects performance metrics asynchronously, enabling real-time monitoring.

Long tasks API

The Long Tasks API identifies main-thread tasks exceeding 50ms, helping diagnose jank.

First Input Delay (FID)

FID measures the delay between first user interaction and browser response, reflecting interactivity.

References: FID (web.dev)

Interaction to Next Paint (INP)

INP measures responsiveness across all interactions, replacing FID as a more comprehensive metric.

References: INP (web.dev)

Cumulative Layout Shift (CLS)

CLS measures unexpected layout movement, impacting visual stability.

References: CLS (web.dev)

Largest Contentful Paint (LCP)

LCP measures when the largest visible element finishes rendering, reflecting load performance.

References: LCP (web.dev)

Speculative prerendering

Speculative prerendering loads and renders pages in advance based on predicted navigation.

Priority inversion in async code

Priority inversion occurs when low-priority async work blocks high-priority tasks indirectly.

Deterministic rendering

Deterministic rendering ensures the same inputs always produce the same UI output.

Idempotent UI actions

Idempotent UI actions can be repeated without changing the final result, simplifying retries.
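A quick sketch of the distinction: "set quantity to 3" is idempotent, "add one" is not, so only the former can be retried blindly after a timeout.

```javascript
const setQty = (state, n) => ({ ...state, qty: n });
const addOne = (state) => ({ ...state, qty: state.qty + 1 });

const s = { qty: 0 };
console.log(setQty(setQty(s, 3), 3).qty); // 3: same result however many retries
console.log(addOne(addOne(s)).qty);       // 2: a retry double-applied the action
```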

Accessibility tree

The accessibility tree represents semantic UI information exposed to assistive technologies.

ARIA live regions internals

ARIA live regions notify assistive technologies of dynamic content changes asynchronously.

<div aria-live="polite" aria-atomic="true">
  {message}
</div>

Pointer events

Pointer events unify mouse, touch, and pen input under a single event model.