As QA automation engineers, we're experts at breaking applications. We know how to craft edge cases, simulate network failures, and stress test APIs. But there's one type of failure we often miss in our test suites: the partial breakdown. When your frontend receives unexpected data from the backend, does your entire page crash, or does it gracefully degrade? Most automation tests focus on happy paths and complete failures, but production incidents usually fall somewhere in between. A field is missing. A data type changed. An array came back empty. These contract violations don't crash your backend, they crash your UI. This guide shows you how to test that your application handles these real-world failures gracefully, using the testing tools and techniques you already know.
If you've been testing web applications for any length of time, you've seen it: a user reports that the entire page went blank, but your test suite is green. Everything passed. No failed assertions. No caught exceptions. Yet somehow, in production, one bad API response took down the whole interface.
This happens because of how modern frontend frameworks handle rendering errors. When a component throws while rendering and nothing catches the error, it doesn't fail gracefully by default. React, for example, unmounts the entire component tree. One broken widget destroys the entire page.
Error boundaries are the frontend's answer to this problem. Think of them as try-catch blocks, but for UI rendering instead of code execution. They catch errors in a specific section of the page and prevent those errors from destroying everything else.
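You won't need to write one yourself, but it helps to see roughly what developers wire up. Here's a minimal sketch of a React-style boundary (the same class API referenced later in this article); the component name and fallback markup are illustrative, not taken from any particular codebase:
import React from 'react';

// A minimal error boundary: catches render errors in its children
// and swaps in a fallback instead of letting the whole tree unmount.
class SectionBoundary extends React.Component {
  constructor(props) {
    super(props);
    this.state = { hasError: false };
  }

  static getDerivedStateFromError() {
    // Something below this component threw while rendering
    return { hasError: true };
  }

  componentDidCatch(error, errorInfo) {
    // Log or report; the rest of the page keeps rendering
    console.error('Boundary caught:', error, errorInfo);
  }

  render() {
    if (this.state.hasError) {
      return <div data-testid="profile-error">Unable to load this section.</div>;
    }
    return this.props.children;
  }
}

// Usage: wrap each independent section of the page
// <SectionBoundary><UserProfile /></SectionBoundary>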
Let me show you what this looks like from a testing perspective. Imagine you're testing an analytics dashboard with half a dozen independent sections: a user profile, a revenue chart, a transactions table, the navigation, and a couple of other widgets.
Your backend team deploys a change to the user profile API. They renamed a field from userName to displayName. The frontend code expects userName, doesn't find it, and throws an error trying to render it.
Without error boundaries: The entire page turns blank. All six sections disappear. The user sees a white screen. Your navigation is gone, so they can't even get back to the home page. They have to refresh the browser.
With error boundaries: Only the user profile section shows an error message like "Unable to load profile data." The navigation still works. The revenue chart still renders. The transactions table still loads. The user can continue working with five out of six features.
From a QA perspective, these are two completely different test scenarios. In the first case, you'd assert that the page failed to load. In the second case, you'd assert that most of the page loaded correctly and only one section showed an error state.
You don't need to understand framework internals to test error boundaries. You just need to verify observable behavior. Here's what a basic error boundary test checks:
// A typical error boundary test (Playwright)
test('dashboard handles profile API failure gracefully', async ({ page }) => {
  // Simulate a contract violation: valid JSON, but the expected field is missing
  await page.route('/api/user/profile', route => {
    route.fulfill({
      status: 200,
      contentType: 'application/json',
      body: JSON.stringify({ displayName: 'John' }) // Missing expected 'userName' field
    });
  });

  // Navigate to the page
  await page.goto('/dashboard');

  // Assert: Error section is visible
  await expect(page.locator('[data-testid="profile-error"]')).toBeVisible();

  // Assert: Other sections still work
  await expect(page.locator('[data-testid="revenue-chart"]')).toBeVisible();
  await expect(page.locator('[data-testid="transactions-table"]')).toBeVisible();
  await expect(page.locator('nav')).toBeVisible();

  // Assert: Navigation still functions
  await page.click('nav >> text=Settings');
  await expect(page).toHaveURL('/settings');
});
Notice what we're testing here. We're not checking if an error was thrown in the JavaScript console. We're not verifying framework error boundary lifecycle methods. We're checking the user experience: does the error message appear, and do the other sections keep working?
Most QA automation focuses on binary outcomes. Either the page loads or it doesn't. Either the API returns 200 or it returns 500. Either the test passes or it fails.
Error boundaries introduce a third state: partial failure. The API might return 200 with unexpected data. The page might load with some sections broken and others working. Your test might need to assert both success and failure conditions simultaneously.
This is closer to how production actually fails. Your backend team won't usually deploy changes that return 500 errors (those get caught in their tests). They'll deploy changes that return 200 with slightly different data structures. A field gets renamed. A property becomes nullable. An array that always had items is now sometimes empty.
These are contract violations, not crashes. And they're what error boundaries are designed to handle.
Here's an example scenario. An e-commerce application had a product listing page with filters in a sidebar. The backend team added a new filter type and changed the filters API response from:
{
"filters": [
{ "id": "category", "label": "Category", "options": [...] },
{ "id": "price", "label": "Price Range", "options": [...] }
]
}
To:
{
"filters": [
{ "id": "category", "name": "Category", "options": [...] },
{ "id": "price", "name": "Price Range", "options": [...] }
]
}
Notice the change? label became name. The frontend code expected label, didn't find it, and crashed trying to render the filters.
Without error boundaries: The entire product page went blank. Users couldn't see products, couldn't search, couldn't navigate. Revenue dropped immediately.
With error boundaries: The filter sidebar showed an error message. The product grid still rendered. Search still worked. Users could browse products without filters. Revenue was impacted, but not destroyed.
The existing automation tests didn't catch this because they only tested the happy path where the API returned the expected structure. There were no tests that said "if the filters API returns unexpected data, the product grid should still work."
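One focused test would have caught it. Here's a sketch in Playwright; the /api/filters endpoint, the /products URL, and the filters-error, product-grid, and search-input test IDs are assumptions for illustration:
test('product grid survives a broken filters contract', async ({ page }) => {
  // Replay the changed response shape: 'label' renamed to 'name'
  await page.route('/api/filters', route => {
    route.fulfill({
      status: 200,
      contentType: 'application/json',
      body: JSON.stringify({
        filters: [
          { id: 'category', name: 'Category', options: [] },
          { id: 'price', name: 'Price Range', options: [] }
        ]
      })
    });
  });

  await page.goto('/products');

  // The filter sidebar degrades to its error fallback
  await expect(page.locator('[data-testid="filters-error"]')).toBeVisible();

  // The product grid and search keep working
  await expect(page.locator('[data-testid="product-grid"]')).toBeVisible();
  await expect(page.locator('[data-testid="search-input"]')).toBeVisible();
});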
When you're testing error boundaries, you're verifying three things: the error is visible to the user, the failure is contained to its own section, and the rest of the page keeps working.
These are all behavioral assertions you can make without touching frontend code. You're testing the page as a user experiences it, not as a developer built it.
Most QA automation engineers have written tests that force errors. You mock a function to throw an exception. You stub an API to return a 500 status. You inject invalid data at the code level. The test fails as expected, you verify the error message appears, and you move on feeling confident that error handling works.
Here's the problem: these tests often pass while production burns.
I've seen it repeatedly. A well-tested application with 80% code coverage and comprehensive error handling tests still crashes in production when the backend returns unexpected data. The tests said everything was fine. The error boundaries were in place. But users saw blank screens anyway.
Why? Because we were testing the wrong kind of errors.
Let's look at a common testing pattern. You want to verify that your dashboard handles API failures gracefully, so you write something like this:
test('shows error message when API fails', async () => {
// Mock the API to throw an error
await page.route('/api/dashboard', route => {
throw new Error('API failed');
});
await page.goto('/dashboard');
await expect(page.locator('.error-message')).toBeVisible();
});
This test will likely fail before it even gets to your assertion. The route handler itself throws an error, which often causes the test runner to crash or the network interception to malfunction. You're forcing an error at the wrong layer.
Even if you fix it by returning a proper error response:
test('shows error message when API fails', async () => {
await page.route('/api/dashboard', route => {
route.fulfill({ status: 500, body: 'Internal Server Error' });
});
await page.goto('/dashboard');
await expect(page.locator('.error-message')).toBeVisible();
});
This is better, but it's still not how production fails. When was the last time your backend returned an unhandled 500 error? Your backend team has error handling too. Their tests catch these scenarios. What they don't catch are the subtle contract violations.
Here are some production incidents where error boundaries should have helped:
Incident 1: The Optional Field
A backend developer changed a required field to optional. The API still returned 200. The response was valid JSON. But the frontend assumed the field would always exist.
// Old response
{
"user": {
"id": 123,
"name": "John Doe",
"email": "john@example.com"
}
}
// New response (email is now optional)
{
"user": {
"id": 123,
"name": "John Doe"
}
}
The frontend tried to render user.email.toLowerCase() and crashed with "Cannot read property 'toLowerCase' of undefined." The entire profile page went blank.
The existing tests all mocked the API with complete data. They never tested the scenario where optional fields were actually missing.
Incident 2: The Type Change
A price field changed from a number to a string to support multiple currencies.
// Old response
{ "price": 29.99 }
// New response
{ "price": "$29.99 USD" }
The frontend code did math operations on the price: price * quantity. JavaScript didn't throw an error. It just returned NaN. The shopping cart showed "$NaN USD" for the total. Users couldn't check out.
The tests mocked prices as numbers because that's what the contract said. Nobody tested what happens when the backend violates that contract.
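A test for this incident only has to feed the string price through and confirm the user never sees NaN. A sketch, assuming Playwright, a hypothetical /api/cart endpoint, and a cart-error test ID on the pricing fallback:
test('cart never shows NaN when price comes back as a string', async ({ page }) => {
  await page.route('/api/cart', route => {
    route.fulfill({
      status: 200,
      contentType: 'application/json',
      body: JSON.stringify({
        items: [{ id: 1, name: 'Widget', price: '$29.99 USD', quantity: 2 }]
      })
    });
  });

  await page.goto('/cart');

  // With a boundary around pricing, the section shows a fallback instead of broken math
  await expect(page.locator('[data-testid="cart-error"]')).toBeVisible();

  // And nowhere on the page should the user see NaN
  await expect(page.locator('body')).not.toContainText('NaN');
});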
Incident 3: The Empty Array
A recommendations API always returned at least one item in testing, so the frontend code assumed the array would never be empty.
// Frontend code assumed recommendations[0] always exists
const topRecommendation = recommendations[0];
return <h3>{topRecommendation.title}</h3>; // JSX wrapper element is illustrative
In production, for new users with no browsing history, the array was empty. The page crashed trying to access recommendations[0].title.
The test data always included recommendations. Nobody thought to test the empty state.
Many test suites include checks like this:
test('no console errors during checkout flow', async () => {
const consoleErrors = [];
page.on('console', msg => {
if (msg.type() === 'error') {
consoleErrors.push(msg.text());
}
});
await completeCheckoutFlow();
expect(consoleErrors).toHaveLength(0);
});
This seems reasonable. Errors are bad. Tests that produce errors should fail. But this creates two problems:
Problem 1: Silent Failures Pass
Consider code that catches and swallows errors:
// Component silently handles errors
try {
const data = await fetchData();
renderData(data);
} catch (error) {
// Error caught, nothing logged
showLoadingSpinner(); // User stuck forever
}
No console error is logged. Your assertion passes. But the user sees an infinite loading spinner. The UI is completely broken.
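A behavioral assertion catches this where a console check can't: after the flow runs, the user must see either data or an explicit error, never an endless spinner. A sketch with hypothetical endpoints and test IDs:
test('checkout never strands the user on a loading spinner', async ({ page }) => {
  await page.route('/api/checkout/summary', route => {
    // Valid JSON, but not the shape the page expects
    route.fulfill({
      status: 200,
      contentType: 'application/json',
      body: JSON.stringify({ summary: null })
    });
  });

  await page.goto('/checkout');

  // Within a reasonable window the spinner must give way to something
  await expect(page.locator('[data-testid="loading-spinner"]')).toBeHidden({ timeout: 10000 });

  // Either the real content or an error fallback, whichever the page chose to render
  await expect(
    page.locator('[data-testid="order-summary"], [data-testid="summary-error"]')
  ).toBeVisible();
});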
Problem 2: Proper Error Handling Fails
Now consider code that handles errors correctly but logs them for monitoring:
// Error boundary logs for observability
componentDidCatch(error, errorInfo) {
console.error('Boundary caught:', error);
reportToMonitoring(error);
this.setState({ hasError: true }); // Shows fallback UI
}
An error is logged to the console. Your assertion fails. But the user sees a helpful error message, can still navigate the site, and the error was reported to your monitoring system. This is exactly what you want to happen.
The console error check failed a good implementation and passed a broken one.
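If you do keep a console check, at least scope it so it punishes silent breakage rather than deliberate boundary logging. One possible approach, assuming your boundaries log with a known prefix like the 'Boundary caught:' example above:
const consoleErrors = [];
page.on('console', msg => {
  const text = msg.text();
  // Ignore errors our own boundaries log on purpose; the prefix is a team convention
  if (msg.type() === 'error' && !text.startsWith('Boundary caught:')) {
    consoleErrors.push(text);
  }
});

// ...run the flow, then:
expect(consoleErrors).toHaveLength(0);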
When you force errors in your test code, you tightly couple your tests to implementation details. Here's an example:
test('handles user fetch error', async () => {
// Mock the fetchUser function to throw
await page.evaluate(() => {
window.fetchUser = () => Promise.reject(new Error('Failed'));
});
await page.goto('/profile');
await expect(page.locator('.error-message')).toBeVisible();
});
This test knows too much about how the page works internally. It knows there's a fetchUser function. It knows that function is called on page load. It knows the error should trigger a specific error message element.
Now the developer refactors. They rename fetchUser to getUserData. They change when it's called. They update the error message styling. Your test breaks, but the actual error handling still works perfectly.
You're testing implementation, not behavior.
The question isn't "does this code throw an error?" The question is "when the backend sends unexpected data, does the UI handle it gracefully?"
That's a completely different test:
test('handles missing email field in profile', async () => {
// Intercept the API and return valid JSON with missing field
await page.route('/api/user/profile', route => {
route.fulfill({
status: 200,
contentType: 'application/json',
body: JSON.stringify({
user: {
id: 123,
name: 'John Doe'
// email field missing
}
})
});
});
await page.goto('/profile');
// The profile section should show an error
await expect(page.locator('[data-testid="profile-error"]')).toBeVisible();
// But navigation should still work
await expect(page.locator('nav')).toBeVisible();
await page.click('nav >> text=Settings');
await expect(page).toHaveURL('/settings');
});
This test returns valid JSON the way a real backend might, knows nothing about internal function names or when they're called, asserts on what the user actually sees, and verifies that the rest of the page keeps working.
Here's the trap: tests that force errors give you confidence that error handling code exists. But they don't tell you if that error handling works for the failures that actually happen in production.
You can have 100% coverage of your error handling code and still have users experiencing blank screens. Your tests verify that when you call throwError(), an error appears. They don't verify that when the backend changes userName to displayName, your application handles it gracefully.
This is why typical error testing approaches fall short. They test artificial failures instead of real contract violations. They assert on technical signals like console output instead of user experience. They couple tightly to implementation instead of testing behavior.
An API contract is an agreement between your frontend and backend about what data will be exchanged. The backend promises to send data in a specific shape, and the frontend promises to handle it. When either side breaks that promise, things fall apart.
From a QA perspective, you don't need to read OpenAPI specs or understand schema validators to test contracts. You just need to know: what does the frontend expect, and what happens when it gets something different?
Here are the most common contract breaks that trigger error boundaries in production:
Missing Required Fields
The frontend expects user.email, but the backend doesn't send it:
// Expected
{ "user": { "id": 123, "name": "John", "email": "john@example.com" } }
// Actually received
{ "user": { "id": 123, "name": "John" } }
Wrong Data Types
The frontend expects a number, gets a string:
// Expected
{ "price": 29.99, "quantity": 5 }
// Actually received
{ "price": "29.99", "quantity": "5" }
Null Values
The frontend expects an object, gets null:
// Expected
{ "profile": { "avatar": "url", "bio": "text" } }
// Actually received
{ "profile": null }
Empty Arrays
The frontend assumes the array has items:
// Expected (and assumed to have at least one item)
{ "recommendations": [{ "id": 1, "title": "Product A" }] }
// Actually received
{ "recommendations": [] }
These are all valid JSON responses with 200 status codes. Your backend tests pass. Your contract might even say these fields are optional. But the frontend crashes anyway.
The key is intercepting the API response and modifying it before it reaches the frontend. Here's how with different tools:
Playwright
test('handles missing email field', async ({ page }) => {
await page.route('/api/user', route => {
route.fulfill({
status: 200,
contentType: 'application/json',
body: JSON.stringify({
user: {
id: 123,
name: 'John Doe'
// email intentionally missing
}
})
});
});
await page.goto('/profile');
await expect(page.locator('[data-testid="profile-error"]')).toBeVisible();
});
Cypress
it('handles wrong data type for price', () => {
cy.intercept('GET', '/api/product/*', {
statusCode: 200,
body: {
id: 456,
name: 'Widget',
price: '29.99' // String instead of number
}
});
cy.visit('/product/456');
cy.get('[data-testid="product-error"]').should('be.visible');
});
MSW (Mock Service Worker)
import { render, screen } from '@testing-library/react';
import { rest } from 'msw';
import { setupServer } from 'msw/node';
const server = setupServer(
rest.get('/api/recommendations', (req, res, ctx) => {
return res(
ctx.status(200),
ctx.json({
recommendations: [] // Empty array
})
);
})
);
test('handles empty recommendations', async () => {
server.listen();
render(<Recommendations />); // component under test; the name is illustrative
await screen.findByText(/no recommendations/i);
expect(screen.queryByTestId('recommendation-item')).not.toBeInTheDocument();
server.close();
});
You don't need to test every possible contract violation. Focus on the breaks most likely to happen and most costly when they do: missing fields, changed types, null values, and empty arrays in the APIs behind your most important flows.
Start with one or two critical user flows. Test what happens when the most important APIs return unexpected data. A checkout flow with bad price data. A dashboard with missing user info. A product page with null inventory.
Every contract violation test follows the same pattern:
// 1. Intercept the API
await page.route('/api/endpoint', route => {
route.fulfill({
status: 200,
body: JSON.stringify({
// 2. Return valid JSON with contract violation
// Missing field, wrong type, null value, empty array, etc.
})
});
});
// 3. Trigger the flow that calls this API
await page.goto('/some-page');
await page.click('some-button');
// 4. Assert the error boundary caught it
await expect(page.locator('.error-fallback')).toBeVisible();
// 5. Assert other sections still work
await expect(page.locator('nav')).toBeVisible();
await expect(page.locator('.unaffected-section')).toBeVisible();
The beauty of this approach is that you're not testing code, you're testing data. You're simulating what the backend actually sends when contracts break. These are the failures your users encounter in production, and now you're catching them in your test suite.
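One way to scale this pattern is to keep a known-good fixture per endpoint and derive the violations from it, so the only thing each test changes is the data. A sketch of a tiny helper; the helper name and fixture are mine, not from any library:
// A known-good response, e.g. recorded from the real API
const goodProfile = {
  user: { id: 123, name: 'John Doe', email: 'john@example.com' }
};

// Clone the fixture and delete one field, given a dot-separated path
function withMissingField(fixture, path) {
  const copy = structuredClone(fixture);
  const keys = path.split('.');
  const last = keys.pop();
  const parent = keys.reduce((obj, key) => obj[key], copy);
  delete parent[last];
  return copy;
}

// Usage inside a test:
// await page.route('/api/user/profile', route => route.fulfill({
//   status: 200,
//   contentType: 'application/json',
//   body: JSON.stringify(withMissingField(goodProfile, 'user.email'))
// }));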
Once you've simulated a contract violation, what should you actually verify? This is where many QA engineers default to checking that an error occurred. But that's not enough. Resilient behavior means the application continues to function despite the error.
Every error boundary test should verify three things:
1. The error is visible to the user
// User sees a clear error message, not a blank screen
await expect(page.locator('[data-testid="profile-error"]')).toBeVisible();
await expect(page.locator('[data-testid="profile-error"]'))
.toContainText('Unable to load profile');
2. The error is contained to its own section
// The broken component is contained
await expect(page.locator('[data-testid="user-profile"]'))
.not.toBeVisible();
// But the error boundary fallback shows instead
await expect(page.locator('[data-testid="profile-error"]'))
.toBeVisible();
3. The rest of the page still works
// Navigation is functional
await expect(page.locator('nav')).toBeVisible();
await page.click('nav >> text=Dashboard');
await expect(page).toHaveURL('/dashboard');
// Other sections loaded correctly
await expect(page.locator('[data-testid="recent-orders"]')).toBeVisible();
await expect(page.locator('[data-testid="recommendations"]')).toBeVisible();
Here's what a full resilience test looks like:
test('product page handles invalid price data gracefully', async ({ page }) => {
// Setup: Break the contract
await page.route('/api/product/123', route => {
route.fulfill({
status: 200,
body: JSON.stringify({
id: 123,
name: 'Laptop',
price: 'INVALID', // Type violation
description: 'A great laptop',
inStock: true
})
});
});
// Trigger: Navigate to the page
await page.goto('/product/123');
// Assert 1: Error message is shown
await expect(page.locator('[data-testid="price-error"]'))
.toBeVisible();
await expect(page.locator('[data-testid="price-error"]'))
.toContainText('Unable to display pricing');
// Assert 2: Pricing section is replaced with error fallback
await expect(page.locator('[data-testid="add-to-cart"]'))
.not.toBeVisible();
// Assert 3: Rest of the page works
await expect(page.locator('h1')).toContainText('Laptop');
await expect(page.locator('[data-testid="description"]'))
.toContainText('A great laptop');
await expect(page.locator('[data-testid="stock-status"]'))
.toContainText('In Stock');
// Assert 4: Navigation still functions
await page.click('nav >> text=Home');
await expect(page).toHaveURL('/');
});
Do assert on what the user experiences: the error fallback is visible and readable, the unaffected sections still render, and navigation and other critical actions keep working.
Don't assert on implementation signals: console output, internal function names, framework lifecycle methods, or the specific CSS classes of the fallback.
If your error fallback includes actions like "Try Again" or "Report Issue", test those too:
test('user can retry after error', async ({ page }) => {
let attempt = 0;
await page.route('/api/data', route => {
attempt++;
if (attempt === 1) {
// First attempt: bad data
route.fulfill({
status: 200,
body: JSON.stringify({ data: null })
});
} else {
// Second attempt: good data
route.fulfill({
status: 200,
body: JSON.stringify({ data: { value: 42 } })
});
}
});
await page.goto('/dashboard');
// Error shows initially
await expect(page.locator('[data-testid="data-error"]')).toBeVisible();
// Click retry
await page.click('[data-testid="retry-button"]');
// Data loads successfully
await expect(page.locator('[data-testid="data-error"]'))
.not.toBeVisible();
await expect(page.locator('[data-testid="data-value"]'))
.toContainText('42');
});
Error boundary tests can be flaky if you're not careful. Here's how to keep them stable:
Wait for the error state, don't assume timing:
// Bad: Race condition
await page.goto('/profile');
await expect(page.locator('.error')).toBeVisible(); // Might check too early
// Good: Wait for the API call to complete
await page.goto('/profile', { waitUntil: 'networkidle' });
await expect(page.locator('.error')).toBeVisible();
Use stable test IDs, not styling classes:
// Bad: Breaks if styling changes
await expect(page.locator('.bg-red-500.text-white.p-4')).toBeVisible();
// Good: Explicit test identifier
await expect(page.locator('[data-testid="profile-error"]')).toBeVisible();
Assert existence before asserting content:
// Ensure element exists before checking its text
const errorMsg = page.locator('[data-testid="error-message"]');
await expect(errorMsg).toBeVisible();
await expect(errorMsg).toContainText('Unable to load');
Each test should verify one type of contract violation. Don't try to test everything at once:
// Bad: Testing too much
test('handles all possible errors', async ({ page }) => {
// Tests missing field, wrong type, null value, empty array...
// 100 lines of assertions
});
// Good: One violation per test
test('handles missing user email field', async ({ page }) => { /* ... */ });
test('handles null profile object', async ({ page }) => { /* ... */ });
test('handles empty recommendations array', async ({ page }) => { /* ... */ });
When a test fails, you should immediately know which contract was violated and what behavior broke. Clear, focused tests make debugging faster.
You've learned how to test error boundaries through contract violations. Now the question becomes: where do these tests belong, and how many do you actually need?
Error boundary tests sit in a specific place in your testing pyramid:
Not Component Tests: Component tests run too fast and in too much isolation. They mock everything. You want real network interception and real browser rendering to see how errors propagate through the actual component tree.
Not Contract Tests: Contract tests verify that frontend and backend agree on the schema. Error boundary tests assume the contract is already broken and verify the UI handles it gracefully. These are complementary, not overlapping.
E2E Tests (But Selective Ones): These belong in your E2E suite, but not every E2E test needs error scenarios. Focus on critical user journeys where partial failures matter most.
You don't need comprehensive coverage. Here's a practical approach:
Start with critical paths (3-5 tests): the user journeys where a partial failure costs the most, such as checkout, signup, and your main dashboard.
Add one test per error boundary: If developers added an error boundary around a section, there should be at least one test that verifies it catches errors in that section.
Focus on recently changed APIs: When backends deploy changes, add a temporary test that simulates the old contract to catch breaking changes. Remove it after a few sprints if no issues arise.
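For example, if the filters API from earlier just migrated label to name, a short-lived test can replay the old shape so a rollback or a stale cache doesn't blank the sidebar. A sketch reusing the same hypothetical endpoint and selectors:
// TEMPORARY: remove once the label -> name migration has been stable for a few sprints
test('filter sidebar tolerates the pre-migration response shape', async ({ page }) => {
  await page.route('/api/filters', route => {
    route.fulfill({
      status: 200,
      contentType: 'application/json',
      body: JSON.stringify({
        filters: [{ id: 'category', label: 'Category', options: [] }] // old contract
      })
    });
  });

  await page.goto('/products');

  // Worst case the sidebar degrades, but the grid stays up
  await expect(page.locator('[data-testid="product-grid"]')).toBeVisible();
});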
For each critical flow, ask these questions: Which APIs does it depend on? What does the frontend assume about each response (required fields, types, non-empty arrays)? If one of those assumptions breaks, which parts of the page must keep working?
For a checkout flow, that checklist might include a price that comes back as a string, an empty shipping-options array, or a null saved address. Turn each "what if" into one focused test.
Keep resilience tests separate and clearly labeled, for example in their own spec files or behind a shared tag; one possible layout is sketched below.
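The file path and tag name here are suggestions, not conventions from any tool:
// e.g. tests/resilience/checkout.resilience.spec.ts
test.describe('checkout resilience @resilience', () => {
  test('handles invalid pricing gracefully', async ({ page }) => { /* ... */ });
  test('handles empty shipping options gracefully', async ({ page }) => { /* ... */ });
});

// Run them on demand:
// npx playwright test --grep @resilience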
This makes it clear these tests have a different purpose. When they fail, it signals a resilience regression, not a functional bug.
Don't duplicate across different pages: If ten pages use the same user profile component, you don't need ten tests for missing email fields. Test it once in the most critical flow.
Do update tests when contracts change: When your OpenAPI spec or GraphQL schema changes, review your resilience tests. Remove obsolete ones, add new ones for new contract assumptions.
These tests are a collaboration tool. When you add a resilience test, you're documenting an assumption:
// This test documents that the frontend assumes recommendations
// will always be an array, even if empty. If backend changes this
// to null when there are no recommendations, we need to coordinate.
test('handles empty recommendations array', async ({ page }) => {
// ...
});
Share test failures proactively. When a resilience test fails, it might mean the frontend has lost its error boundary protection, the backend has genuinely changed the contract, or the assumption the test documents is simply out of date. Each of those is worth a conversation with the backend team.
Track a few simple signals to know if your resilience testing is working: how many production incidents trace back to contract violations, how many of your error boundaries are covered by at least one test, and how often a resilience test catches a breaking change before it ships.
Here's a minimal resilience test suite to start with:
// 1. Critical revenue path
test('checkout handles invalid pricing gracefully');
// 2. User-facing error
test('profile handles missing data gracefully');
// 3. Empty state that's often assumed non-empty
test('product listing handles empty results gracefully');
// 4. Recently changed API
test('new recommendations API handles old contract');
// 5. Cross-cutting concern (navigation)
test('navigation remains functional when dashboard errors');
Five tests. That's enough to start building confidence in your application's resilience. Add more as you encounter production incidents or as new error boundaries are added.
You're not trying to test every possible way things can break. You're building a safety net for the contract violations that actually happen: fields go missing, types change, data becomes optional.
These tests won't catch every bug. But they'll catch the class of bugs that error boundaries are designed to handle. And they'll give you confidence that when things do break in production, your users won't see blank screens. They'll see clear errors, and they'll be able to keep working.
The shift from testing code errors to testing contract violations is a mindset change. You're no longer asking "does this throw an error?" You're asking "when the backend sends unexpected data, does the user experience degrade gracefully?"
Error boundaries are your application's immune system. They don't prevent failures, they contain them. Your job as a QA engineer is to verify that containment works. Not by forcing errors in test code, but by simulating the real-world contract breaks that happen when backends evolve.
Start small. Pick one critical flow. Simulate one contract violation. Verify the error is isolated and the rest of the page works. Then build from there.
Your users will never thank you for tests that pass. But they won't curse your application when it fails gracefully instead of crashing completely. And that's what resilience testing is all about.