Handling Duplicate Data
Segment guarantees that 99% of your data won’t have duplicates within a 24-hour look-back window. Warehouses and Data Lakes also run their own secondary deduplication processes to ensure you store clean data.
Segment has a special deduplication service that sits behind the api.segment.com endpoint and attempts to drop 99% of duplicate data. Segment stores 24 hours’ worth of event message_ids, allowing Segment to deduplicate any data that appears within a 24-hour rolling window.
Segment deduplicates on the event’s message_id, not on the contents of the event payload. Segment doesn’t have a built-in way to deduplicate data over periods longer than 24 hours or for events that don’t generate message_ids.
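To make the rolling-window behavior concrete, here is a minimal Python sketch of deduplication keyed on message_id. It illustrates the idea only and is not Segment’s actual service; the `WINDOW_SECONDS`, `seen`, and `is_duplicate` names are hypothetical, and the same pattern with a 7-day window models the Data Lake process described below.

```python
import time

# Hypothetical model of a rolling-window dedupe keyed on message_id.
# This is an illustration of the concept, not Segment's implementation.
WINDOW_SECONDS = 24 * 60 * 60  # 24-hour look-back window

seen = {}  # message_id -> timestamp when the id was first seen


def is_duplicate(message_id, now=None):
    """Return True if message_id already appeared inside the window."""
    now = time.time() if now is None else now
    # Evict ids that have aged out of the look-back window.
    for mid, first_seen in list(seen.items()):
        if now - first_seen > WINDOW_SECONDS:
            del seen[mid]
    if message_id in seen:
        return True  # duplicate: the event would be dropped
    seen[message_id] = now
    return False


assert is_duplicate("abc-123") is False  # first sighting passes through
assert is_duplicate("abc-123") is True   # repeat inside 24 hours is dropped
```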
Keep in mind that Segment’s libraries all generate message_ids for each event payload, with the exception of the Segment HTTP API, which assigns each event a unique message_id when the message is ingested. You can override these default generated IDs and manually assign a message_id if necessary.
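For example, a client can assign its own deterministic message_id so that a retried send of the same logical event falls inside the dedupe window. The sketch below posts directly to Segment’s HTTP Tracking API; `WRITE_KEY`, the event details, and the SHA-256 derivation are illustrative choices, not Segment requirements.

```python
import hashlib
import json

import requests  # third-party HTTP client: pip install requests

WRITE_KEY = "YOUR_WRITE_KEY"  # placeholder: your source's write key


def track_with_stable_id(user_id, event, properties):
    """Send a track call whose messageId is derived from the event content,
    so an identical retry maps to the same message_id and gets deduped."""
    stable = json.dumps([user_id, event, properties], sort_keys=True)
    message_id = hashlib.sha256(stable.encode()).hexdigest()

    resp = requests.post(
        "https://api.segment.com/v1/track",
        auth=(WRITE_KEY, ""),  # HTTP API: write key as username, blank password
        json={
            "userId": user_id,
            "event": event,
            "properties": properties,
            "messageId": message_id,  # overrides the default generated id
        },
        timeout=10,
    )
    resp.raise_for_status()


track_with_stable_id("user_123", "Order Completed", {"orderId": "o-42"})
```

Note that a content-derived message_id also collapses genuinely distinct events that happen to carry identical payloads, so include something unique in whatever you hash, such as an order ID or a client-side UUID persisted across retries.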
Warehouse deduplication
Duplicate events that are more than 24 hours apart from one another deduplicate in the Warehouse. Segment deduplicates messages going into a Warehouse based on the message_id, which is the id column in a Segment Warehouse.
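Because the message_id lands in the id column, you can spot-check whether any duplicates survived into a Warehouse with a simple aggregate. The table name below (prod.order_completed) and the DB-API connection are hypothetical; run the query with whatever client your warehouse uses.

```python
# Hypothetical spot-check for leftover duplicates in a Segment-managed
# event table, relying on message_id being stored in the `id` column.
DUPLICATE_CHECK_SQL = """
    SELECT id, COUNT(*) AS copies
    FROM prod.order_completed  -- illustrative table name
    GROUP BY id
    HAVING COUNT(*) > 1
    ORDER BY copies DESC
"""


def find_duplicate_event_ids(conn):
    """Run the check on any DB-API 2.0 connection; an empty result
    means Warehouse deduplication held for this table."""
    cur = conn.cursor()
    try:
        cur.execute(DUPLICATE_CHECK_SQL)
        return cur.fetchall()
    finally:
        cur.close()
```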
Data Lake deduplication
To ensure clean data in your Data Lake, Segment removes duplicate events at the time your Data Lake ingests data. The Data Lake deduplication process dedupes the data the Data Lake syncs within the last 7 days, with Segment deduping the data based on the message_id.