Mammoth Destination

Mammoth provides self-serve analytics for analysts, businesses, and developers, who can use its data warehousing, data discovery, and data preparation capabilities to arrive at insights.

Mammoth allows you to blend your data from Segment with other data sources such as databases and files. Using Mammoth, you can build multiple data pipelines by applying transforms through a no-code interface. Mammoth also supports visual discovery of your data and easy exports to databases such as MySQL, Elasticsearch, and PostgreSQL.

This destination is maintained by Mammoth. For any issues with Mammoth Destination, please reach out to their team.

NOTE: The Mammoth Destination is currently in beta, which means the Mammoth team is still actively developing it. This doc was last updated on June 6, 2019. If you are interested in joining their beta program or have any feedback to help improve Mammoth and its documentation, let their team know!

Getting Started

The first step is to make sure Mammoth supports the source type and connection mode you’ve chosen to implement. You can learn more about what dictates the connection modes we support here.

|                | Web | Mobile | Server |
| -------------- | --- | ------ | ------ |
| 📱 Device-mode |     |        |        |
| ☁️ Cloud-mode  | ✅  | ✅     | ✅     |

There are three steps to get started using Mammoth with Segment. If you don't have an account yet, you can register with Mammoth here.

  1. Create a webhook dataset on Mammoth & copy the API KEY.
  2. Connect Segment and Mammoth.
  3. Use the Extract from JSON task to flatten data.

1. Create a webhook dataset on Mammoth

The Mammoth Segment destination requires a dataset on Mammoth's side. Mammoth supports multiple dataset types; for the Segment integration, add a dataset of type Webhooks.

  1. Log into app.mammoth.io.
  2. Create a new dataset of type Webhooks. To do so, click the big green button in the data library and choose the Webhooks option. If you do not have any datasets in your account yet, the data library itself shows a button to add a webhook dataset.
  3. This will open the add dataset dialog. Make sure the option selected is Webhooks.
  4. Give your dataset a name & click on Done. A new dataset will appear in the data library.

The dataset you created has an API key, which you will need in the Segment UI. Here is how to copy it:

  1. Click on the new dataset you created in the previous step.
  2. On the preview panel, click Copy to copy the API key.

2. Connect Segment and Mammoth

  1. In the Segment App, select Add Destination. Search for and select Mammoth.
  2. Paste the API key you copied in the previous step into the destination settings.

3. Use the Extract from JSON task to flatten data

Once you have completed the previous steps, data should start flowing into Mammoth. Mammoth stores all the data received from Segment in this dataset. You can use the Extract from JSON task to flatten the data into rows and columns. Once you have the data in a flat format, you can use Mammoth's capabilities to set up any number of pipelines you need.

  1. When Mammoth receives data, the REFRESH button appears in the preview panel. Click it to move that data from the staging area into the dataset.
  2. Select the dataset and click the Open button.
  3. You will be taken to the default View on the dataset. You will see one column of data called JSON.
  4. Now we want to flatten the JSON data. Open the ADD TASK menu and click on the Extract from JSON task.
  5. Use the Extract from JSON task as needed to flatten the data. The task automatically suggests the right options; all you need to do is click Apply. You can read more about the Extract from JSON task here.
  6. You may need to apply the Extract from JSON task multiple times if the data is nested.
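To see why repeated passes can be necessary, consider the shape of an incoming event. The object below is an illustrative sketch (an assumption, not Segment's exact schema) of a Track payload as it might land in the webhook dataset's JSON column:

```javascript
// Illustrative sketch of one row in the dataset's JSON column.
// Field names follow common Segment payload conventions; the exact
// schema of what Mammoth stores is an assumption here.
const row = {
  type: 'track',
  event: 'Clicked Login Button',
  userId: 'userId123',
  properties: { plan: 'pro' },
  context: { library: { name: 'analytics.js' } }
};

// A first Extract from JSON pass exposes the top-level keys (type,
// event, userId, ...). Because context.library is nested two levels
// deep, it takes additional passes before library.name becomes its
// own column.
```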

Mammoth automatically refreshes the data approximately every hour. You can also click the REFRESH button at any time to sync data immediately.

Hints and Tips

Tasks you create with Mammoth do not modify your original data from Segment. You can reuse the original data and set up multiple task pipelines by creating multiple views on the same dataset.

You may also want to use the Apply filter task along with the Extract from JSON task to flatten only certain types of data.

Once you have converted the JSON data into a rows-and-columns format, you can:

  • Use the EXPLORE menu and explore the data in any of the columns.
  • Use other tasks provided by the ADD TASK menu to arrive at insights and automate reports.
  • Export the data to another system from Mammoth.

Mammoth recommends that you use the Save as Dataset task in the ADD TASK menu to save your flattened data as a new dataset. Using this method, you separate your JSON extractions from your analysis & reporting.

Page

If you haven’t had a chance to review our spec, please take a look to understand what the Page method does. An example call would look like:

analytics.page();

Page calls will be sent to the webhook dataset you created earlier. You can filter this data into a different view after you have set up JSON extract pipelines.

Screen

If you haven’t had a chance to review our spec, please take a look to understand what the Screen method does. An example call would look like:

[[SEGAnalytics sharedAnalytics] screen:@"Home"];

Screen calls will be sent to the webhook dataset you created earlier. You can filter this data into a different view after you have set up JSON extract pipelines.

Identify

If you haven’t had a chance to review our spec, please take a look to understand what the Identify method does. An example call would look like:

analytics.identify('userId123', {
  email: 'john.doe@segment.com'
});

Identify calls will be sent to the webhook dataset you created earlier. You can filter this data into a different view after you have set up JSON extract pipelines.

Track

If you haven’t had a chance to review our spec, please take a look to understand what the Track method does. An example call would look like:

analytics.track('Clicked Login Button');

Track calls will be sent to the webhook dataset you created earlier. You can filter this data into a different view after you have set up JSON extract pipelines.


Personas

You can send computed traits and audiences generated through Segment Personas to this destination as a user property. To learn more about Personas, reach out for a demo.

For user-property destinations, an Identify call is sent to the destination for each user being added and removed. The property name is the snake_cased version of the audience name you provide, with a true/false value. For example, when a user first completes an order in the last 30 days, we send an Identify call with the property order_completed_last_30days: true, and when the user no longer satisfies the audience criteria, we set that value to false.

When the audience is first created, an Identify call is sent for every user in the audience. Subsequent syncs only send updates for users who were added or removed since the last sync.
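As a sketch of this naming convention, the helper below is our own illustration (not Segment's code) of how an audience name could map to a trait key, using a hypothetical audience called "First Time Buyer":

```javascript
// Hypothetical helper mirroring how an audience name becomes a
// snake_cased trait key; this is an illustration, not Segment's
// actual implementation.
function snakeCase(name) {
  return name.trim().toLowerCase().replace(/[^a-z0-9]+/g, '_');
}

const trait = snakeCase('First Time Buyer'); // 'first_time_buyer'
const traits = { [trait]: true };
// i.e. the destination would receive something like:
// analytics.identify('userId123', { first_time_buyer: true })
```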

Settings

Segment lets you change these destination settings via your Segment dashboard without having to touch any code.

API Key

You can find your API key in the webhook dataset’s preview panel.

Adding Mammoth to the integrations object

To add Mammoth to the integrations JSON object (for example, to filter data from a specific source), use one of the two valid names for this integration:
  • mammoth
  • Mammoth
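For example, you can pass an integrations object in the options argument of an analytics.js call to route an event only to Mammoth (the event name below is illustrative):

```javascript
// Options object passed as the final argument to analytics.js calls
// such as track, page, and identify.
// 'All: false' disables every destination not explicitly enabled.
const options = {
  integrations: {
    All: false,
    Mammoth: true
  }
};

// analytics.track('Clicked Login Button', {}, options);
```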

