Make.com Integration

A complete guide to integrating HeadlessX with the Make.com (formerly Integromat) automation platform

Automate web scraping with Make.com and HeadlessX. Build powerful visual automation scenarios without code.

Quick Setup


Step 1: Add HTTP Module

  1. Create a new scenario
  2. Add the HTTP > Make a request module
  3. Configure connection

Step 2: Configure Request

URL: http://your-headlessx-server:3001/api/website/html

Method: POST

Headers:

{
  "Content-Type": "application/json",
  "X-API-Key": "your-api-key-here"
}

Body (JSON):

{
  "url": "{{url}}",
  "stealth": true
}
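
Before building the scenario around it, you can verify the same request from a terminal. This curl call mirrors the module configuration above (substitute your own server address and API key):

curl -X POST http://your-headlessx-server:3001/api/website/html \
  -H "Content-Type: application/json" \
  -H "X-API-Key: your-api-key-here" \
  -d '{"url": "https://example.com", "stealth": true}'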

Step 3: Parse Response

  1. Add the JSON > Parse JSON module
  2. Map output from HTTP module
  3. Access data: {{data.html}}, {{data.title}}
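
A successful response looks roughly like this (illustrative shape; the fields correspond to the Scrape HTML outputs listed in the custom app section below):

{
  "success": true,
  "data": {
    "html": "<!DOCTYPE html>...",
    "title": "Example Domain",
    "statusCode": 200
  }
}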

Complete Scenario Examples


Example 1: RSS Feed → Scrape → Google Sheets

Modules:

  1. RSS - Watch RSS feed items

    • Feed URL: https://example.com/feed
    • Limit: 10
  2. HTTP - Make a request

    • URL: http://localhost:3001/api/website/content
    • Method: POST
    • Headers: X-API-Key: your-key
    • Body:
      {
        "url": "{{1.url}}",
        "stealth": true
      }
      
  3. JSON - Parse JSON

    • JSON string: {{2.data}} (a sample of the parsed output is shown after this list)
  4. Google Sheets - Add a row

    • Spreadsheet: My Articles
    • Sheet: Sheet1
    • Values:
      • Title: {{1.title}}
      • Content: {{3.data.markdown}}
      • URL: {{1.url}}
      • Date: {{1.pubDate}}
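
The parsed /api/website/content response from module 3 looks roughly like this (illustrative shape, based on the Get Markdown Content outputs documented later on this page):

{
  "success": true,
  "data": {
    "markdown": "# Article Title\n\nBody text...",
    "text": "Article Title Body text...",
    "wordCount": 245
  }
}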

Example 2: Scheduled Price Monitoring

Modules:

  1. Tools - Set multiple variables

    • Variables:
      {
        "products": [
          "https://store.com/product1",
          "https://store.com/product2"
        ]
      }
      
  2. Flow Control - Iterator

    • Array: {{1.products}}
  3. HTTP - Make a request

    • URL: http://localhost:3001/api/website/html
    • Body:
      {
        "url": "{{2.array}}",
        "stealth": true
      }
      
  4. Text parser - Match pattern

    • Text: {{3.data.html}}
    • Pattern: <span class="price">\$(\d+\.\d+)</span>
  5. Google Sheets - Update a row

    • If price changed
  6. Gmail - Send an email

    • Only if price dropped

Example 3: Content Republishing

Modules:

  1. Webhooks - Custom webhook

    • Create webhook URL
  2. HTTP - Scrape article

    • URL: http://localhost:3001/api/website/content
    • Body: {"url": "{{1.articleUrl}}"}
  3. OpenAI - Create a completion

    • Prompt: Rewrite this article:\n{{2.data.markdown}}
  4. HTTP - Upload images to S3

    • Extract images from HTML
  5. WordPress - Create a post

    • Title: {{2.data.title}}
    • Content: {{3.choices[1].message.content}} (Make arrays are 1-indexed)

HeadlessX Custom App


Create a custom Make.com app for easier integration:

App Structure

Base URL: http://your-server:3001

Authentication: API Key

  • Header: X-API-Key
  • Value: User's API key
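
In Make's custom app builder, the Base specification applies the base URL and auth header to every module. A minimal sketch, assuming the connection defines a parameter named apiKey:

{
  "baseUrl": "http://your-server:3001",
  "headers": {
    "X-API-Key": "{{connection.apiKey}}"
  }
}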

Modules

1. Scrape HTML

  • Action: Make an API call
  • Endpoint: /api/website/html
  • Method: POST
  • Input:
    • URL (required)
    • Stealth (boolean)
    • Proxy (text)
  • Output:
    • success (boolean)
    • data.html (text)
    • data.title (text)
    • data.statusCode (number)
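
A sketch of this module's Communication specification in the app builder. The parameters.url and parameters.stealth names are assumptions and must match the mappable parameters you define for the module:

{
  "url": "/api/website/html",
  "method": "POST",
  "body": {
    "url": "{{parameters.url}}",
    "stealth": "{{parameters.stealth}}"
  },
  "response": {
    "output": "{{body}}"
  }
}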

2. Scrape with JavaScript

  • Endpoint: /api/website/html-js
  • Input:
    • URL (required)
    • Wait For (select: load, networkidle, domcontentloaded)
    • Timeout (number)
  • Output: Same as Scrape HTML

3. Get Markdown Content

  • Endpoint: /api/website/content
  • Output:
    • data.markdown (text)
    • data.text (text)
    • data.wordCount (number)

4. Take Screenshot

  • Endpoint: /api/website/screenshot
  • Input:
    • Full Page (boolean)
    • Format (select: png, jpeg)
    • Quality (number 0-100)
  • Output:
    • data.screenshot (base64)
    • data.width (number)
    • data.height (number)

5. Google SERP Search

  • Endpoint: /api/google-serp/search
  • Input:
    • Query (required)
    • Location (text)
    • Num Results (number)
  • Output:
    • data.organicResults (collection)
    • data.featuredSnippet (object)
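
An example request body for this module. The field names here are assumptions; check your HeadlessX version's API reference for the exact parameter names:

{
  "query": "best coffee grinders",
  "location": "United States",
  "numResults": 10
}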

Triggers

New Scrape Completed

  • Type: Webhook
  • Endpoint: /webhook/scrape-completed
  • Output: Scrape results

Advanced Scenarios


Scenario 1: E-commerce Monitoring

Schedule (every 6 hours)
  → Iterator (product URLs)
    → HTTP (Scrape product page)
      → Text Parser (Extract price)
        → Router
          → Path A: Price dropped
            → Email notification
            → Update Google Sheets
          → Path B: Out of stock
            → Slack alert
          → Path C: No change
            → Log to Data Store

Scenario 2: Lead Generation Pipeline

Google Sheets (Watch new rows)
  → HTTP (Scrape company website)
    → Text Parser (Extract email/phone)
      → OpenAI (Enrich data)
        → HubSpot (Create contact)
          → Gmail (Send outreach)

Scenario 3: Content Aggregator

Multiple RSS Modules (different sources)
  → Aggregator
    → HTTP (Scrape full articles)
      → OpenAI (Summarize)
        → Airtable (Store)
          → Twitter (Share best content)

Error Handling


Error Handler Route

Add error handler to HTTP module:

  1. Right-click HTTP module
  2. Add error handler
  3. Choose a directive: Break, Commit, Resume, or Ignore

Break: stop and store the run as an incomplete execution (retryable). Commit: stop the scenario and commit completed operations. Resume: continue with fallback output.

Retry Logic

Use the Break directive, which supports automatic retries:

  1. Enable Automatically complete execution
  2. Set Number of attempts: 3
  3. Set Interval between attempts (in minutes)

Error Notification

HTTP Module
  → Error Handler
    → Gmail - Send error notification
      → Include error details

Data Transformation


Extract Data with Tools

Text parser - Match pattern:

Pattern: <title>(.*?)</title>
Input: {{scrapeResult.html}}
Output: {{title}}

JSON - Parse JSON:

Input: {{httpResponse.body}}
Output: Structured data

Transform with Functions

Make's built-in functions work in any mapped field. Arguments are separated with semicolons, and variables inside a function are not wrapped in extra braces:

{{substring(html; 0; 100)}}     // First 100 characters
{{replace(text; newline; " ")}} // Replace newlines with spaces
{{length(array)}}               // Number of array items

Routers & Filters


Conditional Routing

Router after scrape:

Route 1: Success

  • Filter: {{success}} = true
  • Action: Process data

Route 2: CAPTCHA

  • Filter: {{error.code}} = CAPTCHA_DETECTED
  • Action: Alert admin

Route 3: Timeout

  • Filter: {{error.code}} = TIMEOUT_ERROR
  • Action: Retry with longer timeout

Data Stores


Cache Scrape Results

Modules:

  1. Data store - Search records

    • Key: {{1.url}}
    • If found → Use cached data
  2. Router

    • Route A: Cache hit → Return cached
    • Route B: Cache miss → Scrape fresh
  3. HTTP - Scrape (Route B only)

  4. Data store - Add/replace a record

    • Key: {{1.url}}
    • Value: {{3.data}} plus a scrapedAt timestamp (see the record sketch below)
    • Expiry: Make data stores have no built-in TTL, so filter out records older than 1 hour when searching in step 1
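
A sketch of a cached record's value, assuming you add a scrapedAt field so the search step can discard stale entries:

{
  "html": "<!DOCTYPE html>...",
  "scrapedAt": "2025-01-01T12:00:00Z"
}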


App Integrations


Slack Integration

HTTP (Scrape)
  → Text parser (Extract key info)
    → Slack - Send message
      → Channel: #scraping-alerts
      → Message: "New data: {{info}}"

Google Sheets

Google Sheets (Watch new rows)
  → HTTP (Scrape each URL)
    → Google Sheets (Update row)
      → Status: "Scraped"
      → Data: {{result}}

Airtable

HTTP (Scrape)
  → Airtable - Create a record
    → Base: Website Data
    → Table: Scraped Pages
    → Fields:
      - URL: {{url}}
      - Title: {{title}}
      - Content: {{markdown}}

Notion

HTTP (Scrape)
  → Notion - Create a page
    → Database: Knowledge Base
    → Properties:
      - Title: {{title}}
      - URL: {{url}}
      - Content: {{markdown}}

Scheduling


Time-based Triggers

Schedule module options:

  • Every X minutes: 15, 30, 60
  • Every X hours: 1, 6, 12, 24
  • Specific times: Daily at 9:00 AM
  • Custom: Cron expression

Example Cron:

0 9 * * 1-5  # 9 AM, Monday-Friday
0 */6 * * *  # Every 6 hours

Best Practices


1. Batch Processing

Don't scrape 1000 URLs at once:

  • Use Iterator with batches of 5-10
  • Add Sleep module between batches (2-5 seconds)

2. Error Resilience

  • Always add error handlers
  • Set retry logic (max 3 attempts)
  • Log errors to Data Store

3. Data Validation

Add Filter after scrape:

  • Check success = true
  • Validate required fields exist

4. Resource Management

  • Limit concurrent operations (3-5)
  • Use Data Store for caching
  • Clean up old data regularly

5. Monitoring

  • Enable email notifications for errors
  • Use Make's execution history
  • Set up alerts for critical scenarios

Scenario Templates


These templates are simplified outlines of each scenario's module chain, not importable Make blueprints.

Template 1: Daily News Digest

{
  "name": "Daily News Scraper",
  "modules": [
    {
      "type": "schedule",
      "schedule": "0 8 * * *"
    },
    {
      "type": "iterator",
      "array": ["url1", "url2", "url3"]
    },
    {
      "type": "http",
      "url": "http://localhost:3001/api/website/content",
      "body": {"url": "{{item}}"}
    },
    {
      "type": "aggregator",
      "combineArticles": true
    },
    {
      "type": "gmail",
      "subject": "Daily News Digest",
      "body": "{{articles}}"
    }
  ]
}

Template 2: Product Price Tracker

{
  "name": "Price Monitor",
  "modules": [
    {
      "type": "schedule",
      "interval": 360
    },
    {
      "type": "googleSheets",
      "operation": "getRows",
      "sheet": "Products"
    },
    {
      "type": "http",
      "url": "http://localhost:3001/api/website/html"
    },
    {
      "type": "textParser",
      "pattern": "price"
    },
    {
      "type": "router",
      "routes": [
        {
          "filter": "priceDropped",
          "action": "sendAlert"
        }
      ]
    }
  ]
}

Troubleshooting


Scenario Not Running

  1. Check scenario is ON
  2. Verify trigger is configured
  3. Review execution history

HTTP Request Fails

  1. Test URL manually:
    curl http://localhost:3001/health
    
  2. Verify API key in headers
  3. Check request body format

Data Not Mapping

  1. Click Choose where to map
  2. Test previous module
  3. Verify field names match

Rate Limits

Make.com limits:

  • Free: 1,000 operations/month
  • Core: 10,000 operations/month
  • Pro: 100,000+ operations/month

Pricing & Limits


Plan    Operations/Month   Scenarios   Data Transfer
Free    1,000              2           100 MB
Core    10,000             Unlimited   1 GB
Pro     100,000            Unlimited   10 GB
Teams   1,000,000          Unlimited   100 GB

1 Operation = 1 module execution


Next Steps