Module 4: Real Workflows That Work
This module is documentation of workflows that run. Not demos. Not “you could build something like this.” The actual patterns, with enough detail to replicate or adapt them.
Pattern 1: Site Health Monitor
What it does: Checks nine websites every six hours. Sends a Telegram alert if any site returns a non-2xx status or doesn’t respond.
Why it exists: I manage several sites across different hosting providers. I don’t want to manually check them. I want to know within hours if something goes down, not days.
The nodes:
- Schedule trigger — Cron expression 0 */6 * * *. Fires at 0:00, 6:00, 12:00, and 18:00.
- Split in Batches — I have a list of nine URLs set as workflow-level variables. The Split in Batches node processes them one at a time in a loop.
- HTTP Request — Makes a GET request to the current URL. Timeout set to 10 seconds. Error handling set to “continue on error” — this is important, because if a site is completely down the HTTP request will throw rather than return a status code. Continuing on error means the workflow keeps running and you can check the error data.
- IF node — Checks two conditions: did the request fail entirely (no response), or did it return a status code that isn’t in the 200–299 range? If either condition is true, route to the alert branch.
- Telegram node (alert branch) — Sends a message formatted with the site URL, the status code (or “no response”), and the timestamp. My Telegram message looks like:

  🔴 Site alert
  URL: {{ $json.url }}
  Status: {{ $json.statusCode || 'No response' }}
  Time: {{ $now.toFormat('HH:mm, d MMM') }}

- No-op node (success branch) — Does nothing. The workflow just continues to the next URL.
What to adapt: The URL list, the schedule, the notification destination. If you use Slack instead of Telegram, swap the Telegram node for a Slack node with the same message content. If you want to check every hour, change the cron expression.
One thing to get right: The “continue on error” setting on the HTTP Request node. Without it, the first unreachable site kills the workflow run and you don’t check the rest.
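As a sketch, the IF node’s two conditions can be written as a plain JavaScript function. The result shape here is illustrative, not n8n’s exact error output: with “continue on error” enabled, a failed request yields error data instead of a status code.

```javascript
// One HTTP Request output item, simplified: either a statusCode
// or error data, depending on whether the site responded.
function needsAlert(result) {
  // No response at all (DNS failure, timeout, connection refused).
  if (result.error || result.statusCode == null) return true;
  // Responded, but outside the 2xx success range.
  return result.statusCode < 200 || result.statusCode > 299;
}
```

Both branches of the check matter: a dead site produces no status code at all, so testing only the code would silently pass the worst failures.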
Pattern 2: Contact Form Pipeline
What it does: Receives a contact form submission via webhook, stores it in a database, sends a Telegram notification with the key details.
Why it exists: I built a contact API for jimchristian.net. Rather than having the API endpoint do everything, I had it do the minimum: validate the input, store the submission, and fire a webhook to n8n. n8n handles the notification. This separation means I can change the notification behaviour without touching the API code.
The nodes:
- Webhook trigger — HTTP POST. The contact API sends a JSON body containing name, email, subject, and message.
- Set node — Formats a notification message. I pull the four fields and build a readable summary:

  New contact from {{ $json.body.name }} ({{ $json.body.email }})
  Subject: {{ $json.body.subject }}
  {{ $json.body.message }}

- Telegram node — Sends the formatted message. I use a specific Telegram group for contact notifications so I can reply in-thread.
- Respond to Webhook — Returns a 200 OK to the API that called the webhook. Without this node, the webhook trigger holds the connection open until it times out. Always close the loop.
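A sketch of the Set node’s job as a plain function. The field names come from the webhook body described above; the sample payload is invented for illustration, and the { body: ... } wrapper mirrors how n8n exposes a webhook’s JSON payload.

```javascript
// Build the notification text from one webhook item.
function formatNotification(item) {
  const { name, email, subject, message } = item.body;
  return `New contact from ${name} (${email})\nSubject: ${subject}\n\n${message}`;
}

// Example payload, as the contact API might send it (hypothetical values).
const sample = {
  body: {
    name: 'Ada',
    email: 'ada@example.com',
    subject: 'Hello',
    message: 'Testing the form.',
  },
};
```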
What to adapt: The payload fields depend on your form. The notification destination is your choice. If you want to route based on subject or message content — that’s where you’d add a classification step before the Telegram node, using the pattern from Module 3.
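If you do add routing before the notification, the simplest non-AI stand-in is a keyword matcher on the subject line. The route names and keywords here are hypothetical, not from the original workflow:

```javascript
// Pick a destination route from the submission's subject line.
// Falls back to a catch-all route when nothing matches.
function routeSubmission(body) {
  const subject = (body.subject || '').toLowerCase();
  if (subject.includes('invoice') || subject.includes('billing')) return 'billing';
  if (subject.includes('press')) return 'press';
  return 'general';
}
```

In n8n this maps onto a Switch node with one output per route; swap in an LLM classification step when keywords aren’t enough.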
Pattern 3: Newsletter Aggregation
What it does: Pulls items from several RSS feeds on a schedule, filters for relevance, and compiles a list of links for review.
Why it exists: I read a lot of sources. Rather than checking each manually, I want a daily digest of items that match a set of topics.
The nodes:
- Schedule trigger — Daily at 7:00.
- RSS Read node (multiple) — One per feed. n8n’s RSS Read node fetches feed items and outputs them as individual objects. I merge the outputs from multiple RSS nodes using a Merge node.
- Limit node — RSS feeds can return many items. I limit to the 20 most recent across all feeds to keep processing time reasonable.
- Basic LLM Chain — For each item, passes the title and description to a model with a prompt:

  The following is an RSS item title and description. Topics I care about: AI tools, automation, self-hosting, developer tools, security. Is this relevant? Reply with only: yes or no.

- IF node — Routes items where the classification returned yes to the output branch.
- Aggregate node — Collects all the relevant items into a single list.
- Code node — Formats the list as a readable block of text with titles and links.
- Telegram node — Sends the digest.
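The Code node’s formatting step, sketched as a plain function. The title and link field names assume the common shape of RSS items; adjust to whatever your feeds actually emit.

```javascript
// Turn the aggregated list of relevant items into a plain-text digest.
function formatDigest(items) {
  const lines = items.map((item) => `- ${item.title}\n  ${item.link}`);
  return `Daily digest (${items.length} items)\n\n${lines.join('\n')}`;
}
```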
What to adapt: Your feeds, your topics of interest, your classification prompt. The code node formatting is arbitrary — you could output Markdown, plain text, or HTML depending on your destination.
Note on cost: Running an LLM call per RSS item on a daily schedule adds up. For this workflow I use a local Ollama model. Haiku is also cheap enough that the cost is negligible for a personal workflow.
Pattern 4: Scheduled Task Reminder
The simplest pattern. Useful as a building block.
What it does: At a scheduled time, sends a message to remind you of a recurring task.
- Schedule trigger — Whatever cadence you need. Weekly on Monday at 9:00: 0 9 * * 1.
- Telegram node — Static message: “Weekly review: update project notes and check inbox.”
That’s two nodes. No processing, no conditions, no AI. Sometimes the right tool is the simplest one.
Reading These Patterns
Each of these workflows shares the same anatomy: a trigger, some processing, an output. The trigger type and output destination change. The processing varies from none to several AI-augmented steps.
When you’re designing your own workflows, start by asking:
- What starts this? (Webhook, schedule, manual, event)
- What needs to happen to the data?
- Where does the result go?
Answer those three questions and you have the skeleton. The implementation is filling in nodes.
The final module covers how n8n and Claude Code work alongside each other.
Check Your Understanding
Answer all questions correctly to complete this module.
1. Why is 'continue on error' critical for the site health monitor?
2. Why does the contact form pipeline handle notifications through n8n instead of the API?
3. What does the newsletter aggregation workflow use AI for?