WEDC Team · 7 min read

A Beginner's Guide to AI Automation with n8n

n8n is the open-source workflow automation platform that connects your APIs, databases, and AI models without writing a single line of glue code. This guide covers installation, your first workflow, and practical AI-powered automations.

What is n8n?

n8n (pronounced "n-eight-n") is a fair-code licensed workflow automation platform. Think Zapier or Make, but self-hosted, extensible, and with native support for large language model integrations. As of 2026, n8n ships with built-in nodes for OpenAI, Anthropic, Ollama (for local models), and dozens of other AI services.

Self-Hosting n8n

The quickest path to a running n8n instance is Docker Compose:

services:
  n8n:
    image: n8nio/n8n:latest
    restart: unless-stopped
    ports:
      - "5678:5678"
    environment:
      # n8n 1.0+ removed the old N8N_BASIC_AUTH_* variables; the editor UI
      # is protected instead by the owner account you create on first launch.
      WEBHOOK_URL: https://n8n.yourdomain.com
    volumes:
      - n8n_data:/home/node/.n8n

volumes:
  n8n_data:

Point Caddy or Nginx at port 5678, provision a TLS certificate, and you have a production-ready automation server.
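With Caddy, that reverse proxy is two lines, and Caddy obtains and renews the TLS certificate automatically (assuming DNS for n8n.yourdomain.com already points at your host):

```
n8n.yourdomain.com {
    reverse_proxy localhost:5678
}
```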

Core Concepts

Workflows are directed graphs. Each node is a step: trigger, action, or transformation. Data passes between nodes as arrays of JSON objects called items.
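Concretely, each item is an object with a `json` key (and optionally a `binary` key for files), so a node's output looks like this — the field names here are illustrative:

```json
[
  { "json": { "title": "Post A", "pubDate": "2026-01-15T08:00:00Z" } },
  { "json": { "title": "Post B", "pubDate": "2026-01-15T09:30:00Z" } }
]
```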

Triggers start workflows:

  • Webhook (HTTP POST from any external service)
  • Schedule (cron expression)
  • File watch
  • Manual ("Test workflow" button)

Nodes do work:

  • HTTP Request — call any REST API
  • Code — run JavaScript or Python
  • AI Agent — orchestrate LLM reasoning loops
  • Database — read/write Postgres, MySQL, SQLite
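To make the Code node concrete, here is a plain-JavaScript sketch of the kind of transformation one performs — inside n8n you would read the incoming items with `$input.all()` and `return transform($input.all());`; the `link` field and the added `domain` field are illustrative, not part of any fixed schema:

```javascript
// Sketch of a Code node transformation: read the incoming items,
// return a new array of items with an extra field on each.
function transform(items) {
  return items.map((item) => ({
    json: {
      ...item.json,
      // Annotate each item with the hostname of its link.
      domain: safeDomain(item.json.link),
    },
  }));
}

// new URL() throws on malformed input, so fall back to null.
function safeDomain(link) {
  try {
    return new URL(link).hostname;
  } catch {
    return null;
  }
}
```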

Your First Workflow: Summarize RSS Feed Items with an LLM

Goal: Every morning, fetch the last 24 hours of posts from an RSS feed, summarize each one with an LLM, and email you a digest.

Step 1 — Schedule Trigger

Add a "Schedule Trigger" node. Set it to run daily at 07:00.

Step 2 — HTTP Request to RSS Feed

Add an "HTTP Request" node:

  • Method: GET
  • URL: https://news.ycombinator.com/rss

Add an "RSS Feed Read" node (built-in) to parse the XML into structured items.

Step 3 — Filter by Date

Add a "Filter" node to keep only items where pubDate is within the last 24 hours:

// Filter node expression
{{ new Date($json.pubDate) > new Date(Date.now() - 86400000) }}
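The same cutoff logic, written out as a standalone function (86400000 ms is 24 hours; `now` is a parameter only so the behavior is easy to check deterministically):

```javascript
// True when pubDate falls within the last 24 hours of "now".
function isRecent(pubDate, now = Date.now()) {
  const DAY_MS = 24 * 60 * 60 * 1000; // 86400000, as in the expression
  return new Date(pubDate).getTime() > now - DAY_MS;
}
```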

Step 4 — Summarize with OpenAI

Add an "OpenAI" node:

  • Operation: Message a model
  • Model: gpt-4o-mini (cheap, fast)
  • Prompt: Summarize this article in 2 sentences for a developer audience: {{ $json.description }}

Step 5 — Aggregate and Send Email

Use the "Aggregate" node to collect all summaries, then an "Email Send" (SMTP) node to deliver the digest.
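If you want more control over the email body, a Code node between Aggregate and Email Send can format the digest; a minimal sketch, where the `title` and `summary` field names are assumptions about what the upstream nodes produced:

```javascript
// Formats aggregated summaries into a plain-text email body.
function buildDigest(items) {
  const lines = items.map(
    (item, i) => `${i + 1}. ${item.json.title}\n   ${item.json.summary}`
  );
  return `Your morning digest (${items.length} items):\n\n${lines.join("\n\n")}`;
}
```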

Total build time: about 20 minutes. No code beyond one filter expression.

AI Agent Workflows

n8n's AI Agent node lets you build ReAct-style reasoning loops. The agent has:

  • A system prompt defining its role and constraints.
  • Tools — other n8n nodes it can call (HTTP Request, database queries, calendar lookups).
  • A memory node to retain context across calls.

Example: Automated Inbox Triage

  • Trigger: new email arrives (IMAP trigger)
  • Agent prompt: "Classify this email as [urgent / informational / spam]. If urgent, create a task in Todoist and reply acknowledging receipt."
  • Tools available to the agent: HTTP Request (Todoist API), Gmail Send.
  • The agent reasons through the email, decides on a classification, and calls the appropriate tool — all without a predefined decision tree.
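The HTTP Request tool the agent calls might be configured against Todoist's REST API; a sketch of the request (endpoint and fields per Todoist's v2 REST API, the token is a placeholder, and the subject expression assumes the IMAP trigger's output):

```
POST https://api.todoist.com/rest/v2/tasks
Authorization: Bearer <YOUR_TODOIST_TOKEN>
Content-Type: application/json

{
  "content": "Reply to: {{ $json.subject }}",
  "due_string": "today",
  "priority": 4
}
```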

Practical AI Automation Ideas for Developers

  Workflow                             Tools Used                Time to Build
  Summarize GitHub Issues daily        GitHub + OpenAI           25 min
  Auto-tag support tickets             Webhook + Classifier      15 min
  Generate alt-text for images         S3 trigger + Vision API   30 min
  Monitor server logs for anomalies    SSH + LLM                 40 min
  Weekly changelog from git commits    Git + LLM                 20 min

Connecting to Local Models with Ollama

For sensitive data you don't want leaving your network, swap OpenAI for Ollama:

# Add under services: in your compose file
  ollama:
    image: ollama/ollama:latest
    restart: unless-stopped
    volumes:
      - ollama_data:/root/.ollama

# ...and register ollama_data: under the existing top-level volumes: key

In n8n, use the "Ollama Chat Model" node and point it at http://ollama:11434. Models like llama3.2, mistral, and phi4 run well on a machine with 16 GB RAM.
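Under the hood, the Ollama node talks to Ollama's HTTP API, and you can hit the same endpoint yourself from an HTTP Request node; the /api/chat path and payload shape follow Ollama's documented API, with stream disabled so you get one JSON response:

```
POST http://ollama:11434/api/chat

{
  "model": "llama3.2",
  "messages": [
    { "role": "user", "content": "Summarize this article in 2 sentences." }
  ],
  "stream": false
}
```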

Conclusion

n8n makes AI automation accessible to solo developers without requiring a data engineering background. Self-hosting gives you full control over your data and eliminates per-task pricing. The WEDC member library includes a curated set of n8n workflow templates for common developer automation scenarios, updated monthly.

Enjoyed this article?

WEDC members get access to the full library of tutorials, downloadable utility applications, and monthly configuration bundles — plus new content every week.