How I Built My Second Brain

Maciek Marchlewski

I work in RevOps. I build pipelines, design routing rules, automate handoffs. Operations is what I do.

So when I decided to build an AI agent system, I did what any ops person would do: I planned the hell out of it.

Fifty hours. Two weeks. I built an orchestration framework called Agent0 with five core agents, sixteen dashboard screens, a journal system, and a thirteen-phase implementation roadmap. I wrote two system documents - one at 960 lines, the other at 1,188. I brainstormed fifteen app ideas. I mapped dependencies, drew architecture diagrams, wrote governance policies.

I had everything except a working product.

The documents were beautiful. Organized into clean sections with headers and subheaders and tables of contents. The kind of documentation that looks productive right up until you realize you haven't shipped a single feature.

Fifty hours is a full work week plus overtime. A competent developer can build and deploy a SaaS MVP in that time. I had built a monument to planning.

The 50-Hour Trap - planning addiction vs shipping in AI agent development

The Wake-Up Call

"After two weeks and 50 hours, I'm back to installing a new tool. This happens to me often. I don't get to the finish line." That's what I told myself.

I started tracing the pattern. Every time I got close to the uncomfortable part - actually shipping something, putting it in front of people, committing to one path - I'd find a reason to pivot. A new framework was more elegant. A new tool had better features. A new architecture solved a problem I hadn't even hit yet.

The feature backlog kept growing. The shipped features column stayed at zero.

This is what I now call the "new tool trap." It looks like progress because you're learning, installing, configuring. You have tabs open and terminals running. But the finish line isn't getting closer - it's getting further away, because every new tool resets the clock.

The honest version: I was substituting the dopamine of discovery for the discipline of delivery. Learning a new tool feels good. Shipping a product that might fail feels scary. My subconscious had figured out how to stay in the fun zone indefinitely.

The New Tool Trap - a cycle of installing tools without shipping features

Workflow Before Tool

The breakthrough came from my own job.

In revenue operations, every ops leader knows the principle half of them ignore: you never buy the tool before you map the process. You don't sign a Salesforce contract and then figure out your sales stages. You map the workflow first - the handoffs, the bottlenecks, the decision points. Then you pick the tool that fits.

I had violated my own first principle. I built Agent0 - the orchestration engine, the framework, the dashboard, the governance model - before I had a single process to orchestrate. It's like buying a $50,000 CRM to manage three leads.

So I flipped the question. Instead of "what should my AI agent system do?" I asked: "what's my actual daily workflow that's broken?"

"Ideas leak."

The answer turned out to be embarrassingly simple. While building AgentOS, I realized the biggest problem is not a lack of ideas. It is that ideas go nowhere unless the system is designed to catch them, structure them, and turn them into action.

Every day, I read posts, watch videos, have conversations, notice things that could become articles or products or experiments. And 90% of it disappears. Sharp thought in the shower - gone by the time I'm dressed. Brilliant article on my phone - never found again. Conversation that sparks an idea - forgotten by next week.

I didn't need an AI agent framework with sixteen dashboard screens. I needed a capture-process-reflect system. Something that catches the raw material, helps me process it, and surfaces patterns I can't see on my own.
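The capture-process-reflect loop is simple enough to sketch in code. This is a minimal illustration, not the actual implementation - the folder names and note-naming scheme are assumptions:

```python
from datetime import date
from pathlib import Path

VAULT = Path("vault")                     # hypothetical vault root
INBOX = VAULT / "Inbox"
IDEAS = VAULT / "Ideas"

def capture(text: str) -> Path:
    """Capture: dump raw material into Inbox as a dated markdown note."""
    INBOX.mkdir(parents=True, exist_ok=True)
    note = INBOX / f"{date.today()}-{abs(hash(text)) % 10_000}.md"
    note.write_text(text + "\n")
    return note

def process() -> list[Path]:
    """Process: promote everything in Inbox to Ideas, leaving Inbox empty."""
    IDEAS.mkdir(parents=True, exist_ok=True)
    promoted = []
    for note in sorted(INBOX.glob("*.md")):
        target = IDEAS / note.name
        note.rename(target)               # move, don't copy: Inbox stays clean
        promoted.append(target)
    return promoted
```

Reflect is the part an agent does on a schedule: reading what accumulated in Ideas and surfacing the patterns.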

In RevOps, we map the process before we buy the tool. I was buying the tool before I had a process.
Workflow Before Tool - the RevOps principle applied to AI agent systems

One Night, One System
AgentOS v1.0.0 was reborn.

I sat down on a Saturday night with two tools and zero plan documents.

Claude Code is an AI coding assistant that operates directly on your file system from the terminal. Obsidian is a note-taking app where every note is a plain markdown file that links to other notes, creating a web of connected ideas with no vendor lock-in.

I opened both and started building my Second Brain directly in Obsidian.

My Second Brain has 14 sections, each with a clear job:

  • Inbox catches everything - links from my phone, ideas from conversations, screenshots. Every 2 hours, an AI agent processes whatever lands here.
  • Daily Journal is where I brain dump. I write my entry. My AI assistant (Corey Marks) writes its analysis alongside mine - patterns it noticed, tasks it extracted, things I'm avoiding.
  • Ideas collects concepts worth developing.
  • Projects tracks active work.
  • Decisions logs every choice with reasoning.
  • Governance defines what agents can and can't do.
  • Tasks is the interesting one - populated entirely by agents, not me. They propose work, I approve or reject. Every rejection trains the system.
  • Operations holds SOPs and recurring workflows.
  • Artifacts stores everything the AI produces - articles, briefs, plans.
  • Personal and People track life and relationships. 👧❤️👧
  • Knowledge Base holds reference material the AI learns from.
  • Archive is where completed work goes - never deleted, just moved.
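Under the hood these sections are just folders of plain markdown files, so the whole structure can be scaffolded in a few lines. A sketch - the section names come from the list above, the script itself is illustrative:

```python
from pathlib import Path

# Section names from the article; the scaffold script itself is a sketch.
SECTIONS = [
    "Inbox", "Daily Journal", "Ideas", "Projects", "Decisions",
    "Governance", "Tasks", "Operations", "Artifacts",
    "Personal", "People", "Knowledge Base", "Archive",
]

def scaffold(root: str = "MySecondBrain") -> Path:
    """Create one folder per section, each seeded with a README naming its job."""
    vault = Path(root)
    for name in SECTIONS:
        section = vault / name
        section.mkdir(parents=True, exist_ok=True)
        (section / "README.md").write_text(f"# {name}\n")
    return vault
```

Point Obsidian at the resulting folder and the vault is live - no migration, no vendor lock-in.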

Here is a diagram showing the workflow and process between my agents and my brain:

MySecondBrain System Architecture - Human, Agent, Cloud, and Governance layers

That structure solved the "ideas leak" problem. But the interesting part isn't the folders. It's what happens automatically.

I built seven on-demand specialist agents plus four scheduled agents that run 24/7 in Anthropic's cloud:

The Agent Team

On-Demand Agents

| Agent | Role | Schedule |
| --- | --- | --- |
| Coordinator | Routes tasks to the right agent. Makes single-thread decisions. Keeps everything moving. | On demand |
| Research Lead | Deep research, SEO audits, competitor analysis, schema markup, site architecture. | On demand |
| Growth Lead | A/B testing, analytics, paid ads, pricing strategy, referral programs, marketing psychology. | On demand |
| Content Lead | Copywriting, articles, briefs, ad creative, lead magnets, sales enablement. | On demand |
| Social Lead | Social content, cold email, email sequences, platform-native distribution. | On demand |
| Operations Lead | CRO, onboarding optimization, churn prevention, RevOps workflows. | On demand |
| Code Reviewer | Reviews completed work against plans and coding standards. | On demand |

Scheduled Cloud Agents (run 24/7 autonomously)

| Agent | Role | Schedule |
| --- | --- | --- |
| Vault Heartbeat | Processes inbox captures, scans vault for work, proposes tasks via PR. | Every 2 hours |
| Nightly Agent Entry | Reads daily journal, writes AI analysis alongside mine, flags patterns. | 10pm ET daily |
| Weekly Reflection | Distills weekly patterns, compares priorities vs actual behavior. | Sunday 8pm ET |
| Monthly Reflection | Big-picture review against goals. What to double down on, what to drop. | 1st of month |

11 agents total. 7 on demand. 4 running autonomously.

Every agent creates a Pull Request on GitHub - a proposed change that needs approval before it goes live. Nothing merges without my sign-off. The vault syncs between my phone, my laptop, and the cloud automatically.
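A /sync-style routine is straightforward to sketch. `gh pr merge` and the git commands below are real CLI invocations; the "safe label" auto-merge policy and the shape of the PR data are assumptions for illustration:

```python
import subprocess

SAFE_LABELS = {"inbox-processing"}      # hypothetical label marking low-risk PRs

def sync_commands(open_prs: list[dict]) -> list[list[str]]:
    """Build the commands a /sync run would execute.

    PRs carrying a safe label get auto-merged; everything else waits for
    manual review. The label policy and PR dict shape are illustrative.
    """
    cmds = []
    for pr in open_prs:
        if SAFE_LABELS & set(pr["labels"]):
            cmds.append(["gh", "pr", "merge", str(pr["number"]), "--squash"])
    cmds.append(["git", "pull", "--rebase"])   # bring merged agent work down
    cmds.append(["git", "push"])               # push local edits back up
    return cmds

def sync(open_prs: list[dict], dry_run: bool = True) -> list[list[str]]:
    cmds = sync_commands(open_prs)
    if not dry_run:
        for cmd in cmds:
            subprocess.run(cmd, check=True)
    return cmds
```

Anything not labeled safe simply stays open on GitHub until a human merges it - that is the approval gate.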

One night. One working system. No thirteen-phase roadmap.

The agents don't replace my thinking. They reflect it back to me - patterns I can't see because I'm too close.

Here's what the vault looks like in practice.

Every node is a note. Every line is a connection. Watch how ideas, projects, and daily entries form clusters of connected thinking:

The Governance Problem

AI Agent Governance Feedback Loop - how trust builds autonomy over time


Here's what every AI automation article skips: what happens when the agents do something you don't want?

I had 19 commands and 7 specialized agents. One can rewrite journal entries. Another can reorganize vault structure. A third can create and assign tasks. Without guardrails, that's not a productivity system - that's chaos with a nice interface.

19 Commands That Run the System

Session Flow

| Command | What It Does |
| --- | --- |
| /daily | Brain dump + AI analysis + task review. Start every session here. |
| /tasks | Review agent-proposed tasks. Approve, reject with feedback, or skip. |
| /sync | Pull agent PRs from GitHub, auto-merge safe ones, push local changes. |
| /tldr | Summarize the session, save to vault, update AI memory. End every session here. |

Capture & Briefing

| Command | What It Does |
| --- | --- |
| /capture | Drop a link, idea, screenshot, or PDF. AI extracts, synthesizes, and stores it. |
| /standup | Cross-domain briefing across all projects, tasks, and priorities. |
| /context | Load full life and work state into the session. Projects, priorities, recent activity. |
| /reflect | Generate a weekly or monthly reflection. Patterns, wins, misses, life balance. |

Thinking Tools

| Command | What It Does |
| --- | --- |
| /trace | Track how an idea evolved over time across the vault. Timeline + connections. |
| /connect | Find unexpected bridges between two topics using the vault's link graph. |
| /ghost | Answer a question as me - grounded in my vault writing, beliefs, and positions. |
| /challenge | Pressure-test my beliefs. Find contradictions and weak assumptions in my notes. |
| /drift | Surface recurring themes I'm circling subconsciously across unrelated notes. |
| /emerge | Find idea clusters that are coalescing into potential projects. |

Content Pipeline

| Command | What It Does |
| --- | --- |
| /distribute | Take an article and create LinkedIn, X thread, and X post versions. |
| /publish | Push content to Webflow, LinkedIn, and X via Postiz CLI. |

System

| Command | What It Does |
| --- | --- |
| /changelog | Auto-logged. Tracks every system-level change to the framework. |

19 commands. 69 total skills behind them. Session flow runs the daily ritual. Thinking tools turn notes into insights. Content pipeline turns insights into published articles. The system tracks its own evolution.

The governance framework has three layers.

  • Layer one: explicit permissions. Every agent has a defined scope - what it can read, what it can write, and what it cannot touch. These aren't suggestions. They're hard boundaries.
  • Layer two: the Approval Matrix. Low-risk actions like inbox processing get auto-execute. Anything that modifies vault structure, project status, or the system itself needs my approval.
  • Layer three, the most important: the Rejection Log. When I reject a proposed change, I say why. Every rejection gets logged. The agents read the rejection log before proposing new changes, so the same mistake doesn't happen twice. Consistent approvals get upgraded to auto-execute. Repeated rejections train the system to avoid that pattern. Autonomy grows as trust builds.
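All three layers fit in a small sketch. The action names, matrix values, and log path here are illustrative, not the actual configuration:

```python
import json
from pathlib import Path

# Layers 1 and 2: explicit scopes collapsed into an approval matrix.
# Action names and values are illustrative, not the real configuration.
APPROVAL_MATRIX = {
    "process_inbox": "auto",       # low risk: executes without review
    "edit_structure": "approve",   # must go through a PR I sign off on
    "edit_system": "approve",
}

REJECTION_LOG = Path("Governance/rejection-log.jsonl")  # hypothetical path

def allowed(action: str) -> str:
    """Return 'auto', 'approve', or 'deny'. Unknown actions are denied:
    the boundaries are hard limits, not suggestions."""
    return APPROVAL_MATRIX.get(action, "deny")

def log_rejection(action: str, reason: str) -> None:
    """Layer 3: every rejection is recorded with its reasoning."""
    REJECTION_LOG.parent.mkdir(parents=True, exist_ok=True)
    with REJECTION_LOG.open("a") as f:
        f.write(json.dumps({"action": action, "reason": reason}) + "\n")

def past_rejections(action: str) -> list[str]:
    """What agents read before proposing: has this pattern been rejected?"""
    if not REJECTION_LOG.exists():
        return []
    entries = [json.loads(line) for line in REJECTION_LOG.read_text().splitlines()]
    return [e["reason"] for e in entries if e["action"] == action]
```

The default-deny lookup is the whole point: an agent can only do what the matrix explicitly grants, and the rejection log is plain text the agents can read back.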

This is exactly how you'd onboard a new BDR. You don't give them production access to your CRM on day one. You start with limited permissions, propose-and-review, clear feedback. Over weeks, they earn broader access because they've demonstrated good judgment. Same model for AI agents.

Governance isn't overhead. It's how you turn an AI from a tool into a teammate.

What's Running Right Now

The system today: 19 slash commands, 4 cloud agents running 24/7, 69 total skills, a governance framework with explicit permissions, and an active rejection log.

The first thing the nightly agent wrote stopped me cold: "19-day gap since last entry. Ideas are leaking." It flagged exactly the problem the system was built to solve. It also surfaced carryover tasks I'd been avoiding for weeks - not because they were hard, but because I didn't want to face them. An AI calmly noting you've ignored the same three tasks for three weeks is an uncomfortable mirror. A useful one.

The content pipeline turns this into real output. A Kanban board inside Obsidian tracks articles from idea through publish. This article went through that pipeline - captured in inbox, processed by the heartbeat agent, promoted to the content pipeline, drafted. The documentation of building the system becomes the content, which becomes the proof, which builds the audience. It's a flywheel, and it's spinning.

The Agent Mirror - AI surfacing patterns the founder couldn't see

What I'd Tell My Two-Weeks-Ago Self

Five lessons, earned the expensive way.

1. Start with the process, not the product. Write down your actual daily workflow on paper. Where does information leak? That's your first build target.

2. Ship the simplest version tonight. Not this weekend. Not after the architecture doc. The version that embarrasses you is the version that teaches you.

3. Governance from day one. Don't add guardrails after something goes wrong. Explicit permissions, approval flows, rejection logging. It trains the AI from the first interaction, not after it's developed bad habits.

4. Let the AI challenge you. I built commands that pressure-test my beliefs and surface patterns in my thinking - assumptions I keep making, topics I keep avoiding. The most valuable thing an AI can do isn't complete your tasks. It's show you your blind spots.

5. Document everything. Every system change logged. Every decision tracked with reasoning. Every rejection explained. It makes the AI smarter, gives you an audit trail, and produces a continuous stream of content about your process.

The best automation isn't the one that does the most. It's the one you actually use every day.

If I Were Starting From Zero

You're in MarketingOps or RevOps. You're curious about AI agents but you're not a developer. Here's the action list.

Week 1: Install Obsidian and Claude Code. Write daily notes for seven days. No structure, no templates. Just write.

Week 2: Read your notes back. You'll see the pattern - what falls through cracks, what you keep circling back to, what you mention but never act on. That's your first build target.

Week 3: Build one command that solves one problem. Not a system. Not a framework. One command, one fix, ship it.

Week 4: Add governance before you add agents. Define permissions. Set up the rejection log. Run everything in propose mode first - the agent suggests, you approve.
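Propose mode itself is tiny. A minimal sketch, assuming pending proposals live as markdown files in a hypothetical Tasks/pending folder:

```python
from datetime import datetime
from pathlib import Path

PENDING = Path("Tasks/pending")    # hypothetical: proposals awaiting review

def propose(title: str, body: str) -> Path:
    """Agent side: write the suggestion down, never apply it directly."""
    PENDING.mkdir(parents=True, exist_ok=True)
    stamp = datetime.now().strftime("%Y%m%d%H%M%S%f")
    note = PENDING / f"{stamp}.md"
    note.write_text(f"# {title}\n\n{body}\n")
    return note

def review(note: Path, approve: bool) -> Path:
    """Human side: approved proposals become tasks; rejected ones are kept
    for the rejection log rather than deleted."""
    dest_dir = Path("Tasks") if approve else Path("Governance/rejected")
    dest_dir.mkdir(parents=True, exist_ok=True)
    dest = dest_dir / note.name
    note.rename(dest)
    return dest
```

The agent writes, you move the file. That's the whole trust model until the agent earns auto-execute.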

The system keeps getting smarter. Every rejection teaches it what you don't want. Every approval teaches it what you do. The journal entries give it context. The changelog gives it history.

It's not a tool. It's a teammate that learns. And unlike every other tool I installed and abandoned over those first two weeks, this one I actually use every day.

Because I built the workflow first.

Frequently Asked Questions

What is a "second brain" and how is it different from regular note-taking?

A second brain is a capture-process-reflect system where ideas, notes, and knowledge are stored, connected, and surfaced automatically. Unlike regular note-taking, which is passive storage, a second brain actively links related concepts, surfaces patterns you'd miss, and - in this case - uses AI agents to process and analyze your thinking on a schedule.

Do I need to know how to code to build this?

No. The entire system was built using Claude Code (an AI coding assistant you talk to in plain English) and Obsidian (a markdown note-taking app). You describe what you want, the AI builds it. The hardest part isn't technical - it's mapping your workflow first.

How much does this cost to run?

Obsidian is free for personal use. Claude Code requires a paid Anthropic plan - a Claude subscription or API billing. The cloud agents running 24/7 cost roughly $20-40/month in API usage depending on vault size and agent frequency. Less than most SaaS subscriptions, and it compounds in value.

What if the AI agents make mistakes or do something I don't want?

That's what the governance framework handles. Every agent operates through a Pull Request system - it proposes changes, you approve or reject. The Rejection Log teaches agents your preferences over time. Nothing modifies your vault without your explicit sign-off.

How long before the system starts being useful?

Week one. Even before agents are running, the daily capture habit alone stops ideas from leaking. By week two, you start seeing patterns in your own thinking. By week four with agents active, the system is surfacing connections and flagging blind spots you'd never catch on your own.

Can I use different tools instead of Obsidian and Claude Code?

The tools matter less than the workflow. The core principles - capture everything, process regularly, reflect on patterns, govern the agents - work with any combination of tools that supports markdown files and AI integration. Obsidian and Claude Code just happen to be the lowest-friction pairing right now.