🧠 Second Brain AI agent

Introducing the Second Brain AI Agent Project: Empowering Your Personal Knowledge Management

Are you overwhelmed with the information you collect daily? Do you often find yourself lost in a sea of markdown files, videos, web pages, and PDFs? What if there's a way to seamlessly index, search, and even interact with all this content like never before? Welcome to the future of Personal Knowledge Management: The Second Brain AI Agent Project.

📝 Inspired by Tiago Forte's Second Brain Concept

Tiago Forte's groundbreaking idea of the Second Brain has revolutionized the way we think about note-taking. It's not just about jotting down ideas; it's about creating a powerful tool that enhances learning and creativity. Learn more about Building a Second Brain by Tiago Forte here.

💼 What Can the Second Brain AI Agent Project Do for You?

  1. Automated Indexing: No more manually sorting through files! Automatically index the content of your markdown files along with contained links, such as PDF documents, YouTube videos, and web pages.

  2. Smart Search Engine: Ask questions about your content, and our AI will provide precise answers, using the robust OpenAI Large Language Model. It's like having a personal assistant that knows your content inside out!

  3. Effortless Integration: Whether you follow the Second Brain method or have your own unique way of note-taking, our system seamlessly integrates with your style, helping you harness the true power of your information.

  4. Enhanced Productivity: Spend less time organizing and more time innovating. By accessing your information faster and more efficiently, you can focus on what truly matters.

✅ Who Can Benefit?

  • Professionals: Streamline your workflow and find exactly what you need in seconds.
  • Students: Make study sessions more productive by quickly accessing and understanding your notes.
  • Researchers: Dive deep into your research without getting lost in information overload.
  • Creatives: Free your creativity by organizing your thoughts and ideas effortlessly.

🚀 Get Started Today

Don't let your notes and content overwhelm you. Make them your allies in growth, innovation, and productivity. Join us in transforming the way you manage your personal knowledge and take the leap into the future.

Details

If you take notes using markdown files like in the Second Brain method or using your own way, this project automatically indexes the content of the markdown files and the contained links (PDF documents, YouTube videos, web pages) and allows you to ask questions about your content using the OpenAI Large Language Model.

The system is built on top of the LangChain framework and the ChromaDB vector store.

The system takes as input a directory where you store your markdown notes. For example, I take my notes with Obsidian. The system then processes any change in these files automatically with the following pipeline:

graph TD
A[Markdown files from your editor]-->B[Text files from markdown and pointers]-->C[Text Chunks]-->D[Vector Database]-->E[Second Brain AI Agent]

From a markdown file, transform_md.py extracts the text, then from the links inside the markdown file it extracts PDF documents, URLs and YouTube videos and transforms them into text. There is some support for extracting history data from the markdown files: if there is an ## History section or the file name contains History, the file is split into multiple parts according to <day> <month> <year> sections like ### 10 Sep 2023.
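
For illustration, here is a minimal sketch of that history splitting, assuming entries are introduced by headings like ### 10 Sep 2023 (the actual parsing in transform_md.py may differ):

import re

# Split a journal's text into (date, body) entries at headings such as
# "### 10 Sep 2023". Illustrative only; not transform_md.py's exact logic.
DATE_HEADING = re.compile(r"^###\s+(\d{1,2} \w{3} \d{4})\s*$", re.MULTILINE)

def split_history(text):
    parts = DATE_HEADING.split(text)
    # parts == [preamble, date1, body1, date2, body2, ...]
    return [(parts[i], parts[i + 1].strip()) for i in range(1, len(parts), 2)]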

From these text files, transform_txt.py breaks them into chunks, creates vector embeddings and then stores these embeddings in a vector database.
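
A hedged sketch of that step using LangChain and Chroma (chunk sizes, embedding model and database directory are illustrative, not necessarily what transform_txt.py uses):

from langchain_text_splitters import RecursiveCharacterTextSplitter
from langchain_openai import OpenAIEmbeddings
from langchain_chroma import Chroma

# Split one text file into overlapping chunks, embed them, store the vectors.
text = open("note.txt").read()  # hypothetical input file
splitter = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=100)
chunks = splitter.split_text(text)
vectordb = Chroma(persist_directory="db", embedding_function=OpenAIEmbeddings())
vectordb.add_texts(chunks, metadatas=[{"source": "note.txt"}] * len(chunks))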

The second brain agent uses the vector database to retrieve context before asking the question to the large language model. This process is called Retrieval-Augmented Generation (RAG).
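
In its simplest form, RAG looks like this (a minimal sketch, not the agent's actual chains; model name and database directory are illustrative):

from langchain_chroma import Chroma
from langchain_openai import ChatOpenAI, OpenAIEmbeddings

vectordb = Chroma(persist_directory="db", embedding_function=OpenAIEmbeddings())
question = "What is LangChain?"
docs = vectordb.similarity_search(question, k=4)  # nearest chunks as context
context = "\n\n".join(doc.page_content for doc in docs)
llm = ChatOpenAI(model="gpt-4o-mini")
answer = llm.invoke(f"Using only this context:\n{context}\n\nAnswer: {question}")
print(answer.content)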

In reality, the process is more complex than a standard RAG. It analyzes the question and then uses a different chain according to the intent:

flowchart TD
    A[Question] --> C[/Get Intent/]
    C --> E[Summary Request] --> EA[/Extract all the chunks/] --> EB[/Summarize chunks/]
    C --> F[pdf or URL Lookup] --> FA[/Extract URL/]
    C --> D[Activity report]
    C --> G[Regular Question]
    D --> DA[/Get Period metadata/] --> DB[/Get Subject metadata/] --> DC[/Extract Question without time/] --> H[/Extract nearest documents\nfrom the vector database\nfiltered by the metadata/]
    G --> GA[/Step back question/] --> GB[/Extract nearest documents\nfrom the vector database/]
    H --> I[/Use the documents as context\nto ask the question to the LLM/]
    GB --> I
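
The "Step back question" node refers to step-back prompting: rephrasing the question in more generic terms before retrieval so that closer documents are found. A hedged sketch of that single step (prompt wording and model are illustrative):

from langchain_openai import ChatOpenAI

llm = ChatOpenAI(model="gpt-4o-mini")

def step_back(question):
    # Ask the LLM for a more generic reformulation used only for retrieval.
    prompt = ("Rewrite the following question as a more generic 'step back' "
              f"question suitable for document retrieval:\n{question}")
    return llm.invoke(prompt).content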

To be able to manipulate dates for activity reports, the system relies on some naming conventions. The first one is that filenames containing History, Journal or StatusReport are considered journals, with entries in this format: ## 02 Dec 2024 for each date. Other files can have an ## History section with entries in this format: ### 02 Dec 2024 for each date.

To classify documents, the second brain agent uses the concept of a domain per document. The domain metadata is computed for each document by removing numbers and these strings: At, Journal, Project, Notes and History. This is handy if you name documents like WorkoutHistory202412.md: the domain is then Workout.
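
A minimal sketch of that computation (illustrative; the agent's implementation may differ):

import re

RESERVED = ("At", "Journal", "Project", "Notes", "History")

def domain(filename):
    # Strip the extension and numbers, then the reserved words.
    name = re.sub(r"\d+", "", filename.rsplit(".", 1)[0])
    for word in RESERVED:
        name = name.replace(word, "")
    return name  # domain("WorkoutHistory202412.md") == "Workout"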

To know which domain to use to filter documents, the second brain agent uses a special document that can be configured in the .env file via the SBA_ORG_DOC variable, defaulting to SecondBrainOrganization.md. This document describes the mapping between domains and other concepts, for example if you want to separate work and personal activities.

MCP Server

The Second Brain Agent now includes an MCP (Model Context Protocol) server that provides programmatic access to the vector database and document retrieval system. This allows other applications to integrate with your second brain without interfacing at the reasoning level.

MCP Server Features

  • Query Vector Database: Ask questions and get answers from your indexed content
  • Search Documents: Perform semantic search across your documents with metadata filtering
  • Document Management: Get document counts, metadata, and list available domains
  • Domain-based Search: Search within specific domains (work, personal, etc.)
  • Recent Documents: Retrieve recently accessed documents

Using the MCP Server

  1. Install the MCP server:

    poetry add fastmcp
  2. Run the MCP server:

    poetry run python mcp_server.py
  3. Test the server:

    poetry run python test_mcp_server.py
  4. Configure MCP clients using the mcp_config.json file:

    {
      "mcpServers": {
        "second-brain-agent": {
          "command": "/your/path/to/second-brain-agent/mcp-server.sh"
        }
      }
    }

Available MCP Tools

  • search_documents: Search for documents using semantic similarity
  • get_document_count: Get the total number of documents
  • get_domains: List all available domains
  • get_recent_documents: Get recently accessed documents
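
As a hedged example, calling one of these tools from Python with the fastmcp client could look like this (the client API is assumed from fastmcp 2.x; the search_documents argument name is an assumption):

import asyncio
from fastmcp import Client

async def main():
    # Point the client at the server script; fastmcp runs it over stdio.
    async with Client("mcp_server.py") as client:
        tools = await client.list_tools()
        print([tool.name for tool in tools])
        result = await client.call_tool("search_documents", {"query": "LangChain"})
        print(result)

asyncio.run(main())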

Installation

You need a Python 3 interpreter, poetry and inotify-tools installed. All this has been tested under Fedora Linux 42 on my laptop and Ubuntu latest in the CI workflows. Let me know if it works on your system.

Get the source code:

$ git clone https://github.com/flepied/second-brain-agent.git

Copy the example .env file and edit it to suit your settings:

$ cp example.env .env

Install the dependencies using poetry:

$ poetry install

There is a bug between poetry, torch and PyPI; to work around it, just do:

$ poetry run pip install torch

Then to use the created virtualenv, do:

$ poetry shell

systemd services

To install systemd services that automatically manage the different scripts when the operating system starts, use the following command (needs sudo access):

$ ./install-systemd-services.sh

To see the output of the md and txt services:

$ journalctl --unit=sba-md.service --user
$ journalctl --unit=sba-txt.service --user

Doing a similarity search with the vector database

$ ./similarity.py "What is LangChain?" type=notes
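
The type=notes argument filters on document metadata. A minimal equivalent using the LangChain Chroma store directly might look like this (assuming a type metadata field; the database directory is illustrative):

from langchain_chroma import Chroma
from langchain_openai import OpenAIEmbeddings

vectordb = Chroma(persist_directory="db", embedding_function=OpenAIEmbeddings())
docs = vectordb.similarity_search("What is LangChain?", k=4, filter={"type": "notes"})
for doc in docs:
    print(doc.metadata.get("source"), doc.page_content[:80])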

Searching for new connections between notes

Use the vector store to find new connections between notes:

$ ./smart_connections.py

Launching the web UI

Launch this command to access the web UI:

$ streamlit run second_brain_agent.py
  You can now view your Streamlit app in your browser.

  Local URL: http://localhost:8502
  Network URL: http://192.168.121.112:8502

Here is an example:

Screenshot

Development

Install the extra dependencies using poetry:

$ poetry install --with test

And then run the tests, like this:

# Run all tests (unit + integration)
$ poetry run pytest

# Run only unit tests (no external dependencies required)
$ poetry run pytest -m "not integration"

# Run only integration tests (requires vector database)
$ poetry run pytest -m integration

# Run only unit tests (same as above, more explicit)
$ poetry run pytest -m unit

Note: Integration tests require a running vector database and are automatically excluded during pre-commit hooks. Unit tests run without external dependencies and are suitable for CI/CD pipelines.
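
For reference, a hypothetical sketch of how tests can be tagged so the markers above select them (assuming the unit and integration markers are registered in the pytest configuration):

import pytest

@pytest.mark.integration  # selected by: pytest -m integration
def test_vector_database_roundtrip():
    ...

@pytest.mark.unit  # selected by: pytest -m unit
def test_domain_extraction():
    ...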

Full Integration Testing

For comprehensive testing of the entire system including the vector database and MCP server:

$ ./integration-test.sh

This script:

  • Sets up a complete test environment with ChromaDB
  • Processes test documents through the system
  • Runs pytest integration tests to validate MCP server functionality
  • Tests document lifecycle (create, modify, delete)
  • Provides end-to-end validation of the system

Note: This requires docker-compose/podman-compose and will create temporary test data.

pre-commit

Before submitting a PR, make sure to activate pre-commit:

$ poetry run pre-commit install