OpenAEO Dev Log #2: Implementing the Core and Infrastructure

In my first post, I mapped out the high-level architecture and the "Why" behind OpenAEO. Today, I moved from theory to implementation. I focused on building the "Brain" (Core Logic) and the "Hands" (Adapters) of the system.

Here is exactly how I built it and the specific design decisions I made.

1. Defining the Core Logic (The Brain)

I started with citationParser.ts. I wanted this to be a "pure function." In functional-programming terms, a pure function is predictable: given the same input, it always returns the same output, and it has no "side effects" (it doesn't talk to the internet or a database).

File: src/core/citationParser.ts
I designed this to check for two types of "wins":

  • The Hard Win: Your URL is explicitly in the citations list.
  • The Soft Win: Your brand name is mentioned in the AI's answer text, even if the link is missing.

I also made sure to capture competitorUrls. As a CTO, I know that data is only useful if it’s actionable. Knowing I lost is okay; knowing who I lost to is what allows me to fix the problem later.
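Putting those pieces together, the parser's shape looks roughly like this. This is a hedged sketch, not the real file: the interface and function names (`CitationResult`, `parseCitations`) and the exact fields are my assumptions based on the description above.

```typescript
// Hypothetical sketch of the pure-function core; the real field names
// in src/core/citationParser.ts may differ.
interface CitationResult {
  cited: boolean;          // "Hard Win": our URL appears in the citations list
  mentioned: boolean;      // "Soft Win": our brand appears in the answer text
  competitorUrls: string[]; // who we lost to, for later analysis
}

function parseCitations(
  answerText: string,
  citations: string[],
  myUrl: string,
  brandName: string,
): CitationResult {
  const cited = citations.some((c) => c.includes(myUrl));
  const mentioned = answerText.toLowerCase().includes(brandName.toLowerCase());
  // Every cited URL that isn't ours is a competitor worth tracking.
  const competitorUrls = citations.filter((c) => !c.includes(myUrl));
  return { cited, mentioned, competitorUrls };
}
```

Because the function touches no network or disk, it is trivially testable, which is exactly what the next section relies on.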

2. Proving it Works (Unit Testing)

I refused to write any API code until I knew my logic was bulletproof. I used Vitest to run real-world scenarios.

File: tests/citationParser.test.ts
Rather than starting with my own startup's examples, I used Notion as a test case. Why? Because Notion has complex deep links and is frequently cited. By testing against a real-world titan, I confirmed that my includes() logic handles subdomains (like www.notion.so) and paths correctly.
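The Notion scenario boils down to a few substring checks. The real suite uses Vitest's describe/it/expect; the sketch below shows the same cases with plain assertions so it stands alone, and the `checkCitation` helper is an assumed stand-in for the parser's internal matching.

```typescript
// Assumed stand-in for the parser's matching step: a plain substring check.
const checkCitation = (citation: string, domain: string): boolean =>
  citation.includes(domain);

// Real-world Notion shapes: subdomain, deep link, and a near-miss domain.
const cases: Array<[string, boolean]> = [
  ["https://www.notion.so", true],              // subdomain (www.) still matches
  ["https://www.notion.so/help/guides", true],  // deep link path still matches
  ["https://notion.site/some-page", false],     // different TLD must NOT match
];

for (const [url, expected] of cases) {
  if (checkCitation(url, "notion.so") !== expected) {
    throw new Error(`unexpected result for ${url}`);
  }
}
```

The third case is why testing against a titan matters: notion.site and notion.so look alike, and a sloppy matcher would conflate them.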

3. Building the Adapters (The Infrastructure)

Once the brain was tested, I built the adapters to talk to the real world.

The Perplexity Adapter

File: src/adapters/PerplexityApi.ts
I chose the openai SDK because Perplexity’s API is compatible with it.

  • Architectural Decision: I wrapped the API call in a try/catch block. When building for production, you must assume the external API will fail. I also cast the response to any specifically to grab the citations field, which isn't part of the standard OpenAI response type.
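The two decisions above (the try/catch wrapper and the `any` cast for citations) can be sketched in isolation from the network. The function names and response fields here are my assumptions, not the real PerplexityApi.ts; only the `citations` field being absent from the SDK's typed response is taken from the post itself.

```typescript
// Hypothetical shape of what the adapter extracts from a Perplexity response.
interface ParsedAnswer {
  text: string;
  citations: string[];
}

function extractAnswer(response: unknown): ParsedAnswer {
  // Perplexity returns extra fields (like `citations`) that the OpenAI SDK's
  // response type doesn't know about, hence the escape hatch to `any`.
  const r = response as any;
  return {
    text: r?.choices?.[0]?.message?.content ?? "",
    citations: Array.isArray(r?.citations) ? r.citations : [],
  };
}

async function safeQuery(run: () => Promise<unknown>): Promise<ParsedAnswer | null> {
  try {
    return extractAnswer(await run());
  } catch (err) {
    // Assume the external API will fail; log to stderr and degrade gracefully.
    console.error("Perplexity call failed:", err);
    return null;
  }
}
```

In the real adapter, `run` would be the actual SDK call; keeping the extraction pure makes it testable without hitting the API.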

The Storage Adapter

File: src/adapters/JsonStorage.ts
I opted for local JSON storage for the Open Core version.

  • Architectural Decision: I used os.homedir() to store data in a hidden folder (.open-aeo). This keeps the user's workspace clean. I also implemented ensureDirectory() to make sure the app doesn't crash if the folder doesn't exist on the first run.
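A minimal sketch of that storage pattern, assuming the hidden-folder layout described above (the `save`/`load` method names are my invention, not necessarily what JsonStorage.ts exposes):

```typescript
import { mkdir, readFile, writeFile } from "node:fs/promises";
import { homedir } from "node:os";
import { join } from "node:path";

// Hidden data folder in the user's home directory, as described above.
const DATA_DIR = join(homedir(), ".open-aeo");

async function ensureDirectory(): Promise<void> {
  // recursive: true makes this a no-op if the folder already exists,
  // so the first run doesn't crash and later runs are idempotent.
  await mkdir(DATA_DIR, { recursive: true });
}

async function save(name: string, data: unknown): Promise<void> {
  await ensureDirectory();
  await writeFile(join(DATA_DIR, `${name}.json`), JSON.stringify(data, null, 2), "utf8");
}

async function load<T>(name: string): Promise<T | null> {
  try {
    return JSON.parse(await readFile(join(DATA_DIR, `${name}.json`), "utf8")) as T;
  } catch {
    return null; // a missing file on first run is expected, not an error
  }
}
```

Using fs/promises keeps all file I/O non-blocking, which matters once this runs inside a server process.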

4. The Bridge: MCP Integration

Finally, I started the Presentation Layer. This is what turns my raw TypeScript code into a "Tool" that an AI like Claude can actually use.

Files: src/mcp/tools.ts & src/mcp/server.ts

  • The Tool Manual: I used inputSchema to describe exactly what aeo_check needs. This is like writing documentation that only an AI reads.
  • The Transport: I used StdioServerTransport. This is a critical architectural choice: the AI and my code talk over the process's standard input and output streams (stdin/stdout) rather than over a network socket.
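To make the "Tool Manual" idea concrete, here is roughly what an aeo_check definition could look like. The parameter names and descriptions are my assumptions based on what the tool needs to do; only the tool name and the use of inputSchema come from the post.

```typescript
// Hedged sketch of the tool definition; the exact fields in src/mcp/tools.ts
// are assumptions. inputSchema is plain JSON Schema that the AI reads to
// learn how to call the tool.
const aeoCheckTool = {
  name: "aeo_check",
  description: "Check whether a brand or URL is cited in an AI engine's answer",
  inputSchema: {
    type: "object",
    properties: {
      query: { type: "string", description: "The question to ask the AI engine" },
      url: { type: "string", description: "The URL we hope to see in the citations" },
      brandName: { type: "string", description: "Brand name to look for in the answer text" },
    },
    required: ["query", "url"],
  },
} as const;

// The server side then wires this up over stdio, roughly:
//   const transport = new StdioServerTransport();
//   await server.connect(transport);
```

Writing the descriptions carefully pays off: they are the only "documentation" the calling AI ever sees.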

🛠 Lessons Learned & Technical Nuances

  • The .js Import Mystery: I had to be very careful with my imports. Because I'm using NodeNext for modern ESM support, I have to write import { ... } from './types.js'; even though the file is a .ts file. If I forget this, the compiled code will fail in Node.
  • The Log Pollution Trap: I learned that in an MCP server, you cannot use console.log for debugging. If you do, that text gets sent to the AI as part of the protocol and breaks the connection. I am strictly using console.error for all my internal logging.
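The logging rule is easy to enforce with a tiny helper that only ever writes to stderr. The helper name and prefix are my own illustration, not code from the repo:

```typescript
// Route all internal logging to stderr so stdout stays a clean JSON-RPC
// channel for the MCP client.
function log(message: string): void {
  console.error(`[open-aeo] ${message}`); // stderr is safe in an MCP server
  // console.log(message);  // would corrupt the protocol stream on stdout
}

log("server starting");
```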

What’s Next?

The system is now "wired." The brain knows how to think, the hands know how to fetch and save. The final step is the Entry Point—writing the code that initializes these pieces with my API key and hits the "On" switch.

References:

  • Vitest Docs: For understanding modern testing patterns.
  • Node.js fs/promises: For non-blocking file operations.
  • MCP SDK Type definitions: For the CallToolRequestSchema.