A Model Context Protocol (MCP) server that provides Google Scholar search capabilities through a streamable HTTP transport. This project demonstrates how to build an MCP server with custom tools and integrate it with AI models like Google's Gemini.
This project consists of two main components:
- MCP Server: Provides Google Scholar search tools via HTTP endpoints
- MCP Client: Integrates with Google Gemini AI to process queries and call tools
The server is built with the `@modelcontextprotocol/sdk` and implements the following (a minimal wiring sketch follows the list):
- Transport: StreamableHTTPServerTransport for HTTP-based communication
- Session Management: Supports multiple simultaneous connections with session IDs
- Tool System: Extensible tool registration and execution framework
- Error Handling: Comprehensive error responses and logging
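As a rough sketch of how these pieces fit together, assuming an Express app (the split across `index.ts`/`server.ts` is simplified here, and a single shared transport stands in for the per-session transports the real server keeps):

```typescript
import express from "express";
import { randomUUID } from "node:crypto";
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StreamableHTTPServerTransport } from "@modelcontextprotocol/sdk/server/streamableHttp.js";

const app = express();
app.use(express.json());

const server = new McpServer({ name: "google-scholar", version: "1.0.0" });

// One transport shown for brevity; the real server creates one per session.
const transport = new StreamableHTTPServerTransport({
  sessionIdGenerator: () => randomUUID(),
});
await server.connect(transport);

// POST carries JSON-RPC requests; GET (not shown) opens the SSE stream.
app.post("/mcp", async (req, res) => {
  await transport.handleRequest(req, res, req.body);
});

app.listen(3000);
```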
The server currently provides one main tool, `search_google_scholar`:
- Description: Searches Google Scholar for academic papers and research
- Parameters: Configurable search parameters (query, filters, etc.)
- Returns: Structured search results with paper details
The server uses `StreamableHTTPServerTransport`, which supports:
- HTTP POST: For sending requests and receiving responses
- HTTP GET: For establishing Server-Sent Events (SSE) streams
- Session Management: Persistent connections with unique session IDs
- Real-time Notifications: Streaming updates via SSE
The server is now available in Smithery: Google Scholar Search Server
- Clone the repository:
  ```bash
  git clone <repository-url>
  cd google-scholar-mcp
  ```
- Install and build both components:
  ```bash
  cd server
  npm install
  npm run build

  cd ../client
  npm install
  npm run build
  ```
- Start the MCP server:
  ```bash
  cd server
  node build/index.js
  ```
The server starts on port 3000 and exposes the following endpoints:
- `POST /mcp` - Main MCP communication endpoint
- `GET /mcp` - SSE stream endpoint for real-time updates
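For a quick smoke test you can drive the endpoints with curl. The `Mcp-Session-Id` header comes from the MCP Streamable HTTP transport spec; the initialize payload below is illustrative rather than project-specific:

```bash
# Initialize a session; the response carries an Mcp-Session-Id header
curl -i http://localhost:3000/mcp \
  -X POST \
  -H "Content-Type: application/json" \
  -H "Accept: application/json, text/event-stream" \
  -d '{"jsonrpc":"2.0","id":1,"method":"initialize","params":{"protocolVersion":"2025-03-26","capabilities":{},"clientInfo":{"name":"curl","version":"0.0.0"}}}'

# Open the SSE stream, reusing the session ID returned above
curl http://localhost:3000/mcp \
  -H "Accept: text/event-stream" \
  -H "Mcp-Session-Id: <session-id-from-initialize>"
```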
- Multi-session Support: Handle multiple clients simultaneously
- Graceful Shutdown: Proper cleanup on SIGINT
- Logging: Comprehensive request/response logging
- Error Handling: Structured JSON-RPC error responses
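A minimal sketch of what the multi-session and shutdown handling could look like (assuming one transport per session, keyed by the session ID the SDK generates; names are illustrative):

```typescript
import { randomUUID } from "node:crypto";
import { StreamableHTTPServerTransport } from "@modelcontextprotocol/sdk/server/streamableHttp.js";

// One transport per session, keyed by the session ID the transport generates.
const transports: Record<string, StreamableHTTPServerTransport> = {};

function createTransport(): StreamableHTTPServerTransport {
  const transport = new StreamableHTTPServerTransport({
    sessionIdGenerator: () => randomUUID(),
    onsessioninitialized: (sessionId) => {
      transports[sessionId] = transport;
    },
  });
  transport.onclose = () => {
    if (transport.sessionId) delete transports[transport.sessionId];
  };
  return transport;
}

// Graceful shutdown: close every open transport on SIGINT.
process.on("SIGINT", async () => {
  await Promise.all(Object.values(transports).map((t) => t.close()));
  process.exit(0);
});
```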
The client demonstrates how to integrate the MCP server with Google's Gemini AI model.
- Ensure you have a valid `GEMINI_API_KEY` and provide it with:
  ```bash
  export GEMINI_API_KEY=<your-key>
  ```
- Start the client:
  ```bash
  cd client
  node build/index.js
  ```
- The client will connect to the server and start an interactive chat loop
- Persistent Context: Maintains full conversation history across queries
- Multi-turn Conversations: Supports back-and-forth dialogue with context
- Function Call Integration: Seamlessly integrates tool calls into conversation flow
- Gemini 2.5 Flash: Powered by Google's Gemini 2.5 Flash model
- Tool Discovery: Automatically discovers and registers available MCP tools
- Function Calling: Converts MCP tools to Gemini function declarations
- Chat Loop: Continuous conversation interface
- History Management: View and clear conversation history
- Graceful Exit: Type 'quit' to exit cleanly
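The tool discovery step can be sketched roughly as follows, assuming the client uses the `@google/genai` package; `listTools()` and Gemini's `functionDeclarations` are real APIs, while the helper name is illustrative and the schema cast may need massaging in practice:

```typescript
import { type FunctionDeclaration } from "@google/genai";
import { Client } from "@modelcontextprotocol/sdk/client/index.js";

// Convert the server's MCP tool list into Gemini function declarations.
async function toFunctionDeclarations(mcp: Client): Promise<FunctionDeclaration[]> {
  const { tools } = await mcp.listTools();
  return tools.map((tool) => ({
    name: tool.name,
    description: tool.description,
    // Assumes the MCP JSON Schema is compatible with Gemini's Schema type.
    parameters: tool.inputSchema as FunctionDeclaration["parameters"],
  }));
}
```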
Example session:

```
Query: Find recent papers about machine learning in healthcare

[Called tool search_google_scholar with args {"query":"machine learning healthcare recent"}]

Based on the search results, here are some recent papers about machine learning in healthcare:
1. "Deep Learning Applications in Medical Imaging" - This paper explores...
2. "Predictive Analytics in Patient Care" - Research on using ML for...
...

Query: What about specifically for diagnostic imaging?

[Called tool search_google_scholar with args {"query":"machine learning diagnostic imaging healthcare"}]

Here are papers specifically focused on diagnostic imaging applications:
...
```
```
├── server/
│   ├── src/
│   │   ├── index.ts    # Express server setup
│   │   ├── server.ts   # MCP server implementation
│   │   └── tools.ts    # Tool definitions and handlers
├── client/
│   └── src/
│       └── index.ts    # MCP client with Gemini integration
└── package.json
```
The server component:
- Manages the MCP server lifecycle
- Handles HTTP requests and SSE streams
- Implements tool registration and execution
- Manages multiple client sessions

The client component:
- Connects to the MCP server via HTTP transport
- Integrates with Google Gemini AI
- Manages conversation history and context
- Handles the function calling workflow
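One round trip of that workflow might look like this sketch (same assumptions as above; the real chat loop also feeds the tool result back into the conversation history before producing a final answer):

```typescript
import { GoogleGenAI, type FunctionDeclaration } from "@google/genai";
import { Client } from "@modelcontextprotocol/sdk/client/index.js";

const ai = new GoogleGenAI({ apiKey: process.env.GEMINI_API_KEY });

// One query -> optional tool call -> text answer (history handling elided).
async function ask(query: string, mcp: Client, declarations: FunctionDeclaration[]) {
  const response = await ai.models.generateContent({
    model: "gemini-2.5-flash",
    contents: query,
    config: { tools: [{ functionDeclarations: declarations }] },
  });

  // If Gemini requested a tool, execute it on the MCP server.
  for (const call of response.functionCalls ?? []) {
    const result = await mcp.callTool({
      name: call.name!,
      arguments: call.args as Record<string, unknown>,
    });
    console.log(`[Called tool ${call.name}]`, result.content);
  }

  return response.text;
}
```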
- Define your tool schema in `server/src/tools.ts`:
  ```typescript
  export const myNewTool = {
    name: "my_new_tool",
    description: "Description of what the tool does",
    inputSchema: {
      type: "object",
      properties: {
        // Define parameters
      }
    }
  };
  ```
- Implement the tool handler:
  ```typescript
  export async function callMyNewTool(args: any) {
    // Tool implementation
    return {
      content: [
        {
          type: "text",
          text: "Tool result"
        }
      ]
    };
  }
  ```
- Register the tool in the server setup (see the sketch below)
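How registration looks depends on whether the project uses the SDK's low-level `Server` class; with that API, a hedged sketch using the illustrative `myNewTool`/`callMyNewTool` from above:

```typescript
import { Server } from "@modelcontextprotocol/sdk/server/index.js";
import {
  CallToolRequestSchema,
  ListToolsRequestSchema,
} from "@modelcontextprotocol/sdk/types.js";

function registerTools(server: Server) {
  // Advertise the tool to clients.
  server.setRequestHandler(ListToolsRequestSchema, async () => ({
    tools: [myNewTool],
  }));

  // Dispatch incoming tool calls to the matching handler.
  server.setRequestHandler(CallToolRequestSchema, async (request) => {
    if (request.params.name === "my_new_tool") {
      return callMyNewTool(request.params.arguments);
    }
    throw new Error(`Unknown tool: ${request.params.name}`);
  });
}
```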
- `GEMINI_API_KEY`: Required for the client's AI integration
- `PORT`: Server port (defaults to 3000)
The server can be configured with different capabilities:
- Tools: Enable/disable tool support
- Logging: Configure logging levels
- Transport: Customize transport settings
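With the low-level SDK, for instance, capabilities are declared when the server is constructed; a minimal sketch, where the exact capability set this project enables is an assumption:

```typescript
import { Server } from "@modelcontextprotocol/sdk/server/index.js";

const server = new Server(
  { name: "google-scholar", version: "1.0.0" },
  {
    capabilities: {
      tools: {},    // advertise tool support
      logging: {},  // advertise logging support
    },
  }
);
```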
The system includes comprehensive error handling:
- Server Errors: JSON-RPC compliant error responses
- Transport Errors: Connection and stream error handling
- Tool Errors: Graceful tool execution error handling
- Client Errors: AI model and function calling error handling
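For reference, a JSON-RPC compliant error response has this shape (the message text is illustrative):

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "error": {
    "code": -32603,
    "message": "Internal error: tool execution failed"
  }
}
```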
- Fork the repository
- Create a feature branch
- Make your changes
- Add tests if applicable
- Submit a pull request
MIT License
For issues and questions:
- Check the MCP SDK documentation
- Review the Google AI SDK documentation
- Open an issue in this repository