Releases: jurasofish/mcpunk

v0.12.0

16 Mar 01:30
0c87c99

What's Changed

New Contributors

Full Changelog: v0.11.1...v0.12.0

v0.11.1

02 Mar 10:01
92130dd

What's Changed

Full Changelog: v0.11.0...v0.11.1

v0.11.0

02 Mar 08:53
6ac159b

What's Changed

  • chunk_details tool uses chunk_id only, looks up project/file itself by @jurasofish in #50
  • Update dependencies w/ minor refactors by @jurasofish in #51
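The `chunk_details` change in #50 can be sketched roughly as follows. This is an illustrative mock, not mcpunk's actual internals: the `Chunk` and `ChunkIndex` names are hypothetical. The idea is that chunk ids are globally unique, so the tool can resolve the owning project and file itself instead of requiring the caller to pass them back in.

```python
from dataclasses import dataclass


@dataclass
class Chunk:
    chunk_id: str
    project: str
    file: str
    contents: str


class ChunkIndex:
    """Global index from chunk id to chunk, so callers need only an id."""

    def __init__(self) -> None:
        self._by_id: dict[str, Chunk] = {}

    def add(self, chunk: Chunk) -> None:
        self._by_id[chunk.chunk_id] = chunk

    def chunk_details(self, chunk_id: str) -> str:
        # Resolve project/file from the id alone, rather than making the
        # LLM echo them back with every call.
        chunk = self._by_id[chunk_id]
        return f"{chunk.project}/{chunk.file}: {chunk.contents}"


index = ChunkIndex()
index.add(Chunk("abc123", "mcpunk", "tools.py", "def chunk_details(...): ..."))
print(index.chunk_details("abc123"))
```

Fewer required arguments also means fewer opportunities for the LLM to pass a mismatched project/file pair.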

Full Changelog: v0.10.0...v0.11.0

v0.10.0

02 Mar 07:12
ba032ba

What's Changed

Full Changelog: v0.9.0...v0.10.0

v0.9.0

02 Mar 06:32
4dc75e2

What's Changed

  • Large functions and large whole files are now handled properly, so no more LLM making stuff up because it can't read large chunks. Chunks over 10k chars (configurable) are now split up so each piece is under 10k chars. Previously these stayed as single large chunks, and the default_response_max_chars option (default 20k chars) meant the LLM couldn't see them; in my experience the LLM would generally just guess things as a result.
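The splitting behaviour can be sketched like this. A minimal illustration only, not mcpunk's actual implementation; the `split_chunk` function and the fixed-width strategy are assumptions for demonstration:

```python
def split_chunk(text: str, max_chars: int = 10_000) -> list[str]:
    """Split text into pieces, each at most max_chars long.

    Always returns at least one piece, so empty input yields [""].
    """
    return [text[i : i + max_chars] for i in range(0, len(text), max_chars)] or [""]


# A 25k-char chunk becomes three pieces (10k + 10k + 5k), each small enough
# to fit under the response size limit.
parts = split_chunk("x" * 25_000)
print([len(p) for p in parts])  # [10000, 10000, 5000]
```

With every piece under the limit, the response-size cap no longer silently hides whole functions or files from the LLM.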

Full Changelog: v0.8.0...v0.9.0

v0.8.0

02 Mar 05:13
1dd6bab

What's Changed

Full Changelog: v0.7.0...v0.8.0

v0.7.0

26 Feb 22:21
a332051

Basically, works with Mirascope now.

For example, the script below. This is VERY scrappy, just a PoC.

import asyncio
import logging
from collections.abc import Sequence
from pathlib import Path
from typing import Any, cast

import dotenv
from mirascope import BaseMessageParam, BaseTool, ToolCallPart, ToolResultPart
from mirascope.core import anthropic
from mirascope.core.anthropic import AnthropicCallResponse
from mirascope.mcp.client import (  # type: ignore[attr-defined]
    StdioServerParameters,
    create_mcp_client,
)

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger(__name__)


if (__dotenv_file := Path(__file__).parent.parent / ".env").exists():
    dotenv.load_dotenv(__dotenv_file)


mcp_server_params = StdioServerParameters(
    command="/Users/michael/.local/bin/uvx",
    # args=["mcpunk"],
    args=["--from", "/Users/michael/git/mcpunk", "--no-cache", "mcpunk"],
    env=None,
)


async def llm_call_with_mcp_tool_loop(
    system_prompt: str,
    initial_query: str,
    repo: Path,
    max_iter: int = 100,
) -> str:
    async with create_mcp_client(mcp_server_params) as mcp_client:
        _tools = await mcp_client.list_tools()
        tools: Sequence[type[BaseTool]] = cast(Sequence[type[BaseTool]], _tools)
        print("tools", tools)

        @anthropic.call(  # type: ignore[misc,call-overload]
            "claude-3-5-sonnet-latest",
            tools=tools,
        )
        def do_llm_call(ctxt_: list[BaseMessageParam]) -> list[BaseMessageParam]:
            return ctxt_

        ctxt: list[BaseMessageParam] = [
            BaseMessageParam(role="system", content=system_prompt),
            BaseMessageParam(role="user", content=initial_query + f"\nproject: {repo!s}"),
        ]
        for _i in range(max_iter):
            resp: AnthropicCallResponse = cast(AnthropicCallResponse, do_llm_call(ctxt_=ctxt))
            print(f"LLM Response {resp.message_param}")

            # Now put the LLMs response back into the context. Like a chat! The LLM
            # needs to know what it previously responded with.
            for resp_block in resp.response.content:
                if resp_block.type == "text":
                    ctxt.append(BaseMessageParam(role="assistant", content=resp_block.text))
                elif resp_block.type == "tool_use":
                    ctxt.append(
                        BaseMessageParam(
                            role="assistant",
                            content=[
                                ToolCallPart(
                                    type="tool_call",
                                    name=resp_block.name,
                                    args=cast(dict[str, Any], resp_block.input),
                                    id=resp_block.id,
                                )
                            ],
                        )
                    )

            # If the machine requested tool uses, then use em and slap them back
            # in the context.
            if resp.tools:
                for tool in resp.tools:
                    try:
                        call_result = await tool.call()
                        call_result_str = str(call_result)
                        print(f"Tool response {call_result_str}")
                        ctxt.append(
                            BaseMessageParam(
                                role="user",
                                content=[
                                    ToolResultPart(
                                        type="tool_result",
                                        name=tool.tool_call.name,
                                        content=call_result_str,
                                        id=tool.tool_call.id,
                                        is_error=False,
                                    )
                                ],
                            )
                        )
                    except Exception as e:
                        # Mirascope's MCP client tends to provide very poor errors here.
                        logger.exception("There was an error calling the tool")
                        ctxt.append(
                            BaseMessageParam(
                                role="user",
                                content=[
                                    ToolResultPart(
                                        type="tool_result",
                                        name=tool.tool_call.name,
                                        content=str(e),
                                        id=tool.tool_call.id,
                                        is_error=True,
                                    )
                                ],
                            )
                        )
            else:
                # If no tools then we assume this is the final response message.
                final_response = resp.content
                print(final_response)
                return final_response
        print("Too many iterations!")
        return "Too many iterations!"


def main() -> None:
    asyncio.run(
        llm_call_with_mcp_tool_loop(
            "You are a code reviewer who is methodical and thorough. "
            "You MUST use ALL available tools extensively to analyze code. "
            "Your approach should be systematic: configure project, list files, "
            "check diffs, then examine each change in detail using multiple tool calls. "
            "A proper review requires at least 10-15 tool calls - anything less is insufficient. "
            "Explore: changed files, their dependencies, imports, and affected functionality.",
            # ...
            "Please configure the specified project and review "
            "the diff with the main branch. Use AT LEAST 10 tool calls to explore "
            "the codebase properly. Start with configuration, then file listing, then "
            "diffs, then detailed examination of each change. Your final response should be "
            "a comprehensive PR review after thorough tool-based exploration.",
            Path("~/git/mcpunk").expanduser().absolute(),
        )
    )


if __name__ == "__main__":
    main()

What's Changed

Full Changelog: v0.6.1...v0.7.0

v0.6.1

23 Feb 21:27
8e9cb90

What's Changed

Full Changelog: v0.6.0...v0.6.1

v0.6.0

23 Feb 03:31
4c8031d

What's Changed

The big change is that responses are now generally plain text and use fewer chars than before. This also makes them easier to read if you're looking at the tool responses in e.g. Claude Desktop or in the log file (~/.mcpunk/mcpunk.log by default).

Full Changelog: v0.5.1...v0.6.0

v0.5.1

23 Feb 01:11
d29db74

What's Changed

Full Changelog: v0.5.0...v0.5.1