In this tutorial, we build a compact, efficient framework that demonstrates how to convert tool documentation into standardized, callable interfaces, register those tools in a central system, and execute them as part of an automated pipeline. As we move through each stage, we create a simple converter, design mock bioinformatics tools, organize them into a registry, and benchmark both individual and multi-step pipeline executions. Through this process, we explore how structured tool interfaces and automation can streamline and modularize data workflows.
import re, json, time, random
from dataclasses import dataclass
from typing import Callable, Dict, Any, List, Tuple

@dataclass
class ToolSpec:
    name: str
    description: str
    inputs: Dict[str, str]
    outputs: Dict[str, str]

def parse_doc_to_spec(name: str, doc: str) -> ToolSpec:
    # The first non-empty line of the doc becomes the description.
    desc = doc.strip().splitlines()[0].strip() if doc.strip() else name
    # Keep only lines that look like argument declarations.
    arg_block = "\n".join([l for l in doc.splitlines() if "--" in l or ":" in l])
    inputs = {}
    for line in arg_block.splitlines():
        # Match "--flag: type" pairs (or bare words) on each line.
        m = re.findall(r"(--?\w[\w-]*|\b\w+\b)\s*[:=]?\s*(\w+)?", line)
        for key, typ in m:
            k = key.lstrip("-")
            if k and k not in inputs and k not in ["Returns", "Output", "Outputs"]:
                inputs[k] = (typ or "str")
    if not inputs:
        inputs = {"in": "str"}
    return ToolSpec(name=name, description=desc, inputs=inputs, outputs={"out": "json"})
We start by defining the structure for our tools and writing a simple parser that converts plain documentation into a standardized tool specification. This helps us automatically extract parameters and outputs from textual descriptions.
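To see what the parser produces, we can run a quick, illustrative check; the demo_doc string here is a hypothetical example, not part of the pipeline:

demo_doc = """Demo trimmer for FASTA records
--seq_fasta: str --min_len: int Outputs: json"""
spec = parse_doc_to_spec("trimmer", demo_doc)
print(spec.description)  # Demo trimmer for FASTA records
print(spec.inputs)       # {'seq_fasta': 'str', 'min_len': 'int'}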
def tool_fastqc(seq_fasta: str, min_len: int = 30) -> Dict[str, Any]:
    # Split FASTA on header lines, then compute simple QC summaries.
    seqs = [s for s in re.split(r">[^\n]*\n", seq_fasta)[1:]]
    lens = [len(re.sub(r"\s+", "", s)) for s in seqs]
    q30 = sum(l >= min_len for l in lens) / max(1, len(lens))
    gc = sum(c in "GCgc" for s in seqs for c in s) / max(1, sum(lens))
    return {"n_seqs": len(lens), "len_mean": (sum(lens) / max(1, len(lens))), "pct_q30": q30, "gc": gc}

def tool_bowtie2_like(ref: str, reads: str, mode: str = "end-to-end") -> Dict[str, Any]:
    def revcomp(s):
        t = str.maketrans("ACGTacgt", "TGCAtgca")
        return s.translate(t)[::-1]
    reads_list = [r for r in re.split(r">[^\n]*\n", reads)[1:]]
    ref_seq = "".join(ref.splitlines()[1:])
    hits = []
    for i, r in enumerate(reads_list):
        rseq = "".join(r.split())
        # A read "aligns" if it (or its reverse complement) is an exact substring.
        aligned = (rseq in ref_seq) or (revcomp(rseq) in ref_seq)
        hits.append({"read_id": i, "aligned": bool(aligned), "pos": ref_seq.find(rseq)})
    return {"n": len(hits), "aligned": sum(h["aligned"] for h in hits), "mode": mode, "hits": hits}

def tool_bcftools_like(ref: str, alt: str, win: int = 15) -> Dict[str, Any]:
    ref_seq = "".join(ref.splitlines()[1:])
    alt_seq = "".join(alt.splitlines()[1:])
    # Report position-wise mismatches between the two sequences.
    n = min(len(ref_seq), len(alt_seq))
    variants = []
    for i in range(n):
        if ref_seq[i] != alt_seq[i]:
            variants.append({"pos": i, "ref": ref_seq[i], "alt": alt_seq[i]})
    return {"n_sites": n, "n_var": len(variants), "variants": variants[:win]}

FASTQC_DOC = """FastQC-like quality control for FASTA
--seq_fasta: str --min_len: int Outputs: json"""
BOWTIE_DOC = """Bowtie2-like aligner
--ref: str --reads: str --mode: str Outputs: json"""
BCF_DOC = """bcftools-like variant caller
--ref: str --alt: str --win: int Outputs: json"""
We create mock implementations of bioinformatics tools such as FastQC, Bowtie2, and bcftools. We define their expected inputs and outputs so they can be executed consistently through a unified interface.
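Each mock tool can also be called directly as a sanity check; the two-record FASTA below is a hypothetical example:

tiny = ">a\nACGTACGTACGT\n>b\nGGGG\n"
print(tool_fastqc(tiny, min_len=8))
# {'n_seqs': 2, 'len_mean': 8.0, 'pct_q30': 0.5, 'gc': 0.625}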
@dataclass
class MCPTool:
    spec: ToolSpec
    fn: Callable[..., Dict[str, Any]]

class MCPServer:
    def __init__(self):
        self.tools: Dict[str, MCPTool] = {}
    def register(self, name: str, doc: str, fn: Callable[..., Dict[str, Any]]):
        # Convert the doc string to a spec, then store the callable under its name.
        spec = parse_doc_to_spec(name, doc)
        self.tools[name] = MCPTool(spec, fn)
    def list_tools(self) -> List[Dict[str, Any]]:
        return [dict(name=t.spec.name, description=t.spec.description, inputs=t.spec.inputs, outputs=t.spec.outputs) for t in self.tools.values()]
    def call_tool(self, name: str, args: Dict[str, Any]) -> Dict[str, Any]:
        if name not in self.tools:
            raise KeyError(f"tool {name} not found")
        spec = self.tools[name].spec
        # Pass through only the arguments declared in the tool's spec.
        kwargs = {k: args.get(k) for k in spec.inputs.keys()}
        return self.tools[name].fn(**kwargs)

server = MCPServer()
server.register("fastqc", FASTQC_DOC, tool_fastqc)
server.register("bowtie2", BOWTIE_DOC, tool_bowtie2_like)
server.register("bcftools", BCF_DOC, tool_bcftools_like)

Task = Tuple[str, Dict[str, Any]]
PIPELINES = {
    "rnaseq_qc_align_call": [
        ("fastqc", {"seq_fasta": "{reads}", "min_len": 30}),
        ("bowtie2", {"ref": "{ref}", "reads": "{reads}", "mode": "end-to-end"}),
        ("bcftools", {"ref": "{ref}", "alt": "{alt}", "win": 15}),
    ]
}

def compile_pipeline(nl_request: str) -> List[Task]:
    # Only one pipeline is defined, so both branches resolve to the same key.
    key = "rnaseq_qc_align_call" if re.search(r"rna|qc|align|variant|call", nl_request, re.I) else "rnaseq_qc_align_call"
    return PIPELINES[key]
We build a lightweight server that registers tools, lists their specifications, and allows us to call them programmatically. We also define a basic pipeline structure that outlines the sequence in which tools should run.
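Before running a full pipeline, the registry can be exercised on its own; the one-record FASTA argument here is a hypothetical example:

print([t["name"] for t in server.list_tools()])  # ['fastqc', 'bowtie2', 'bcftools']
out = server.call_tool("fastqc", {"seq_fasta": ">x\nACGTACGT\n", "min_len": 4})
print(out["n_seqs"], out["pct_q30"])  # 1 1.0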
def mk_fasta(header: str, seq: str) -> str:
    return f">{header}\n{seq}\n"

random.seed(0)
REF_SEQ = "".join(random.choice("ACGT") for _ in range(300))
REF = mk_fasta("ref", REF_SEQ)
# r1 and r3 are exact slices of the reference; r2 is a repeat that is unlikely to align.
READS = mk_fasta("r1", REF_SEQ[50:130]) + mk_fasta("r2", "ACGT" * 15) + mk_fasta("r3", REF_SEQ[180:240])
# ALT replaces the base at position 150 with "T".
ALT = mk_fasta("alt", REF_SEQ[:150] + "T" + REF_SEQ[151:])

def run_pipeline(nl: str, ctx: Dict[str, str]) -> Dict[str, Any]:
    plan = compile_pipeline(nl)
    results = []
    t0 = time.time()
    for name, arg_tpl in plan:
        # Fill "{placeholder}" templates from the context before each call.
        args = {k: (v.format(**ctx) if isinstance(v, str) else v) for k, v in arg_tpl.items()}
        out = server.call_tool(name, args)
        results.append({"tool": name, "args": args, "output": out})
    return {"request": nl, "elapsed_s": round(time.time() - t0, 4), "results": results}
We prepare small synthetic FASTA data for testing and implement a function that runs the entire pipeline. Here, we dynamically pass tool parameters and execute each step in the sequence.
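For instance, a single call drives all three steps; the request phrasing below is arbitrary, since compile_pipeline only keyword-matches the text:

report = run_pipeline("qc, align, and call variants", {"ref": REF, "reads": READS, "alt": ALT})
for step in report["results"]:
    print(step["tool"], sorted(step["output"].keys()))
# fastqc ['gc', 'len_mean', 'n_seqs', 'pct_q30'], then the bowtie2 and bcftools keys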
def bench_individual() -> List[Dict[str, Any]]:
    cases = [
        ("fastqc", {"seq_fasta": READS, "min_len": 25}),
        ("bowtie2", {"ref": REF, "reads": READS, "mode": "end-to-end"}),
        ("bcftools", {"ref": REF, "alt": ALT, "win": 10}),
    ]
    rows = []
    for name, args in cases:
        t0 = time.time()
        ok, err, out = True, None, None
        try:
            out = server.call_tool(name, args)
        except Exception as e:
            ok, err = False, str(e)
        rows.append({"tool": name, "ok": ok, "ms": int((time.time() - t0) * 1000), "out_keys": list(out.keys()) if ok else [], "err": err})
    return rows

def bench_pipeline() -> Dict[str, Any]:
    t0 = time.time()
    res = run_pipeline("Run RNA-seq QC, align, and variant call.", {"ref": REF, "reads": READS, "alt": ALT})
    ok = all(step["output"] for step in res["results"])
    return {"pipeline": "rnaseq_qc_align_call", "ok": ok, "ms": int((time.time() - t0) * 1000), "n_steps": len(res["results"])}

print("== TOOLS =="); print(json.dumps(server.list_tools(), indent=2))
print("\n== INDIVIDUAL BENCH =="); print(json.dumps(bench_individual(), indent=2))
print("\n== PIPELINE BENCH =="); print(json.dumps(bench_pipeline(), indent=2))
print("\n== PIPELINE RUN =="); print(json.dumps(run_pipeline("Run RNA-seq QC, align, and variant call.", {"ref": REF, "reads": READS, "alt": ALT}), indent=2))
We benchmark both individual tools and the full pipeline, capturing their outputs and performance metrics. Finally, we print the results to verify that each stage of the workflow runs successfully and integrates smoothly.
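Because the synthetic inputs are tiny, single-run timings are noisy. One optional extension, not part of the original code, is to average pipeline latency over several repeats:

def bench_pipeline_avg(n_runs: int = 5) -> Dict[str, Any]:
    # Hypothetical helper: repeat the full pipeline and report mean/max latency.
    times = []
    for _ in range(n_runs):
        t0 = time.time()
        run_pipeline("Run RNA-seq QC, align, and variant call.", {"ref": REF, "reads": READS, "alt": ALT})
        times.append((time.time() - t0) * 1000)
    return {"runs": n_runs, "mean_ms": round(sum(times) / n_runs, 3), "max_ms": round(max(times), 3)}

print(json.dumps(bench_pipeline_avg(), indent=2))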
In conclusion, we develop a clear understanding of how lightweight tool conversion, registration, and orchestration can work together in a single environment. We observe how a unified interface allows us to connect multiple tools seamlessly, run them in sequence, and measure their performance. This hands-on exercise helps us appreciate how simple design principles such as standardization, automation, and modularity can enhance the reproducibility and efficiency of computational workflows in any domain.
