Scientists Leverage Claude to Speed Up Research

Researchers are developing custom Claude-powered systems that dramatically accelerate scientific workflows, from analyzing massive genomic datasets in minutes instead of months to navigating hundreds of biomedical tools automatically. These AI collaborations are enabling scientists to eliminate research bottlenecks and discover patterns that human experts might miss.

Anthropic · Jan 15, 2026

In October, Anthropic introduced Claude for Life Sciences: a collection of connectors and capabilities designed to make Claude a more effective scientific partner. Since then, Anthropic has made substantial improvements to position Claude as the leading model for scientific work, with Opus 4.5 showing notable gains on benchmarks covering figure analysis, computational biology, and protein understanding. These developments grew out of collaborations with academic and industry researchers, and they reflect Anthropic's commitment to understanding how scientists use AI to advance their work.

Anthropic has been collaborating with scientists through its AI for Science program, which offers free API credits to leading researchers pursuing significant scientific work around the world.

These researchers have built specialized systems that use Claude well beyond familiar applications like literature search or coding help. In the laboratories profiled here, Claude acts as a partner across the full research cycle: helping choose experiments more efficiently and cheaply, wielding many tools to compress months-long projects into hours, and spotting patterns in large datasets that human experts might miss. Often it removes bottlenecks by handling expertise-heavy tasks that previously could not scale; occasionally it enables research approaches that were simply not possible before.

Claude is beginning to change how scientists work, pointing them toward new scientific discoveries and understanding.

Biomni: A Versatile Biomedical Platform with Extensive Tool and Database Integration

Biological research faces a challenge with tool fragmentation: numerous databases, software applications, and protocols exist, requiring researchers to invest considerable time choosing and learning different systems. Ideally, this time would be allocated to conducting experiments, analyzing results, or exploring new projects.

Biomni, an AI agent system developed at Stanford University, integrates numerous tools, applications, and datasets within one framework that a Claude-enabled agent can operate. Researchers submit natural language requests; Biomni selects suitable resources automatically. The system can develop hypotheses, create experimental procedures, and conduct analyses spanning over 25 biological disciplines.
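The core pattern here, many tools behind one natural-language interface, can be sketched in a few lines. Everything below is hypothetical (the tool names, descriptions, and the keyword heuristic standing in for the model's tool choice); the real Biomni delegates selection to Claude itself.

```python
# Minimal sketch of an agent-style tool registry, loosely inspired by the
# pattern Biomni uses. All names are invented for illustration; the real
# system lets the model, not a keyword heuristic, pick the tool.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Tool:
    name: str
    description: str  # shown to the model so it can pick the right tool
    run: Callable[[str], str]

TOOLS = [
    Tool("gwas_pipeline", "run a genome-wide association study on a cohort",
         lambda q: f"GWAS results for: {q}"),
    Tool("protocol_designer", "draft a wet-lab molecular cloning protocol",
         lambda q: f"protocol draft for: {q}"),
    Tool("literature_search", "search biomedical literature for a topic",
         lambda q: f"papers about: {q}"),
]

def route(query: str) -> str:
    """Stand-in for model tool selection: overlap with description words."""
    words = set(query.lower().split())
    best = max(TOOLS, key=lambda t: len(words & set(t.description.split())))
    return best.run(query)

print(route("run a GWAS on the perfect-pitch cohort"))
```

The design point this illustrates is that adding a capability means registering one more described tool, not retraining anything; the agent's job is routing and orchestration.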

Take genome-wide association studies (GWAS) as an example: investigations that search for genetic variants linked to a particular trait or condition. Perfect pitch, for instance, is strongly genetically influenced. Scientists examine large populations, comparing individuals who can identify musical notes without a reference tone against those who can't carry a tune, and scan their genomes for variants that appear more often in one group than the other.

Genome scanning itself is comparatively straightforward. The analysis and interpretation are what take time: genomic data arrives in complex formats that need thorough cleaning; scientists must account for confounders and handle missing data; and once significant results are found, understanding what they mean takes further work, such as determining which genes lie nearby (GWAS identifies genomic locations, not genes), which cell types express them, and which biological processes might be affected. Each step can involve different tools, file formats, and many manual choices. The process is laborious: a single GWAS can take months. An initial Biomni test, however, completed one in 20 minutes.
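The statistical core of the association step is simple to show in isolation. The sketch below runs a 2x2 chi-square test on per-variant allele counts between cases and controls; the variant IDs and counts are invented, and real pipelines add the quality control, covariate adjustment, and multiple-testing correction described above.

```python
# Toy version of the central GWAS arithmetic: for each variant, test
# whether allele counts differ between cases and controls.
def chi_square_2x2(a, b, c, d):
    """Chi-square statistic for the 2x2 table [[a, b], [c, d]]."""
    n = a + b + c + d
    num = n * (a * d - b * c) ** 2
    den = (a + b) * (c + d) * (a + c) * (b + d)
    return num / den if den else 0.0

# invented allele counts: (cases_alt, cases_ref, controls_alt, controls_ref)
variants = {
    "rs0001": (90, 110, 40, 160),  # alt allele enriched in cases
    "rs0002": (50, 150, 48, 152),  # roughly equal frequencies
}

for rsid, counts in variants.items():
    print(f"{rsid}: chi2 = {chi_square_2x2(*counts):.2f}")
```

A large statistic (rs0001 here) flags a candidate locus; the months of downstream work the text describes begin after this point, not before it.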

This may sound too good to be true: how reliable is such AI-based analysis? The Biomni developers have validated the system through multiple case studies across different domains. In one, Biomni designed a molecular cloning procedure; in a blind assessment, the protocol matched one produced by a postdoctoral researcher with over five years of experience. In another, Biomni processed data from more than 450 wearable device files from 30 individuals (combining continuous glucose monitoring, temperature, and activity measurements) in just 35 minutes, work projected to take human specialists three weeks. Biomni has also examined gene expression data from more than 336,000 single cells from human embryonic samples, confirming known regulatory connections while flagging transcription factors (gene-controlling proteins) not previously associated with human embryonic development.

Biomni isn't flawless, which is why it includes safeguards for detecting when Claude strays, and it cannot yet do everything out of the box. Where limitations exist, specialists can build skills: teaching the agent how experts solve a problem rather than letting it improvise. When the developers worked with the Undiagnosed Diseases Network on rare disease diagnosis, they found that Claude's default approach differed significantly from clinical practice. They consulted specialists, systematically recorded their diagnostic workflow, and taught it to Claude. With this newly explicit, formerly tacit expertise, the agent succeeded.

Biomni illustrates one strategy: a general-purpose platform that consolidates many tools. Other laboratories take a different path, building specialized systems that target particular bottlenecks in their own research workflows.

Cheeseman Lab: Streamlining Analysis of Extensive Gene Deletion Studies

Scientists studying gene function often delete genes from cells or organisms to observe what breaks. CRISPR gene-editing technology, introduced around 2012, made this possible accurately and at scale. But CRISPR's potential remained constrained: laboratories produced far more data than they could analyze.

This challenge confronts Iain Cheeseman's laboratory at the Whitehead Institute and MIT's Biology Department. Through CRISPR, they eliminate thousands of distinct genes throughout millions of human cells, photographing individual cells to identify changes. Image patterns show that genes performing comparable functions typically cause similar cellular damage when eliminated. Automated software can recognize these patterns and categorize genes-Cheeseman's team developed Brieflow (referencing brie cheese) for this purpose.
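The grouping idea behind a tool like Brieflow can be illustrated with a toy sketch: represent each knockout by an image-derived phenotype profile and cluster genes whose profiles look alike. The gene names, feature values, and the greedy similarity grouping below are all invented for illustration; the real pipeline extracts features from millions of cell images.

```python
# Toy sketch: genes whose knockouts produce similar phenotype profiles
# get grouped together. Profiles and thresholds here are made up.
import math

def cosine(u, v):
    """Cosine similarity between two feature vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

# phenotype profile per knockout: (nucleus size, spindle defects, cell count)
profiles = {
    "KNL1":  (0.9, 2.1, -1.2),   # kinetochore gene
    "NDC80": (1.0, 2.0, -1.1),   # kinetochore gene, similar phenotype
    "RPL3":  (-0.8, 0.1, -2.5),  # ribosomal gene, different phenotype
}

def cluster(profiles, threshold=0.95):
    """Greedy single-linkage grouping by cosine similarity."""
    clusters = []
    for gene, vec in profiles.items():
        for group in clusters:
            if any(cosine(vec, profiles[g]) >= threshold for g in group):
                group.append(gene)
                break
        else:
            clusters.append([gene])
    return clusters

print(cluster(profiles))
```

In this toy run the two kinetochore genes land in one cluster and the ribosomal gene in another, which is exactly the kind of guilt-by-association signal the interpretation step then has to explain.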

But understanding what these gene groupings mean, why they cluster together, what they might share, and whether they represent established biology or novel findings, still demands expert literature review, gene after gene. The process is time-consuming. A single screen can generate hundreds of clusters, and most go unexplored for lack of time, resources, or comprehensive knowledge of cellular function.

For years, Cheeseman handled the interpretation himself. He can recall the functions of roughly 5,000 genes from memory, but a thorough analysis of the data demands hundreds of hours. To speed this up, graduate student Matteo Di Bernardo built an automated system that replicates Cheeseman's methodology. Working closely with Cheeseman to understand his interpretation process, which data sources he uses, which patterns he looks for, what makes a finding interesting, they created MozzareLLM, a Claude-based system (continuing the cheese naming tradition).

The system analyzes gene clusters at an expert level: determining shared biological functions, distinguishing well-characterized genes from poorly studied ones, and identifying promising candidates for follow-up. This considerably speeds the lab's research while facilitating important biological findings. Cheeseman reports that Claude regularly spots things he overlooked. "Each review reveals something I missed! These represent verifiable, comprehensible discoveries," he notes.

MozzareLLM's value extends beyond single applications: it combines varied information and applies scientific reasoning. Significantly, it assigns confidence ratings to conclusions, which Cheeseman considers essential for resource allocation decisions regarding follow-up investigations.

During MozzareLLM's development, Di Bernardo evaluated several AI models. Claude outperformed the alternatives, correctly recognizing an RNA modification mechanism that other models dismissed as random variation.

Cheeseman and Di Bernardo plan to publish Claude-annotated datasets publicly, enabling specialists from other domains to investigate clusters their own team lacks the capacity to pursue. Mitochondrial experts, for example, could examine mitochondrial clusters that Cheeseman's group identified but never investigated. As more laboratories adopt MozzareLLM for CRISPR studies, it could speed the characterization and validation of genes whose functions have remained unclear for years.

Lundberg Lab: Evaluating AI-Generated Hypotheses for Gene Selection

The Cheeseman laboratory uses optical pooled screening, which allows many genes to be knocked out simultaneously in a single experiment; their challenge is interpretation. But pooled methods don't suit every cell type. Some laboratories, including Stanford's Lundberg Lab, run smaller, targeted screens and face an earlier obstacle: choosing which genes to target in the first place.

With a single focused screen potentially exceeding $20,000 and costs scaling with size, laboratories usually select the few hundred genes deemed most likely to be involved in a given condition. The traditional approach has teams of graduate students and postdocs collaborating in Google spreadsheets, incrementally adding candidate genes with brief justifications or literature references. This is informed speculation, drawing on literature review, domain knowledge, and instinct, but it is limited by human capacity, and it is imperfect, relying on existing publications and participants' recall.

The Lundberg Lab employs Claude to reverse this method. Rather than asking "what predictions arise from existing research?", their framework asks "what deserves investigation, considering molecular characteristics?"

The researchers built a comprehensive map of the cell's molecules (proteins, RNA, DNA) and their interrelationships, documenting protein interactions, gene-product relationships, and structural similarities between molecules. They can then specify an objective, such as genes potentially controlling a specific cellular component or function, and Claude explores the map, selecting candidate genes based on their biological characteristics and connections.
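The map-exploration idea as described reduces to a graph problem: given an interaction network and a seed set of genes with established roles, rank other genes by how strongly they connect to that set. The sketch below is a hypothetical minimal version; the gene names, edges, and the simple neighbor-count score are invented, and the lab's actual map and scoring are far richer.

```python
# Minimal sketch of candidate ranking over a molecular interaction map.
# Graph, gene names, and scoring are illustrative only.
from collections import defaultdict

edges = [
    ("IFT88", "IFT52"), ("IFT88", "GENE_X"), ("IFT52", "GENE_X"),
    ("IFT88", "GENE_Y"), ("GENE_Z", "ACTB"),
]

graph = defaultdict(set)
for a, b in edges:          # build an undirected adjacency map
    graph[a].add(b)
    graph[b].add(a)

known_cilia = {"IFT88", "IFT52"}  # seed genes with established roles

def rank_candidates(graph, seeds):
    """Score each non-seed gene by its number of links into the seed set."""
    scores = {g: len(graph[g] & seeds) for g in graph if g not in seeds}
    return sorted(scores.items(), key=lambda kv: -kv[1])

print(rank_candidates(graph, known_cilia))
```

Here the invented GENE_X, linked to both seed genes, outranks genes with one or zero links, which is the shape of prioritization the text describes: candidates chosen by molecular connectivity rather than by what the literature already says.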

The Lundberg team is now testing how well this methodology works. They needed a minimally researched subject, since a well-studied area might let Claude simply recall published results. They chose primary cilia: antenna-like cellular structures that are poorly understood yet linked to multiple developmental and neurological conditions. They will run a comprehensive genome-wide screen to determine which genes actually influence cilia formation, establishing a factual baseline.

The experiment pits human specialists against Claude. The humans make predictions using the spreadsheet method; Claude produces recommendations via the molecular relationship map. If Claude identifies, say, 150 of 200 true hits while the humans find 80 of 200, the method is clearly superior. Even at comparable hit rates, Claude would likely work faster, improving research efficiency.
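The planned evaluation reduces to a recall comparison against the genome-wide screen's ground truth. A minimal sketch, using invented gene identifiers and the hypothetical 150-of-200 versus 80-of-200 figures from the text:

```python
# Recall of each prediction set against the screen's true hits.
# All gene names and counts are placeholders mirroring the text's example.
def recall(predicted, true_hits):
    """Fraction of true hits that appear in a prediction set."""
    return len(set(predicted) & set(true_hits)) / len(true_hits)

true_hits = {f"gene{i}" for i in range(200)}      # screen ground truth
claude_picks = {f"gene{i}" for i in range(150)}   # 150 correct predictions
human_picks = {f"gene{i}" for i in range(80)}     # 80 correct predictions

print(f"Claude recall: {recall(claude_picks, true_hits):.2f}")
print(f"Human recall:  {recall(human_picks, true_hits):.2f}")
```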

If it works, this approach could become a standard preliminary step for targeted perturbation screens. Rather than relying on intuition or today's prevalent brute-force techniques, laboratories could make informed gene-targeting decisions, achieving better outcomes without needing genome-wide screening infrastructure.

Future Outlook

These systems remain imperfect. Yet they show how quickly scientists have adopted AI as capable research collaborators that go well beyond basic assistance, progressively accelerating, and occasionally substituting for, many components of the research process.

Conversations with these laboratories surfaced a recurring observation: the tools they have built keep getting more effective as the underlying AI improves. Every model update delivers observable gains. Models from two years ago could only write code or summarize papers; today's stronger agents have begun, gradually, to replicate the actual research those papers describe.

As technologies progress and AI systems become increasingly sophisticated, Anthropic continues observing and understanding scientific discovery's concurrent evolution.

For more on the enhanced Claude for Life Sciences features, visit the solutions page and the tutorials. Anthropic continues to accept submissions for its AI for Science program; submissions are evaluated by Anthropic's team with input from field-specific experts.