
AI
Designing AI for Researchers: Lessons from 6 Months of Remy
Anthony Lam
February 25, 2026
Market Research
Articles

After six months of real-world use, we analyzed dozens of Remy sessions to understand how researchers actually work with an AI partner. From distinct user archetypes to the “frame-and-prove” rhythm that drives analysis, here’s what we’ve learned—and how it’s shaping what we build next.

Over the past six months, we’ve watched researchers put our agentic AI research partner, Remy, to work on real projects: the kind where a brand team needs a concept recommendation by Friday, or an employee experience lead needs to walk into a leadership meeting with evidence that a new initiative is landing.
We wanted to understand what was happening inside those sessions. Not just how often people used Remy, but how they used it: what they asked, how their questions evolved over a thread, and what that revealed about how researchers actually think when they have a conversational AI partner alongside them.
So we analyzed thousands of interactions, spanning dozens of active workspaces. We coded queries, identified underlying analytical needs, read threads end to end, and clustered behavioral patterns to see whether distinct usage profiles emerged.
They did. Here are three things we learned.
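The first step of the analysis described above, grouping workspaces by how they use Remy, can be sketched in Python. Everything below is illustrative: the query logs, team names, and query-type labels are invented, and this simple "dominant query type" grouping stands in for whatever clustering method was actually used.

```python
from collections import Counter

# Hypothetical query logs: (workspace_id, query_type) pairs.
# The query types mirror the categories in the article: summaries,
# quote retrieval, theme extraction, and data validation.
LOGS = [
    ("team_a", "summary"), ("team_a", "summary"), ("team_a", "theme"),
    ("team_b", "quote"), ("team_b", "quote"), ("team_b", "quote"),
    ("team_b", "summary"),
    ("team_c", "validate"), ("team_c", "quote"), ("team_c", "validate"),
]

def query_mix(logs):
    """Per-workspace proportions of each query type."""
    by_team = {}
    for team, qtype in logs:
        by_team.setdefault(team, Counter())[qtype] += 1
    return {
        team: {q: n / sum(c.values()) for q, n in c.items()}
        for team, c in by_team.items()
    }

def dominant_mode(mix):
    """Label each workspace by its most common query type --
    a crude first pass before any real clustering."""
    return {team: max(props, key=props.get) for team, props in mix.items()}

mix = query_mix(LOGS)
labels = dominant_mode(mix)
```

On this toy data, `team_a` comes out as a summarizer, `team_b` as a quote harvester, and `team_c` as a validator, which is exactly the kind of divergence the aggregate numbers hide.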
1. There Is No “Average” User
When we looked at aggregate patterns, the story was clean: summary requests were the most common query type, followed by quote retrieval and theme extraction. But when we looked at how individual teams actually use Remy, those averages collapsed.
Some teams use Remy almost exclusively for summarization: structured, templated, efficient. Others barely summarize at all, instead running deep multi-turn threads that systematically harvest quotes and proof points over 15+ exchanges. Still others treat Remy as a methodology auditor, probing sample sizes and validating whether outputs match what the data actually supports.
We identified several distinct behavioral archetypes, but a few stand out:
Concept & Creative Evaluators (the largest group) use Remy to evaluate stimuli like ads, messaging territories, or product concepts. Each thread is a “work unit” for a single concept, following a summarize-compare-refine loop.
Evidence Miners run the deepest threads in our dataset. Their sessions are built around harvesting quotes with specific criteria (segment filters, agreement thresholds) to build evidence for stakeholder presentations.
Insight Synthesizers work across the broadest range of query types, extracting themes, distilling key insights, and testing whether findings generalize. Their sessions are the most analytically diverse.
Data Interrogators are the rigor seekers, asking what data is available and whether outputs are accurate before trusting any conclusion.
The variation isn’t noise. A summary request from a Concept Evaluator means “orient me to this stimulus.” The same request from someone building a strategy deck means “give me the baseline before I ask for recommendations.” Same query, different job.
Remy User Tip: The way you approach Remy should reflect the job you’re doing. Evaluating concepts? Treat each thread as one concept. Building evidence? Plan for longer threads and specify your selection criteria. Remy adapts to intent: the clearer yours is, the better the results.
2. Researchers “Frame and Prove,” Over and Over
When we examined how people asked their questions, a consistent rhythm emerged. Nearly half of all question behavior fell into two complementary modes: broad framing (“What’s going on with this topic?”) and targeted evidence extraction (“Show me quotes that support this”).
We call this the frame-and-prove rhythm. A researcher orients to the data and establishes a baseline, then shifts into extraction, pulling specific quotes, counts, or distributions. Then they return to framing, re-scoping or pivoting to a new angle. The cycle repeats.
It’s an oscillation more than a linear funnel. And it’s remarkably consistent across user types, even though different archetypes spend different proportions of time in each mode.
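Measuring that oscillation is straightforward once each query has been hand-coded as one of the two modes. The sketch below is illustrative, with an invented thread; the real coding scheme and data are not shown in this post.

```python
# Hypothetical thread, with each query hand-coded as broad "frame"
# or targeted "prove" -- the two modes described above.
modes = ["frame", "prove", "prove", "frame", "prove", "frame", "frame"]

def oscillations(modes):
    """Count switches between framing and proving across one thread.

    A high count relative to thread length means the researcher is
    cycling between orientation and evidence extraction, rather than
    moving through a one-way funnel.
    """
    return sum(1 for a, b in zip(modes, modes[1:]) if a != b)

switches = oscillations(modes)  # 4 mode switches in 7 queries
```

Four switches in a seven-query thread is the back-and-forth signature of frame-and-prove, as opposed to a single frame-then-extract pass.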
We also found that more than half of all queries occur deep in a conversation (at message four or later within the same thread). This means Remy is being used as a dialogue partner, not a search engine. The most valuable analysis happens not in the first question, but in the iterative refinement that follows.
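The depth statistic above is easy to reproduce on your own logs. The thread data here is invented for illustration; the only real claim is the definition, the share of queries sitting at message position four or later.

```python
# Hypothetical threads: each inner list holds the 1-based positions
# of user queries within one conversation.
threads = [
    [1, 2],                          # a short, search-style thread
    [1, 2, 3, 4, 5, 6],              # a mid-length dialogue
    [1, 2, 3, 4, 5, 6, 7, 8, 9],     # a deep evidence-mining thread
]

def deep_query_share(threads, depth=4):
    """Fraction of all queries that occur at message `depth` or later."""
    positions = [p for thread in threads for p in thread]
    deep = sum(1 for p in positions if p >= depth)
    return deep / len(positions)

share = deep_query_share(threads)  # just over half on this toy data
```

If your own share lands above 0.5, as ours did, your users are treating the tool as a dialogue partner rather than a one-shot search box.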
Remy User Tip: Your first question is a starting point, not the destination. After an initial summary or theme extraction, push Remy with specifics: “Which segments feel differently?” “What quotes support that?” “Which concept has the strongest signal?” The researchers getting the most from Remy are the ones who lean into the conversation.
3. Evidence Portability Is the Biggest Unmet Need
If one finding appeared consistently across every layer of our analysis, it’s this: researchers spend a significant portion of their Remy time assembling evidence, and the tools for making that evidence portable aren’t yet where they need to be.
Quote retrieval was the second most common query type. Evidence assembly (harvesting proof points to substantiate a narrative) ranked among the top three underlying needs. And at the behavioral level, some teams build entire multi-turn threads around this single job.
Once proof points are assembled inside a thread, getting them out into a deck, a report, or a brief still requires manual effort. The thread is the workspace, but the deliverable lives somewhere else.
This isn’t limited to one user type. Concept Evaluators need side-by-side comparisons. Synthesizers want themes linked to supporting verbatims. Strategy Builders want recommendations that carry their evidence with them. The desire to move from “insight in a thread” to “evidence in a deliverable” is universal.
Remy User Tip: Be explicit about what you need from quotes: segments, thresholds, tone. Ask Remy to format outputs closer to your final deliverable (bullets, not paragraphs). Dedicate specific threads to evidence harvesting so proof points are consolidated, not scattered.
What We’re Most Excited About Next
These learnings are directly shaping what we’re building.
Enhancing evidence and making it portable. We’re investing in storytelling outputs like charts and exports so the quotes, comparisons, and data points surfaced in a thread can move into presentation-ready formats with attribution intact.
Extending the conversation. Remy’s strength today is analysis. We’re extending capabilities across more of the research lifecycle, so the conversation that starts with “what does this data say?” can connect to “what should we ask next?” and “how do we present this?”
Coming Soon: Best Practice Guide to Remy Analysis
These findings have given us a sharper picture of what separates a good Remy session from a great one. We’re developing an updated Best Practice Guide that translates these patterns into practical guidance: recommended workflows, example prompts, thread strategy, and tips for each analysis style. Whether you’re running complex multi-segment evaluations or summarizing a study for the first time, the guide is designed to help you get to stronger outputs faster. Look for it in the coming weeks.
At Remesh, we believe the future of qualitative research is conversational, evidence-grounded, and researcher-led. Remy is our expression of that belief: purpose-built for the way researchers actually work, informed by what we’re learning from how they use it.
Want to see Remy in action? Watch the live Remy demo here.