How to Turn Creator Ad Performance Screenshots Into a Keep-Cut-Retest Plan With Gemini
Use screenshots of creator ad metrics, comments, and thumbnails to turn messy campaign evidence into a cleaner keep-cut-retest plan for the next batch.
The problem this solves and who it is for
This workflow is for UGC managers, performance marketers, and founders who already have campaign evidence but need a faster way to decide what to keep, what to cut, and what deserves one more test. The raw inputs often live as screenshots from ad dashboards, comment threads, and thumbnail comparisons. They are real, but they are messy.
A keep-cut-retest plan is more useful than a vague performance recap. It turns evidence into the next production decision: keep this hook, cut this framing, retest this creator with a different opener, or stop using this proof angle.
Prerequisites
- A Gemini account
- Screenshots of the performance evidence you want to review
- A short note that explains the objective, platform, spend window, and which metrics matter most
- Optional: a simple typed table with the exact numbers if the screenshots are dense or small
How to capture or gather the source material
- Choose one testing window or campaign slice. Do not mix unrelated time periods into one review.
- Capture screenshots that clearly show the metric labels, creator names, hook variants, or comments you want analyzed.
- If some metrics are hard to read, create a short table in plain text with the essential numbers. The model drafts the decision memo more reliably when the key numbers are explicit.
- Add one note with your evaluation frame, such as "prioritize click-through and hold rate" or "prioritize comment quality and cost per acquisition."
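If your screenshots are dense, a minimal sketch of turning the key numbers into a paste-ready plain-text table (the creator names, hooks, and metric values below are invented placeholders):

```python
# Build a compact plain-text metrics table to paste alongside screenshots.
# All names and values are hypothetical examples.
rows = [
    {"creator": "Creator A", "hook": "pain-point opener", "ctr": 1.8, "hold_3s": 62, "cpa": 24.10},
    {"creator": "Creator B", "hook": "testimonial opener", "ctr": 1.1, "hold_3s": 48, "cpa": 31.75},
]

def metrics_table(rows):
    header = f"{'creator':<12}{'hook':<22}{'CTR %':>7}{'3s hold %':>11}{'CPA $':>8}"
    lines = [header, "-" * len(header)]
    for r in rows:
        lines.append(
            f"{r['creator']:<12}{r['hook']:<22}{r['ctr']:>7.1f}{r['hold_3s']:>11d}{r['cpa']:>8.2f}"
        )
    return "\n".join(lines)

print(metrics_table(rows))
```

Pasting a table like this next to the screenshots keeps the exact numbers unambiguous even when the images are small.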
Step-by-step workflow
- Assemble the evidence first. Upload the screenshots and paste the short metrics note into one Gemini chat.
- Ask for evidence grouping first. Have Gemini separate hook evidence, creator evidence, format evidence, and comment evidence.
- Ask for a keep-cut-retest memo second. Make the output decision-oriented rather than descriptive.
- Add one constraint line. For example: limited budget, one creator slot left, or only vertical 15-second assets next round.
- Use the memo to plan the next batch. The goal is not to admire last week’s data. It is to decide the next creative move.
- Store the final memo with the screenshots. That gives future reviewers the evidence and the decision in one place.
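The last step above can be scripted. A minimal sketch, assuming local screenshot files and one folder per review window (the folder layout and file names are assumptions, not a fixed convention):

```python
# Bundle the final memo with its evidence screenshots in one review folder,
# so future reviewers get the decision and the evidence together.
import shutil
from pathlib import Path

def archive_review(window_label, memo_text, screenshot_paths, root="reviews"):
    """Write the memo and copy each screenshot into reviews/<window_label>/."""
    folder = Path(root) / window_label
    folder.mkdir(parents=True, exist_ok=True)
    (folder / "keep-cut-retest-memo.txt").write_text(memo_text)
    for shot in screenshot_paths:
        shutil.copy(shot, folder / Path(shot).name)
    return folder
```

A later reviewer then opens one folder and sees both what was decided and the evidence it rested on.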
Tool-specific instructions
Primary recommendation: Gemini
Gemini is a strong fit because Google documents file and image analysis support in Gemini Apps, which makes it practical for screenshot-heavy creative review where the raw evidence is visual and quick to upload.
Practical setup:
- Use one chat per review window.
- Upload metric screenshots, comment screenshots, and thumbnail comparisons together.
- Paste a short note with the success metric and campaign context.
- Ask for evidence grouping first and decisions second.
- If the numbers are small or blurry, paste the key values in plain text instead of forcing the model to decode them from tiny screenshots.
Alternative: ChatGPT
ChatGPT is a good alternative if you want to combine screenshots with a typed metrics table or keep the review inside a Project. It is especially useful if your team wants to turn the memo into a reusable template later.
Alternative: Claude
Claude is useful when you want a more deliberate memo after you already assembled the data into a document or PDF. It works well for turning the evidence into a written decision record for the creative team.
Copy and paste prompt blocks tailored to the workflow
Gemini evidence-grouping prompt
{
  "role": "creative performance analyst",
  "task": "group creator ad evidence from screenshots and notes",
  "goal": "organize mixed campaign evidence before drafting next-step decisions",
  "instructions": [
    "Use the uploaded screenshots and pasted metrics note only.",
    "Group evidence into hook performance, creator performance, format or asset pattern, and audience response or comments.",
    "Do not invent missing metrics.",
    "Flag anything unreadable or uncertain."
  ],
  "output_format": {
    "hook_evidence": [],
    "creator_evidence": [],
    "format_evidence": [],
    "comment_or_audience_signals": [],
    "uncertain_or_missing_data": []
  }
}
Gemini keep-cut-retest prompt
{
  "role": "performance strategy lead",
  "task": "create a keep-cut-retest plan",
  "goal": "turn grouped evidence into the next production decision",
  "instructions": [
    "Use the evidence summary already created in this chat.",
    "Create three sections: keep, cut, retest.",
    "For each section, explain the practical implication for the next batch of creator assets.",
    "Keep the recommendations specific and tied to available evidence."
  ],
  "output_format": {
    "keep": [],
    "cut": [],
    "retest": [],
    "next_batch_notes": []
  }
}
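If you later script this workflow and ask the model to answer in the JSON shape above, a minimal sketch of sanity-checking the response before acting on it (the sample response string is invented, not real model output):

```python
# Sanity-check a keep-cut-retest JSON response: every section must exist,
# and empty sections are surfaced rather than silently ignored.
import json

REQUIRED = ["keep", "cut", "retest", "next_batch_notes"]

def check_memo(raw):
    memo = json.loads(raw)
    missing = [k for k in REQUIRED if k not in memo]
    empty = [k for k in REQUIRED if isinstance(memo.get(k), list) and not memo[k]]
    return memo, missing, empty

# Hypothetical model response for illustration.
sample = ('{"keep": ["pain-point hook"], "cut": [], '
          '"retest": ["Creator B with a new opener"], "next_batch_notes": []}')
memo, missing, empty = check_memo(sample)
```

An empty "cut" section, for instance, is worth a second look: it may mean the model hedged rather than that nothing underperformed.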
Quality checks
- Make sure each recommendation traces back to visible evidence or the typed metrics note.
- Keep unreadable or uncertain data labeled as uncertain.
- Turn the memo into next-step decisions, not just descriptions of performance.
- Use the same success metric throughout the review instead of changing the standard midstream.
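The first check above can be partially automated. A minimal sketch that flags recommendations mentioning none of your known evidence labels, so untraceable claims get reviewed by hand (labels and recommendations are hypothetical):

```python
# Flag recommendations that do not mention any known evidence label.
# A flagged item is not necessarily wrong; it just lacks a visible trace.
def untraceable(recommendations, evidence_labels):
    labels = [label.lower() for label in evidence_labels]
    return [rec for rec in recommendations
            if not any(label in rec.lower() for label in labels)]

evidence = ["hook A", "Creator B", "square format"]
recs = ["Keep hook A on all creators", "Cut square format", "Scale budget next week"]
```

Here "Scale budget next week" would be flagged, since it names no piece of evidence from the review.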
Common failure modes and fixes
Failure mode: The screenshots are too dense.
Fix: Paste a small summary table with the key numbers before asking for the memo.
Failure mode: The memo overstates confidence.
Fix: Tell Gemini to mark uncertain or partial evidence clearly.
Failure mode: Retest becomes a dumping ground.
Fix: Limit retests to specific hypotheses, such as a new hook on the same creator or the same hook on a different creator.
Failure mode: The decision memo ignores comments.
Fix: Upload one or two comment screenshots and ask for an audience-signal section explicitly.
Sources Checked
- https://support.google.com/gemini/answer/14903178?co=GENIE.Platform%3DDesktop&hl=en (accessed 2026-03-25)
- https://support.google.com/gemini/answer/13275745?co=GENIE.Platform%3DDesktop&hl=en (accessed 2026-03-25)
- https://help.openai.com/en/articles/8400551-chatgpt-image-inputs-faq (accessed 2026-03-25)
- https://help.openai.com/en/articles/10169521-projects-in-chatgpt (accessed 2026-03-25)
- https://support.claude.com/en/articles/8241126-uploading-files-to-claude (accessed 2026-03-25)
Quarterly Refresh Flag
Review by 2026-06-23 to confirm the live product interfaces and supported file, image, audio, project, or notebook behaviors still match the current tools.
Related Workflows
How to Turn Event Photos and a Voice Memo Into a Same-Day Recap Carousel With AI
Capture a clean photo set, pair it with a short recap memo, and turn it into a same-day carousel outline and posting checklist while the event is still fresh.
How to Turn Forum and Reddit Screenshots Into an Audience-Language Brief With AI
Collect screenshots from forums, Reddit, or app reviews and turn them into a brief that captures real reader language, objections, and question patterns.
How to Build a Messaging-Aware Content Brief From Client and Competitor Pages With AI
Start with a client site, product pages, and a few competitor pages, then turn them into a messaging-aware brief before you draft the actual content.