
How to Turn Creator Ad Performance Screenshots Into a Keep-Cut-Retest Plan With Gemini

Use screenshots of creator ad metrics, comments, and thumbnails to turn messy campaign evidence into a cleaner keep-cut-retest plan for the next batch.

Tags: gemini, ugc, ad performance, creative testing, screenshots, creator workflow

The problem this solves and who it is for

This workflow is for UGC managers, performance marketers, and founders who already have campaign evidence but need a faster way to decide what to keep, what to cut, and what deserves one more test. The raw inputs often live as screenshots from ad dashboards, comment threads, and thumbnail comparisons. They are real, but they are messy.

A keep-cut-retest plan is more useful than a vague performance recap. It turns evidence into the next production decision: keep this hook, cut this framing, retest this creator with a different opener, or stop using this proof angle.

Prerequisites

  • A Gemini account
  • Screenshots of the performance evidence you want to review
  • A short note that explains the objective, platform, spend window, and which metrics matter most
  • Optional: a simple typed table with the exact numbers if the screenshots are dense or small

How to capture or gather the source material

  1. Choose one testing window or campaign slice. Do not mix unrelated time periods into one review.
  2. Capture screenshots that clearly show the metric labels, creator names, hook variants, or comments you want analyzed.
  3. If some metrics are hard to read, create a short table in plain text with the essential numbers. The model drafts a better decision memo when the key numbers are explicit.
  4. Add one note with your evaluation frame, such as "prioritize click-through and hold rate" or "prioritize comment quality and cost per acquisition."
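
If you want to script step 3 instead of typing the table by hand, here is a minimal sketch. The field names (creator, hook, ctr, cpa) are illustrative assumptions, not taken from any particular ad platform; the helper itself is hypothetical:

```python
# Sketch: render key metrics as an aligned plain-text table to paste
# alongside the screenshots, so the model never has to decode small
# numbers from dense images. Column names here are assumptions.

def metrics_table(rows, columns):
    """Render rows (list of dicts) as an aligned plain-text table."""
    widths = {
        c: max(len(c), *(len(str(r.get(c, ""))) for r in rows))
        for c in columns
    }
    header = "  ".join(c.ljust(widths[c]) for c in columns)
    lines = [header, "-" * len(header)]
    for r in rows:
        lines.append("  ".join(str(r.get(c, "")).ljust(widths[c]) for c in columns))
    return "\n".join(lines)

rows = [
    {"creator": "A", "hook": "problem-first", "ctr": "1.8%", "cpa": "$24"},
    {"creator": "B", "hook": "testimonial", "ctr": "0.9%", "cpa": "$41"},
]
print(metrics_table(rows, ["creator", "hook", "ctr", "cpa"]))
```

Paste the resulting table into the same chat as the screenshots, next to your evaluation-frame note.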

Step-by-step workflow

  1. Assemble the evidence first. Upload the screenshots and paste the short metrics note into one Gemini chat.
  2. Ask for evidence grouping first. Have Gemini separate hook evidence, creator evidence, format evidence, and comment evidence.
  3. Ask for a keep-cut-retest memo second. Make the output decision-oriented rather than descriptive.
  4. Add one constraint line. For example: limited budget, one creator slot left, or only vertical 15-second assets next round.
  5. Use the memo to plan the next batch. The goal is not to admire last week’s data. It is to decide the next creative move.
  6. Store the final memo with the screenshots. That gives future reviewers the evidence and the decision in one place.
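
If Gemini returns the memo as JSON (as the keep-cut-retest prompt later in this article requests), a small sketch of a sanity check before step 6's filing. The section names follow that prompt's output_format; the `load_memo` helper is hypothetical:

```python
import json

# Sketch: validate a keep-cut-retest memo returned as JSON before
# storing it next to the screenshots. Section names match the
# keep-cut-retest prompt's output_format below.
REQUIRED_SECTIONS = ("keep", "cut", "retest", "next_batch_notes")

def load_memo(raw: str) -> dict:
    """Parse memo JSON and fail loudly if a required section is missing."""
    memo = json.loads(raw)
    missing = [s for s in REQUIRED_SECTIONS if s not in memo]
    if missing:
        raise ValueError(f"memo is missing sections: {missing}")
    return memo

raw = (
    '{"keep": ["problem-first hook"], "cut": ["long intro framing"], '
    '"retest": ["creator B with a new opener"], "next_batch_notes": []}'
)
memo = load_memo(raw)
print(sorted(memo))
```

Failing loudly here is deliberate: a memo with a missing section usually means the model drifted from the requested output format and the prompt should be rerun.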

Tool-specific instructions

Primary recommendation: Gemini

Gemini is a strong fit because Google documents file and image analysis support in Gemini Apps. That makes it practical for screenshot-heavy creative review, where the raw evidence is visual and fast to upload.

Practical setup:

  • Use one chat per review window.
  • Upload metric screenshots, comment screenshots, and thumbnail comparisons together.
  • Paste a short note with the success metric and campaign context.
  • Ask for evidence grouping first and decisions second.
  • If the numbers are small or blurry, paste the key values in plain text instead of forcing the model to decode them from tiny screenshots.

Alternative: ChatGPT

ChatGPT is a good alternative if you want to combine screenshots with a typed metrics table or keep the review inside a Project. It is especially useful if your team wants to turn the memo into a reusable template later.

Alternative: Claude

Claude is useful when you want a more deliberate memo after you have already assembled the data into a document or PDF. It works well for turning the evidence into a written decision record for the creative team.

Copy and paste prompt blocks tailored to the workflow

Gemini evidence-grouping prompt

{
  "role": "creative performance analyst",
  "task": "group creator ad evidence from screenshots and notes",
  "goal": "organize mixed campaign evidence before drafting next-step decisions",
  "instructions": [
    "Use the uploaded screenshots and pasted metrics note only.",
    "Group evidence into hook performance, creator performance, format or asset pattern, and audience response or comments.",
    "Do not invent missing metrics.",
    "Flag anything unreadable or uncertain."
  ],
  "output_format": {
    "hook_evidence": [],
    "creator_evidence": [],
    "format_evidence": [],
    "comment_or_audience_signals": [],
    "uncertain_or_missing_data": []
  }
}

Gemini keep-cut-retest prompt

{
  "role": "performance strategy lead",
  "task": "create a keep-cut-retest plan",
  "goal": "turn grouped evidence into the next production decision",
  "instructions": [
    "Use the evidence summary already created in this chat.",
    "Create three sections: keep, cut, retest.",
    "For each section, explain the practical implication for the next batch of creator assets.",
    "Keep the recommendations specific and tied to available evidence."
  ],
  "output_format": {
    "keep": [],
    "cut": [],
    "retest": [],
    "next_batch_notes": []
  }
}

Quality checks

  • Make sure each recommendation traces back to visible evidence or the typed metrics note.
  • Keep unreadable or uncertain data labeled as uncertain.
  • Turn the memo into next-step decisions, not just descriptions of performance.
  • Use the same success metric throughout the review instead of changing the standard midstream.

Common failure modes and fixes

Failure mode: The screenshots are too dense.
Fix: Paste a small summary table with the key numbers before asking for the memo.

Failure mode: The memo overstates confidence.
Fix: Tell Gemini to mark uncertain or partial evidence clearly.

Failure mode: Retest becomes a dumping ground.
Fix: Limit retests to specific hypotheses, such as a new hook on the same creator or the same hook on a different creator.

Failure mode: The decision memo ignores comments.
Fix: Upload one or two comment screenshots and ask for an audience-signal section explicitly.

Sources Checked

  • https://support.google.com/gemini/answer/14903178?co=GENIE.Platform%3DDesktop&hl=en (accessed 2026-03-25)
  • https://support.google.com/gemini/answer/13275745?co=GENIE.Platform%3DDesktop&hl=en (accessed 2026-03-25)
  • https://help.openai.com/en/articles/8400551-chatgpt-image-inputs-faq (accessed 2026-03-25)
  • https://help.openai.com/en/articles/10169521-projects-in-chatgpt (accessed 2026-03-25)
  • https://support.claude.com/en/articles/8241126-uploading-files-to-claude (accessed 2026-03-25)

Quarterly Refresh Flag

Review by 2026-06-23 to confirm the live product interfaces and supported file, image, audio, project, or notebook behaviors still match the current tools.

Related Workflows

How to Turn One Content Performance Export Into a Repurposing Priority Map With Gemini

Use Gemini to turn a CSV or XLSX content export into a repurposing priority map, a short decision memo, and a next-batch shortlist based on actual performance patterns.

How to Turn Shorts Analytics Into a What to Make Next Memo With AI

Use AI to turn a short-form performance export into a simple memo that tells you what to make more of next week.

How to Analyze Competitor Shorts and Build a Non-Copycat Angle Map With AI

Use AI to analyze competitor short-form videos from screenshots and notes, then build your own angle map without copying them.
