A Creative Designer
Belgrade-Based
Flag Bad Mockups
2025
Product Design
Designed the Flag Bad Mockups feature to allow users to report low-quality or incorrect mockup outputs, creating a structured feedback loop between users, internal reviewers, and AI systems. The feature improved trust, product quality, and long-term system learning without disrupting user workflows.

Context & Constraints


Dynamic Mockups relies on AI-assisted generation and complex rendering logic, which occasionally produces incorrect or low-quality results.

The challenge was to let users report issues without adding friction, blame, or support overhead, while ensuring feedback could be acted on internally.

Constraints included:

  • manual internal review (no automated moderation yet),
  • no impact on user accounts or credits,
  • and the need to reuse feedback across multiple systems (AI prompts, PSDs, rendering logic).

Problem Definition


Users had no clear way to report bad outputs when something went wrong.

This resulted in:

  • frustration and loss of trust,
  • repeated generation attempts,
  • and missed opportunities to improve the system using real-world feedback.

The product needed a lightweight reporting mechanism that benefited both users and the platform.

Strategy & Approach


I approached this as a quality feedback system, not a support feature.

The goal was to normalize failure as part of AI creation, while making it clear that feedback directly improves the product.

The experience was designed to be optional, fast, and respectful of the user’s time.

Key Design Decisions

  • Introduced a clear “Flag Bad Mockup” action at the moment of failure, not buried in support.
  • Designed a structured reporting flow with predefined reasons (e.g. incorrect rendering, visual artifacts, wrong layout) to ensure feedback was actionable.
  • Ensured reporting had no negative impact on user accounts, credits, or access.
  • Routed reports to an internal review process where feedback is used to improve AI prompts, PSD templates, and rendering logic.
  • Designed the flow as reusable Figma components, allowing future expansion (e.g. user notifications when issues are fixed).
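The structured reporting flow above can be sketched as a small data model. This is an illustrative sketch only: the type and field names below are hypothetical, not the actual Dynamic Mockups schema, but they show how predefined reasons keep feedback actionable and how a report stays free of account side effects.

```typescript
// Hypothetical shape of a flag report; all names are illustrative,
// not the platform's real schema.
type FlagReason =
  | "incorrect_rendering"
  | "visual_artifacts"
  | "wrong_layout";

interface FlagReport {
  mockupId: string;   // the generated mockup being flagged
  reason: FlagReason; // predefined reason keeps feedback actionable
  comment?: string;   // optional free text, never required
  createdAt: string;  // ISO timestamp, useful for triage ordering
}

// Reports are queued for manual internal review; the report is purely
// additive and has no effect on the user's account or credits.
function routeToReview(queue: FlagReport[], report: FlagReport): FlagReport[] {
  return [...queue, report];
}
```

Keeping the reason field to a closed set, with free text strictly optional, is what lets reports feed cleanly into the separate improvement tracks (AI prompts, PSD templates, rendering logic) without a parsing step.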

Collaboration & Execution


I worked closely with product and engineering to align UX decisions with real operational workflows.

The feature was designed in Figma, reviewed against internal processes, and implemented to support manual review while remaining scalable for future automation.

Outcome & Impact


Users gained a clear way to report issues, increasing trust in the platform.

Internally, the team received structured, high-quality feedback that could be reused across multiple improvement areas, leading to better outputs over time.

Learnings & What I’d Do Differently


Users are willing to help improve AI systems when the cost is low and intent is clear.

Next, I would close the loop by notifying users when a reported issue has been resolved, reinforcing the value of their feedback.

What This Demonstrates


This case demonstrates my ability to design feedback-driven quality systems, align user trust with internal operations, and improve AI products through thoughtful, scalable UX decisions.
