This project focused on streamlining teacher performance evaluation processes that were previously manual and time-intensive. Through rapid prototyping and auto-assessment feature integration, the project aimed to help supervisors (Principals and Education Departments) evaluate hundreds of teachers more efficiently while maintaining full supervisor control over final assessments.
Every month or quarter, supervisors (Principals or Education Department officials) are responsible for conducting teacher performance evaluations (Periodic Assessments). These evaluations are critical because they form the basis for bonus distribution. In practice, however, the process is extremely time-consuming: the number of teachers being evaluated can reach the hundreds, each assessment requires manual input, and supervisors have severely limited time due to their other responsibilities.
Against this backdrop, an initiative emerged to find solutions enabling faster evaluation processes without reducing accuracy or supervisor control. One proposed approach was implementing an auto-assessment system that would automatically provide initial scores to each teacher. Supervisors could still modify scores when necessary, preserving manual intervention capabilities.
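To make the idea concrete, here is a minimal sketch of how rule-based initial scoring with a supervisor override could work. The metrics, weights, and field names below are illustrative assumptions, not the actual ruleset used in the product.

```typescript
// Minimal sketch of a rule-based auto-assessment with supervisor override.
// Metric names and weights are illustrative assumptions, not the production ruleset.

interface TeacherMetrics {
  attendanceRate: number;       // 0..1, share of scheduled sessions attended
  lessonPlansSubmitted: number; // plans submitted this period
  lessonPlansRequired: number;  // plans expected this period
}

interface Assessment {
  teacherId: string;
  autoScore: number;            // rule-based initial score (0..100)
  finalScore: number;           // starts equal to autoScore, editable by supervisor
  overriddenBySupervisor: boolean;
}

// Basic ruleset: a weighted sum of two simple signals, scaled to 0..100.
function autoAssess(teacherId: string, m: TeacherMetrics): Assessment {
  const planCompletion =
    m.lessonPlansRequired > 0 ? m.lessonPlansSubmitted / m.lessonPlansRequired : 1;
  const autoScore = Math.round(
    100 * (0.6 * m.attendanceRate + 0.4 * Math.min(planCompletion, 1))
  );
  return { teacherId, autoScore, finalScore: autoScore, overriddenBySupervisor: false };
}

// Supervisors keep full control: any pre-filled score can be replaced manually.
function applyOverride(a: Assessment, supervisorScore: number): Assessment {
  return { ...a, finalScore: supervisorScore, overriddenBySupervisor: true };
}

// Example: pre-fill assessments for many teachers, then manually adjust one.
const assessments = [
  autoAssess("T-001", { attendanceRate: 0.95, lessonPlansSubmitted: 4, lessonPlansRequired: 4 }),
  autoAssess("T-002", { attendanceRate: 0.80, lessonPlansSubmitted: 2, lessonPlansRequired: 4 }),
];
const reviewed = applyOverride(assessments[1], 75);
console.log(assessments[0].autoScore, reviewed.finalScore, reviewed.overriddenBySupervisor);
```

Even a simple weighted ruleset like this is enough to pre-fill hundreds of rows, so the supervisor's work shifts from entering every score to reviewing and adjusting the exceptions.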
We then entered the discovery phase, creating prototypes that would serve as the primary reference for stakeholder discussions. Building a single fully functional prototype typically takes at least a full day; with the combined power of Figma and Vercel's V0, that process could be compressed to minutes.
This gave us significantly more room to explore options and design variations without time pressure. It also made discussions much faster, so decisions about timelines and execution items could be acted on immediately, without barriers.
1. Main Page Creation in Figma
The first step was designing the main page in Figma as the initial reference. The focus stayed solely on the homepage, since subsequent pages would be generated with V0's help in later steps.
2. Exporting the Design as a Visual Reference
After completing the main page design in Figma, the design was exported as an image file (PNG or JPG). This image was then fed into V0 as a visual reference for the subsequent UI generation steps.
3. Context-Appropriate Prompting
At this stage, we wrote prompts explaining the context and expectations for the desired pages or flows. When exploring multiple user-flow variations, each variation could be split into its own project. This phase was inherently trial-and-error: we kept experimenting until the results approached our expectations.
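For illustration, a prompt at this stage might look like the following. The wording is hypothetical and written against the exported homepage image; it is not the exact prompt used in the project.

```
Using the attached homepage design as the visual reference, generate the
"Periodic Assessment" list page for a school principal. Show a table of
teachers with their auto-generated initial scores, an editable score field
per row, and a bulk "Confirm Assessments" action. Keep the typography,
colors, and spacing consistent with the reference image.
```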
4. Fork Sharing in V0 (Optional)
When we wanted others to build on or review a prompt, we used V0's fork feature. This let team members share and reuse prompts, especially when iteration or input from other designers was needed.
5. Stakeholder Prototype Sharing
Once the V0 output was deemed satisfactory, the prototype could be shared immediately with stakeholders for feedback or to support their decision-making.
6. Process Repetition as Needed
This process remained flexible rather than rigid: it could be repeated and adapted to team needs or project dynamics. The goal was to create design variations as quickly as possible while keeping them relevant and immediately testable.
Positive Impact:
Speed matters when timelines are tight. Vercel + Figma proved an ideal combination for rapid yet interactive demonstrations.
Stakeholders make better decisions when presented with real experiences rather than just presentations or static mockups.
Auto-assessment doesn't need complexity from the start. Using basic rulesets initially was sufficient for communicating the bigger idea.
AI can accelerate repetitive routine processes, but maintaining manual control space remains important so users feel trusted and empowered.
Rapid prototyping enables risk-free experimentation, allowing teams to test multiple approaches quickly and pivot based on real feedback rather than assumptions.