ChatGPT User Grader

Q1. Purpose and intent (Required)
- I usually know the task, but I don't always define success before I start.
- I have a repeatable workflow for at least one task (e.g. email replies, proposals, content briefs) and I run it the same way each time.
- I start with the job-to-be-done (who/what/outcome) so the answer is aimed at a real business task.
- I use it when I'm stuck and hope it spits out something useful.
Pick the one that matches a normal busy day. Not your best day.

Q2. Context given (Required)
- I add a bit of background, but I often forget the audience and goal.
- I include constraints (UK tone, length, do/don't say, format) so it can't wander off into waffle country.
- I chuck in the question and only add context if the answer annoys me.
- I give just enough context: who it's for, what we're trying to achieve, and what 'good' looks like.
Pick the one that matches a normal busy day. Not your best day.

Q3. Constraints and rules (Required)
- I sometimes set one constraint (e.g. 'keep it short') but not consistently.
- I set the key constraints up front (tone, length, format, must-include, must-avoid).
- I rarely specify tone/length/format, then I complain it's too long.
Pick the one that matches a normal busy day. Not your best day.

Q4. Iteration (Required)
- I run a quick refine loop (tighten, simplify, structure) until it's actually ready to use.
- I take the first usable draft and tweak it myself. Job done.
- I usually accept the first answer, even if it's a bit off, because I'm busy.
- If it's not right, I do one follow-up prompt, but I don't really run a refine loop.
Pick the one that matches a normal busy day. Not your best day.

Q5. Output shape (Required)
- I ask for bullets/steps when I remember; otherwise I just take what I get.
- I let it write freely, then I reshape it myself afterwards.
- I tell it the exact output shape I want (checklist, table, script, headings) so it's usable immediately.
Pick the one that matches a normal busy day. Not your best day.

Q6. Quality control (Required)
- If it sounds confident, I assume it's fine.
- I spot-check when something feels important or expensive to get wrong.
- I actively look for hallucinations: I ask it what it's unsure about and where it might be wrong. For factual claims, I verify with an official source or ask it to cite sources I can check.
Pick the one that matches a normal busy day. Not your best day.

Q7. Reuse and templates (Required)
- I keep reusable prompt templates (even basic ones) and adapt them per task.
- Every time is a fresh prompt. I wing it.
- I've got a couple of 'favourite prompts' in my head, but they're not consistent.
Pick the one that matches a normal busy day. Not your best day.

Q8. Business integration (Required)
- It's a nice-to-have. It's not really part of how we work.
- It helps on odd jobs, but it's not embedded into processes.
- It's built into day-to-day work: comms, docs, planning, and decision support.
Pick the one that matches a normal busy day. Not your best day.

Q9. Handling complex work (Required)
- I sometimes break tasks down, but I still tend to ask for 'the whole thing' in one go.
- I use it to produce reusable artefacts (briefs, checklists, templates) rather than one-off answers.
- I break complex work into steps and get it to ask me for missing info one thing at a time.
- I ask for a big answer and hope it covers everything.
Pick the one that matches a normal busy day. Not your best day.

Q10. Outcome awareness (Required)
- I can tell when it saved time, but I don't measure it or systemise it.
- I can point to a measurable impact (time saved, fewer revisions, faster sales, better clarity) and I try to repeat that win.
- I don't really track whether it helped. I just move on.
Pick the one that matches a normal busy day. Not your best day.

Total Score (hidden field; not shown when viewing the form)

Your email (to view your result) (Required)
Enter your email to get a private link to view your result.