STEP 1: PRIORITIZE - Should we use GenAI for this?
● Business Case
1. Will this save significant time or cost?
2. Does this improve customer/employee experience?
3. Is this a repetitive task that humans find tedious?
4. Do we have success metrics defined?
● Risk & Readiness
5. Is the data we'd use clean and representative?
6. Can we start with a pilot/limited scope?
7. Do we have someone to oversee this project?
8. Is this low-stakes if it goes wrong initially?
STEP 2: APPLY - How do we implement safely?
● Data & Privacy
9. Are we using only data we're allowed to use?
10. Have we removed sensitive personal information? (see the sketch below)
11. Do we know where our data will be stored/processed?
12. Can we delete data from the AI system if needed?
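For question 10, one common practice is to redact obvious personal identifiers before any text leaves your environment. The Python sketch below is a minimal illustration only; the regular expressions, placeholder labels, and sample text are assumptions for this example, and a production setup would rely on a vetted PII-detection tool covering far more categories (names, addresses, account numbers, and so on).

```python
import re

# Illustrative patterns only: a real deployment would use a maintained
# PII-detection library and cover many more categories than these three.
PII_PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "PHONE": re.compile(r"\b(?:\+?\d{1,3}[ .-]?)?\(?\d{3}\)?[ .-]?\d{3}[ .-]?\d{4}\b"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def redact(text: str) -> str:
    """Replace matched identifiers with labelled placeholders before the
    text is sent to any external AI service."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label} REDACTED]", text)
    return text

if __name__ == "__main__":
    sample = "Contact Jane at jane.doe@example.com or 555-123-4567."
    print(redact(sample))
    # -> "Contact Jane at [EMAIL REDACTED] or [PHONE REDACTED]."
```

Note that the person's name passes through untouched here, which is exactly why question 10 deserves a proper tool rather than a handful of patterns.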
● Governance & Controls
13. Do we have approval from management and legal?
14. Are there humans reviewing AI outputs before they're used? (see the sketch below)
15. Have we trained users on limitations and proper use?
16. Is there a clear escalation path for problems?
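For question 14, a human-review gate can be as simple as refusing to release any model output that has not been explicitly approved. The sketch below is a minimal illustration under that assumption; the class and function names are hypothetical and not tied to any particular product or workflow tool.

```python
from dataclasses import dataclass
from typing import Optional

# A minimal human-in-the-loop gate: nothing the model generates is released
# until a named reviewer approves it.

@dataclass
class DraftOutput:
    prompt: str
    ai_text: str
    approved: bool = False
    reviewer: Optional[str] = None
    notes: str = ""

def review(draft: DraftOutput, reviewer: str, approve: bool, notes: str = "") -> DraftOutput:
    """Record the human decision on a draft."""
    draft.reviewer = reviewer
    draft.approved = approve
    draft.notes = notes
    return draft

def publish(draft: DraftOutput) -> str:
    """Release a draft only if it carries an explicit human approval."""
    if not draft.approved:
        raise PermissionError("Output has not been approved by a human reviewer.")
    return draft.ai_text

if __name__ == "__main__":
    draft = DraftOutput(prompt="Summarise the refund policy", ai_text="(model output here)")
    review(draft, reviewer="j.smith", approve=True, notes="Checked against policy v3")
    print(publish(draft))
```

Keeping the reviewer's name and notes on the record also supports the escalation path asked about in question 16.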
STEP 3: MEASURE - Is it working as expected?
● Performance Tracking
17. Are we hitting our success metrics? (see the sketch below)
18. Is the AI output quality consistent over time?
19. Are users actually adopting and using the tool?
20. Are we seeing the expected time/cost savings?
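For questions 17 to 20, tracking works best when each AI-assisted task is logged against the metrics defined in Step 1. The sketch below assumes a hypothetical log format and made-up target values purely for illustration; substitute the success metrics your team actually agreed on.

```python
from statistics import mean

# Targets and the log format are illustrative assumptions; in practice they
# should mirror the success metrics agreed in Step 1 and be fed by real logs.
TARGETS = {"avg_quality": 4.0, "adoption_rate": 0.60, "avg_minutes_saved": 10.0}

# One record per AI-assisted task (sample data for the sketch).
usage_log = [
    {"quality_1_to_5": 4, "output_used": True, "minutes_saved": 12},
    {"quality_1_to_5": 5, "output_used": True, "minutes_saved": 18},
    {"quality_1_to_5": 2, "output_used": False, "minutes_saved": 0},
]

def summarise(log: list) -> dict:
    """Return each tracked metric and whether it meets its target."""
    actual = {
        "avg_quality": mean(r["quality_1_to_5"] for r in log),
        "adoption_rate": sum(r["output_used"] for r in log) / len(log),
        "avg_minutes_saved": mean(r["minutes_saved"] for r in log),
    }
    return {name: (value, value >= TARGETS[name]) for name, value in actual.items()}

if __name__ == "__main__":
    for name, (value, on_target) in summarise(usage_log).items():
        status = "on target" if on_target else "below target"
        print(f"{name}: {value:.2f} ({status})")
```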
● Risk Monitoring
21. Are we checking the outputs for bias or unfairness? (see the sketch below)
22. Have there been any errors or inappropriate responses?
Note: For this question, 4 = No errors (best), 1 = Many errors (worst)
23. Are we staying within budget/resource limits?
24. Is the AI making decisions we can explain if asked?
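For questions 21 and 22, one workable routine is to review a sample of outputs, record the outcome alongside a relevant group or segment, and compare error rates. The sketch below uses invented group names, sample data, and thresholds purely to illustrate the idea; it is not a prescribed fairness methodology.

```python
from collections import defaultdict

# Illustrative only: group names, thresholds, and the reviewed sample are
# assumptions. The point is that reviewed outputs are logged with an outcome
# and a relevant group so that error rates can be compared across groups.
reviewed_sample = [
    {"group": "segment_a", "error": False},
    {"group": "segment_a", "error": True},
    {"group": "segment_b", "error": False},
    {"group": "segment_b", "error": False},
    {"group": "segment_b", "error": False},
]

MAX_ERROR_RATE = 0.10   # overall tolerance
MAX_GROUP_GAP = 0.05    # allowed gap between best- and worst-served group

def error_rates(sample):
    """Compute the error rate per group from the reviewed sample."""
    counts = defaultdict(lambda: [0, 0])   # group -> [errors, total]
    for record in sample:
        counts[record["group"]][0] += record["error"]
        counts[record["group"]][1] += 1
    return {group: errs / total for group, (errs, total) in counts.items()}

if __name__ == "__main__":
    rates = error_rates(reviewed_sample)
    overall = sum(r["error"] for r in reviewed_sample) / len(reviewed_sample)
    gap = max(rates.values()) - min(rates.values())
    print(f"Overall error rate: {overall:.0%}")
    for group, rate in rates.items():
        print(f"  {group}: {rate:.0%}")
    if overall > MAX_ERROR_RATE or gap > MAX_GROUP_GAP:
        print("ALERT: escalate for review (see question 16).")
```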
STEP 4: ASSESS - Should we continue/expand?
● Value & Impact
25. Has this delivered measurable business value?
26. Are the benefits worth the costs and risks?
27. Would we recommend this to other teams?
28. What have we learned that we can apply elsewhere?
● Ongoing Viability
29. Can we maintain this long term?
30. Are there any regulatory or policy changes we should consider?
31. Should we expand, modify or discontinue this use case?
32. What's our plan for the next review cycle?