Lately, I’ve been experimenting with ChatGPT’s Deep Research (DR) feature. This post shares one specific experiment (including the DR output) in the hope that folks (including future me) can benefit from having this as a handy reference.
But first: if you don’t yet have a copy of the Trust Insights CASINO prompt framework for Deep Research prompts, I suggest you download that straight away since the process I use relies heavily on it. The folks at TI have very kindly made the PDF available for free, without any paywalls or email signups.
The Need: I spotted this message in one of my online Slack communities:
Does any one have a good AI policy that their organization uses? We need to get an AI policy in place and what our outside counsel gave me is way too restrictive!
It got me wondering:
- Wouldn’t it be great if there was a template that people could copy/paste from so they don’t have to start their AI Policy from scratch?
- Could I possibly jumpstart the process of creating a template by using ChatGPT’s Deep Research feature?
I figured it was worth a shot.
The Experiment
Step 1. Use AI to create the Deep Research Prompt
Rather than struggling to write my own Deep Research prompt from scratch, I turned to ChatGPT, with help from the Trust Insights CASINO prompt framework.
Me in a new chat:
I’d like some help creating an exhaustive Deep Research prompt that follows the best practice in the attached “Trust Insights Deep Research CASINO prompt framework” PDF. Please ask me a series of questions, step by step, one at a time, until we get to a deep research prompt that I can use in a separate chat.
ChatGPT helpfully asked a series of questions, which I gamely answered.
Below is a lightly redacted copy of our conversation. You can also view the actual ChatGPT conversation if you prefer.
ChatGPT:
Step 1: Context
Question:
What is the task or topic you are researching? Please describe it clearly — think about what you want the research to explore, investigate, or explain. Feel free to give any background that might help me understand the situation.
Me:
I’d like to come up with a template that people can use to craft the AI Policy of their company. It should serve as the starting point for a Chief of Staff or Chief AI Officer who has been tasked with driving a task force at the operating committee level that has been formed for this purpose. I want the template to be easy to use and to provide actual language that can be modified/edited. I don’t just want headings with placeholders. It should be actual language.
ChatGPT:
Step 2: Audience
Question:
Who will be the primary consumers of this research output? That is, who will use the AI policy template — and who will likely be reviewing or approving it later? Let’s be specific about their roles, needs, and level of expertise.
Me:
The template will be used by the Chief of Staff or the Chief AI Officer who is driving the work of the AI Task Force. The resulting AI Policy should withstand public scrutiny, including that of regulators. Since the regulatory regimes of different countries or regions vary, the template should also have sections that are tailored for companies operating out of specific regions.
Also, the template language should not talk of AI as replacing jobs, but rather enabling people who perform jobs. There should be a series of Appendices, each one tailored to a specific vertical (e.g., retail, banking, telecoms, financial services, manufacturing, shipping, healthcare logistics, etc.) where each appendix includes AI Policy language or guidelines that are specific to the needs of that vertical.
ChatGPT:
Step 3: Scope
Question:
What limitations or boundaries should apply to the research? This includes:
- Source requirements (e.g., specific jurisdictions, expert types, think tanks, etc.)
- Timeframes (e.g., only sources published after a certain year)
- Geographies (e.g., global but with focus on EU/US/Asia)
- Industries or sectors (you mentioned vertical-specific appendices — should we include all industries or focus on a few?)
Me:
Sources should be reputable and also practical, and each source used should be cited in an exhaustive bibliography, with links where I can verify them.
You must also confirm that each source can be accessed live, today, before citing it.
Use sources published after 2018. Include all industries mentioned, as well as any other industries that you come across in your research.
ChatGPT:
Step 4: Intent
Question:
Why are you commissioning this research now? What decisions or deliverables will it support?
And are there any second-order uses — like using it as a model, incorporating it into client work, or circulating it publicly?
Me:
It’s meant to be shared online so companies can use it. It should be useful for driving AI Strategy conversations with companies seeking AI strategy advice.
ChatGPT:
Step 5: Narrator
Question:
How would you like the document to sound — what tone and voice should it adopt? For example:
- Should it sound like a neutral analyst, a policy advisor, a legal strategist, or an AI governance expert?
- Should the tone be formal, accessible, boardroom-ready, action-oriented, etc.?
- Do you want the writing to feel modular and copy-pasteable, or more like a narrative report?
Let’s clarify the voice you want to come through.
Me:
Sound like a neutral analyst with a solid understanding of AI legal obligations without sounding overly lawyerly. The tone should be one that withstands public scrutiny. If there are legal reasons why specific language is included, that language should be footnoted with the reason why (e.g., “language to meet EU AI Act obligations”). The writing should be stakeholder ready — designed to address the interests of key stakeholders who must sign off on the AI Policy. It should be modular and copy-pasteable.
ChatGPT:
Step 6: Outcome
Question:
What must the Deep Research output absolutely include for it to be useful to you?
Feel free to list bullets or sections — anything that should be guaranteed in the output.
Me:
What do you suggest I include?
ChatGPT:
Based on everything you’ve shared — your intent, audience, scope, and narrator goals — here’s a suggested list of must-have outcomes that the Deep Research report should include . . .
[Note: ChatGPT’s response is redacted for brevity here, but don’t worry; the details are included in the full final prompt below.]
Me:
That’s great. Please ensure that the template covers all the areas that top consulting firms like McKinsey and Deloitte and credible publications like Harvard Business Review say should be included in an AI Policy document.
Some notes:
- I got the idea of getting ChatGPT to ask me a series of questions to get to the Deep Research prompt from Harper Reed’s “My LLM codegen workflow ATM” blog post. In it, he used a conversational LLM to ask him a series of questions until it had enough information to generate a requirements specification doc for a software idea that he’s thinking of building.
- When in doubt, you can always ask ChatGPT for its opinion on what the output should look like. It’s a great brainstorming partner.
Step 2: Review the generated prompt and tweak as needed.
Here’s the full Deep Research prompt as generated by ChatGPT:
🧠 DEEP RESEARCH PROMPT — AI POLICY TEMPLATE
Context:
We are developing a reusable, easy-to-use AI Policy template that companies can adopt and adapt as a starting point for crafting their internal AI policy. The template is intended to guide executive-level task forces — typically led by a Chief of Staff, Chief AI Officer, or similar senior leader — who have been tasked by the company’s operating committee to develop a comprehensive AI policy framework. The template should provide actual editable policy language (not just section headings or placeholders) that covers key considerations in AI usage, governance, compliance, and strategic enablement.
Audience:
The primary users of the AI Policy template are executive leaders such as the Chief of Staff or Chief AI Officer who are leading the AI Task Force at their company. These individuals are responsible for coordinating with legal, compliance, IT, HR, product, and other departments to define a unified AI governance stance. The final policy will be reviewed by the company’s operating committee and potentially its board. It must also be suitable for public disclosure and withstand scrutiny from external stakeholders, including regulators, the press, and civil society organizations. The language in the policy must reflect a people-first approach — it should frame AI as a tool for enabling and augmenting workers, not replacing them.
Scope:
- Use only reputable, practical sources — including government guidance, regulatory agency publications, legal analyses, consulting firm frameworks, academic publications, and expert commentary.
- Accept sources published from 2018 onwards and ensure that every cited source is accessible live at the time of research.
- Provide an exhaustive bibliography with active links and brief source descriptions.
- Include regional clauses and considerations for companies operating in:
- The European Union (e.g., EU AI Act)
- The United States (e.g., FTC, state-level laws)
- Asia-Pacific (e.g., Singapore’s AI governance framework)
- Include industry-specific appendices for:
- Retail
- Banking
- Telecommunications
- Financial Services
- Manufacturing
- Shipping
- Healthcare Logistics
- Any additional industries surfaced during research
Intent:
The AI Policy template is intended for public sharing and broad reuse. It is designed to help companies accelerate their internal AI policy development process and ensure stakeholder readiness. In parallel, the template will serve as a strategic artifact to support AI consulting conversations and demonstrate subject matter expertise.
Narrator:
The report should be written in the voice of a neutral analyst with a solid understanding of AI governance and legal obligations. The tone must be stakeholder-ready, appropriate for executives, boards, legal teams, and compliance officers — while remaining plainspoken and actionable. The writing should be modular and copy-pasteable, with clear, editable language. When legal or regulatory language is used, footnote it to explain which requirement or framework it addresses (e.g., “to meet EU AI Act requirements”).
Outcome:
The research output must include:
- A full AI Policy template with editable language, covering:
- Purpose, Scope, and Definitions
- Roles and Responsibilities
- Governance and Oversight
- Risk Classification and Mitigation
- Data Privacy and Security
- Transparency and Explainability
- Human Oversight and Accountability
- Vendor and Procurement Requirements
- Internal Use Guidelines and Enforcement
- Generative AI and third-party tool usage
- Language that emphasizes AI as enabling—not replacing—workers
- Footnotes for regulatory clauses where applicable
- Appendices with industry-specific policy content for at least:
- Retail, Banking, Telecom, Financial Services, Manufacturing, Shipping, Healthcare Logistics
- Include others as surfaced by research
- Regional clauses or flags for legal alignment in EU, US, and APAC
- A one-page executive summary explaining what the document is, who it’s for, and how to use it
- An annotated bibliography with full source details and live verification links
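If you want to iterate on a prompt like the one above, it can help to keep the six CASINO sections as separate pieces and join them only when you're ready to submit, so you can tweak one section (say, Scope) without re-editing the whole prompt. Here's a minimal, illustrative Python sketch: the section names come from the Trust Insights framework, but the helper function, variable names, and abbreviated section text are my own and purely hypothetical.

```python
# Illustrative sketch: assemble a CASINO-style Deep Research prompt from
# its six sections so each part can be edited and versioned independently.
# Section names follow the Trust Insights framework; everything else here
# (function name, placeholder text) is a made-up example.

SECTIONS = ["Context", "Audience", "Scope", "Intent", "Narrator", "Outcome"]

def build_casino_prompt(parts: dict) -> str:
    """Join the six CASINO sections into a single prompt string.

    Raises ValueError if a section is missing, so an incomplete prompt
    never reaches the (rate-limited) Deep Research tool.
    """
    missing = [s for s in SECTIONS if s not in parts]
    if missing:
        raise ValueError("Missing CASINO section(s): " + ", ".join(missing))
    return "\n\n".join(
        "{0}:\n{1}".format(name, parts[name].strip()) for name in SECTIONS
    )

prompt = build_casino_prompt({
    "Context": "We are developing a reusable AI Policy template...",
    "Audience": "Chief of Staff or Chief AI Officer leading an AI Task Force...",
    "Scope": "Reputable sources published after 2018, verified bibliography...",
    "Intent": "Public sharing and broad reuse...",
    "Narrator": "Neutral analyst, stakeholder-ready, modular, copy-pasteable...",
    "Outcome": "Full template, industry appendices, regional clauses...",
})
print(prompt)  # six labeled sections, ready to paste into Deep Research
```

The missing-section check matters more than it looks: with only a handful of Deep Research runs per month, catching an incomplete prompt before submission saves a scarce task.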
The resulting prompt is more exhaustive than anything I could have come up with on my own, so I’m pleased with that result.
Step 3: Give Deep Research the generated prompt
I started a new chat, selected the Deep Research tool, and pasted in the generated Deep Research prompt.
In response to the prompt I provided, ChatGPT had a few quick questions, which I include below. You can also view the actual Deep Research chat.
ChatGPT:
Thanks for the detailed prompt. Before I begin, could you confirm whether you would like this policy template to also include references to existing internal IT or HR policies (e.g., acceptable use policies, employee code of conduct), or should it remain standalone and only reference public regulations and best practices? Additionally, should the annotated bibliography include brief assessments of each source’s credibility and practical relevance?
Me:
Yes, please allow references to internal policy, but don’t invent policy for that — just indicate that the template should link to existing sources. Credibility assessment of sources is good.
ChatGPT:
Great. I’ll begin developing a comprehensive, stakeholder-ready AI Policy template with fully editable language, designed for executive-led AI task forces. This will include:
- A full modular AI policy template with regulatory-aligned content
- Industry-specific appendices (e.g., Retail, Banking, Telecom, etc.)
- Regional legal clauses (EU, US, APAC)
- Executive summary
- Annotated bibliography with live links and source credibility assessments
I’ll also include placeholders for internal policy references without inventing any internal content. I’ll let you know as soon as the full policy framework and supporting materials are ready.
ChatGPT diligently plugged away at it while I logged off to grab lunch. When I logged back in an hour or so later, I saw this little blurb:
Research completed in 34m · 21 sources · 115 searches
Step 4: Review the AI Policy research report
ChatGPT’s Deep Research feature presents its output as a separate, shareable chat. See AI Governance Policy Template report if you prefer to read the output as a web page.
If you are the user who commissioned the Deep Research report, ChatGPT also makes the resulting report available to you as a PDF. The output of this specific Deep Research prompt was available to me as a downloadable 40-page PDF (members of my free Resources Google Group can download the PDF from the group).
Observations and Caveats
Some thoughts after reviewing the Deep Research output:
- Not perfect, but very useful. I don’t expect a company to use every section of the policy document (especially since I injected my own “AI as augmentation, not replacement” viewpoint). I would have loved to see it go more deeply into additional safeguards for agentic AI (as a sub-class of high-risk uses) and prohibited uses of AI in areas or fields that can lead to privacy violations, racial profiling, or other forms of real-world harm. However, the output is certainly a great starting point for discussion and some sections can be reused as-is.
- The prompt really steers the research direction. As a consequence of how I’ve drafted the Deep Research prompt, this AI Policy document is best suited for a large company, with its own IT, Security, and Privacy policies. A small to mid-sized organization that does not have the related policies would likely benefit from a simpler policy document with broader scope. Modify the prompt or generate your own with ChatGPT’s help to get a report that’s better suited to your needs.
- Useful industry starting points. The industry-specific Appendices are useful as a starting point for crafting AI Policies tailored to that audience, but they are by no means exhaustive on their own.
- No substitute for legal sign-off. Anything related to regulatory or legal obligations needs to be fact-checked, double-checked, and red-lined by a lawyer licensed to practice in that jurisdiction. You’ll want someone who has spent considerable time and effort understanding the AI regulations in their part of the world.
- Strip out language you can’t enforce. Any policy statement without corresponding enforcement mechanisms is not only useless, it could also potentially create liability for the company. While I am not a lawyer, I strongly advise against adding to your official AI Policy any statement that you can’t reasonably monitor or enforce in an auditable way, especially if you work in a regulated industry, are operating under a consent decree, or are located in a jurisdiction where you can expect to be fined or sued for regulatory violations.
- Check out this interesting resource. The last citation in the deep research report links to AI Templates and Resources provided by The City of San José. That page, and the resources linked therein, are well worth a closer look.
Closing Thoughts
OpenAI’s Deep Research feature is impressive. I’ve used it several times now, and its ability to search, synthesize, and organize publicly available material still impresses me each time.
Taking the time to craft an exhaustive Deep Research prompt is well worth the effort because OpenAI’s Deep Research feature is severely rate-limited. As of this writing, the monthly Deep Research limits are:
- Free users – 5 tasks/month using the lightweight version
- Plus, Team, Enterprise, Edu users – 10 tasks/month, plus an additional 15 tasks/month using the lightweight version
- Pro users – 125 tasks/month, plus an additional 125/month using the lightweight version
I have a Plus account, so this experiment consumed one of my 10 allotted (non-lightweight) Deep Research tasks this month. I don’t know how exhaustive the output would have been if I had used the lightweight version.
All of the usual caveats about verifying LLM outputs apply to deep research reports. There’s no getting away from the need to verify each source that’s cited. However, when you consider the time and effort that this deep research feature has already saved, verifying the output is one step I actually don’t mind, especially when the act of reading the vetted source material gives you additional details and nuance you would have otherwise missed.
All the Links
- OpenAI Deep Research FAQ
- Trust Insights CASINO Deep Research Prompt Framework
- ChatGPT conversations:
- Deep Research Report generated by ChatGPT:
- As web page: AI Governance Policy Template
- As a Downloadable AI Governance Policy Template PDF (available to members of my free Resources Google Group)