Post-publication note (added October 2024): I have since expanded on this idea by creating a separate site whose sole purpose is to host the AI-generated summaries of stuff I’ve found interesting.
Lately, I’ve been using Generative AI to keep up with the unending deluge of new reading material. GenAI does the first pass on longer articles and papers for me, and I use its output to decide whether a piece is worth reading in full.
Notes:
- I use the recently published Content Moderation in a New Era for AI and Automation article from the Oversight Board throughout this example.
- All GenAI outputs in this post were generated by Google AI Studio’s Gemini 1.5 Pro model, but you can perform the same steps using other GenAI applications such as ChatGPT and Claude.

A. Create a basic summary of a new article or paper
The prompt I use to generate a summary for longer articles and papers comes directly from Matt Wolfe, who demonstrates how he uses it 2 minutes and 9 seconds into a recent video from The Next Wave:
A.1 The Summarization Prompt
Here’s the prompt:
I will upload or paste in an article, PDF, or video transcript. Your job is to respond with bullet points with the following information.
- Summarize the content into bullet points, making it easy to understand the concepts or ideas presented.
- Tell me what we can do now as a result of this information that we could not do before. What makes this novel?
- Tell me how I should report on this in the news. If I were to give a news broadcast and explain why this information was important, what should I tell the viewers so that as many people as possible can understand it?
The format of the response should be:
Summary:
- Bullet 1
- Bullet 2
- etc.
What makes this novel or interesting:
- Bullet 1
- Bullet 2
- etc.
How should I describe this news or concept in a simple way that anyone can understand when I report on it? If it makes sense, use an analogy.
A.2 Producing a PDF from a Web Page
The most straightforward way to turn a web page into an uncluttered PDF on my MacBook is to view the page in Safari’s Reader View and then “print” the page to PDF.
Safari’s Reader View strips away sidebars, navigation headers, and footers, leaving only the text and images that belong to the content we’re after.

Keeping unrelated material out of the PDF is especially useful for two reasons: (1) there’s no extraneous text to distract the AI; and (2) the AI has fewer words to ingest, so you don’t waste valuable and limited context window space.
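If you’re not on a Mac, or you want to batch this step, you can get a similar clutter-free result in code. Here is a minimal sketch using the open-source trafilatura Python library as a stand-in for Reader View; the URL and output file name are placeholders.

```python
# A minimal sketch: extract just the main article content from a web page,
# similar in spirit to Safari's Reader View.
# Assumes `pip install trafilatura`.
import trafilatura

url = "https://example.com/some-article"  # placeholder URL

downloaded = trafilatura.fetch_url(url)  # fetch the raw HTML
if downloaded is not None:
    # extract() strips navigation, sidebars, and other boilerplate,
    # returning only the main content as plain text
    text = trafilatura.extract(downloaded)
    with open("article.txt", "w", encoding="utf-8") as f:
        f.write(text or "")
```

This produces plain text rather than a PDF, which works just as well for our purposes: most GenAI applications accept pasted or uploaded text directly.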
A.3 The Summarization Output
After creating a PDF of the Oversight Board’s article, I give the AI my summarization prompt and upload the PDF. Below is the output I got back:
Summary
- Social media content moderation has shifted from human review to AI-powered systems.
- While AI offers scalability, it struggles with context, leading to over-enforcement of rules and removal of legitimate content, especially in sensitive areas like health education.
- AI’s lack of contextual understanding also results in under-enforcement, allowing coded hate speech and misinformation to proliferate.
- Generative AI tools, while beneficial for content creation, exacerbate existing harms like image-based sexual abuse by enabling easy creation and spread of harmful content.
- The Oversight Board, through case studies involving Meta, highlights the need for human oversight, context awareness in AI, and greater transparency around content moderation decisions.
What makes this novel or interesting:
- The paper provides data and real-world examples of how AI-powered content moderation is falling short, backed by the Oversight Board’s investigation and Meta’s own data.
- It highlights the increasing disparity in content moderation effectiveness across different languages, with English receiving significantly more resources.
- It addresses the unique challenges posed by generative AI, which can be used to create both beneficial and harmful content, and its impact on areas like elections and image-based sexual abuse.
- It offers concrete recommendations for social media platforms, including incorporating context awareness in AI, providing users with more information and control, and ensuring equitable distribution of resources for content moderation across languages.
How should I describe this news or concept in a simple way that anyone can understand when I report on it?
Analogy: Imagine a million referees, all robots, trying to officiate a million different sports at once, using only rulebooks with no understanding of the game’s nuances or spirit. That’s what’s happening with AI content moderation on social media.
Simple explanation for viewers: Social media is increasingly relying on Artificial Intelligence to decide what we see and don’t see online. While this can be efficient, new research shows these AI “referees” often make wrong calls, removing harmless posts while missing real problems like hate speech hidden in code words. This is especially concerning with powerful new AI tools that can be misused to create harmful content like fake explicit images. We need social media companies to ensure these AI systems are fair, unbiased, and understand the context of what they’re moderating.
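Side note: if you run this upload-and-prompt step often, it can be scripted. The sketch below uses Google’s google-generativeai Python SDK to call the same Gemini 1.5 Pro model; the post’s own workflow runs in the AI Studio UI, and the API key, file name, and elided prompt text below are placeholders.

```python
# A minimal sketch of the upload-a-PDF-and-summarize step, scripted against
# the Gemini API. Assumes `pip install google-generativeai` and an API key.
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")  # placeholder

SUMMARIZATION_PROMPT = "..."  # paste the full prompt from section A.1 here

pdf = genai.upload_file("oversight-board-article.pdf")  # placeholder file name
model = genai.GenerativeModel("gemini-1.5-pro")
response = model.generate_content([SUMMARIZATION_PROMPT, pdf])
print(response.text)
```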
The AI-generated summary above is useful on its own and may provide enough detail for me to decide that I want to read the original piece in full.
However, we can take this screening process a step further by asking the AI to consider our specific needs and context.

B. Ask the AI to generate recommendations with my specific context in mind
So far, I’ve provided the AI with a publicly available article written for a content moderation audience.
The AI created the summary without any information about my specific context: my background, my job and responsibilities, or even basic information about my current goals and challenges.
B.1 Why My Specific Context Matters
A recurring complaint I hear from people who use GenAI applications is that AI outputs are generic, with no immediately applicable real-world value.
The summarization prompt in the previous step tries to address this by asking the AI to highlight what’s new or novel in the piece. The output is slightly more useful as a result, but still generic.
To fill the contextual gap, consider uploading documents that contain any combination of the following:
- Your job description and the job description of your manager;
- Your team’s objectives and key results (OKRs or equivalent) this quarter or this year;
- Your individual work goals or targets or KPIs;
- A one-page profile or write-up about your company;
- Your company’s latest annual report and transparency report.
For demonstration purposes, I will use my Policy Lead Archetypes blogpost as a proxy for job descriptions in the prompts below.
As with the Oversight Board’s piece, I generated a clutter-free PDF of my archetypes blogpost using Safari’s Reader View.
Note: Take a moment to check the privacy settings on your GenAI application of choice. While your company policy and confidentiality obligations may prohibit uploading detailed contextual information, you can typically fall back on using job descriptions since these are publicly posted as part of job listings. If you don’t have job descriptions handy, you can also try using the LinkedIn profiles of your manager or peers. Tip: LinkedIn lets you download profiles as PDFs.
B.2 The Contextual Prompt
Here’s the prompt I used to get context-specific AI-generated recommendations using my Policy Lead Archetypes post:
Attached is a PDF that describes what different Policy Leads do on Trust & Safety teams.
In light of the archetypes identified in this piece, what specific, realistic, and actionable recommendations should each Policy Lead Archetype take away from the Oversight Board’s article?
Adapt the prompt above based on the contextual data you provide the AI. It generally helps to tell the AI what data you’re uploading so it interprets the data correctly.
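For completeness, here is a hedged sketch of how the full two-step flow (summary first, then contextualized recommendations) might look as a scripted chat session with the same google-generativeai SDK; the file names and the elided summarization prompt are placeholders.

```python
# A minimal sketch of the two-step flow as a chat session: summarize the
# article, then request recommendations grounded in a second context PDF.
# Assumes `pip install google-generativeai` and a configured API key.
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")  # placeholder

SUMMARIZATION_PROMPT = "..."  # paste the full prompt from section A.1 here

article = genai.upload_file("oversight-board-article.pdf")    # placeholder
archetypes = genai.upload_file("policy-lead-archetypes.pdf")  # placeholder

model = genai.GenerativeModel("gemini-1.5-pro")
chat = model.start_chat()

# Step 1: the summarization prompt from section A.1
summary = chat.send_message([SUMMARIZATION_PROMPT, article])
print(summary.text)

# Step 2: the contextual prompt from section B.2, with the context PDF attached
contextual_prompt = (
    "Attached is a PDF that describes what different Policy Leads do on "
    "Trust & Safety teams. In light of the archetypes identified in this "
    "piece, what specific, realistic, and actionable recommendations should "
    "each Policy Lead Archetype take away from the Oversight Board's article?"
)
recommendations = chat.send_message([contextual_prompt, archetypes])
print(recommendations.text)
```

Because both messages go through the same chat session, the model retains the article and the first summary as context when generating the recommendations.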
B.3 The Contextualized Recommendations Output
Below is the AI’s output in response to my prompt:
Here are some specific, realistic, and actionable recommendations for each Policy Lead Archetype based on the Oversight Board’s article, connecting them to the detailed insights in the “Policy Lead Archetypes” document:
Policy Area Lead:
- Prioritize Contextual Understanding in AI Enforcement: Work with engineering teams to improve AI’s ability to recognize and interpret contextual cues in your specific policy area. This might involve:
  - Data enrichment: Advocate for training datasets that are rich in examples from your policy area, including diverse languages and cultural nuances.
  - Contextual flags: Push for the development of AI systems that can flag content for human review when contextual signals (e.g., satire, health information) are present, especially if it relates to sensitive topics.
  - User appeals: Collaborate on streamlined processes for users to provide context during appeals, ensuring this information is used effectively in decision-making.
- Champion Language Equity:
  - Resource allocation: Advocate for proportionate resource allocation for your policy area across all supported languages, not just English.
  - Localized testing: Insist on rigorous testing and evaluation of AI models in all languages relevant to your policy area to identify and address disparities.
- Develop Expertise in Generative AI:
  - Policy gap analysis: Assess your area for vulnerabilities related to new generative AI tools (e.g., deepfakes in political campaigns).
  - Proactive policy development: Work with the Policy Architect to create clear guidelines for content generated by AI, addressing consent, misinformation, and potential harms.
Policy Architect:
- Embed Contextual Analysis in Policy Development:
  - Policy templates: Update policy templates to explicitly require consideration of context, prompting Policy Area Leads to define contextual cues relevant to their areas.
  - Training and workshops: Conduct training for the Policy team on the importance of context in AI enforcement, equipping them to develop policies that minimize over- and under-enforcement.
- Standardize Feedback Loops for AI Improvement:
  - Error analysis framework: Create a standardized framework for analyzing AI errors, identifying patterns related to context, language disparities, and generative AI misuse.
  - Data transparency (where possible): Advocate for greater data access (while respecting privacy) to enable third-party audits and research on the effectiveness and biases of AI models.
- Establish Clear Guidelines for AI-Generated Content:
  - Definition of harm: Work with Policy Area Leads to establish clear definitions of harm related to AI-generated content in different contexts.
  - Transparency and labeling: Develop policies for identifying and labeling content generated or manipulated by AI, empowering users to make informed decisions.
Policy Integration Lead:
- Partner with Engineering on Context-Aware AI Solutions:
  - Cross-functional collaboration: Work closely with engineering teams to communicate the Policy team’s needs for more context-aware AI tools and solutions.
  - Advocate for human oversight: Emphasize the importance of human review in cases involving complex context or sensitive topics, ensuring resources are allocated appropriately.
- Proactively Address Language Disparity in Operationalization:
  - Resource requests: During cross-functional planning, advocate for sufficient resources to ensure consistent policy implementation across all supported languages.
  - Impact assessment: Include language equity as a factor in the assessment of new product launches or initiatives that impact policy.
- Develop Communication Strategies for AI-Related Policy Changes:
  - Transparency with partner teams: Keep partner teams informed about AI-related policy changes, emphasizing the importance of context and the potential for evolving enforcement approaches.
  - User education: Collaborate on user education materials that explain how AI is used in content moderation and provide guidance on avoiding unintended policy violations.
By taking these specific actions, each Policy Lead archetype can contribute to a more effective, fair, and context-aware content moderation ecosystem in the era of increasingly sophisticated AI.
Reminder: I use these recommendations primarily to decide whether to read the original article in full. I assume that an AI-generated recommendation may be incorrect or inappropriate, especially when the AI lacks sufficient context.
Despite these limitations, the suggestions can serve as starting points for further exploration. And if I do decide to read the full piece, I can do so with these specific recommendations in mind and thus get more value out of the reading process.

C. Generate a system instruction that I can use on future articles and papers
Since we expect to do similar reviews with other articles or papers in the future, we can ask the AI to generate a system instruction that encapsulates all the preceding steps.
C.1 Why We Ask the AI to Generate a Prompt
Asking the AI to generate its own prompt is particularly helpful: the AI can simplify the prompt and capture nuances we would otherwise have overlooked.
You’ll especially want this step if you’ve gone through several rounds of prompting in a single chat session, leaving your instructions scattered across multiple turns. We want one integrated prompt (the one prompt to rule them all) that pulls together all the learning we’ve gained throughout the chat session.
C.2 The System Instruction Generation Prompt
Here’s how we ask the AI to generate its own prompt:
In light of everything we’ve discussed so far, please generate a system instruction that I can use with future articles, blogposts, or video transcripts.
The prompt should generate a summary as described above, then generate specific, actionable, and realistic recommendations for each policy lead archetype. The prompt should perform all these steps without further instructions from the user.
Note that this type of prompt only works when you use a GenAI application with a sufficiently large context window (such as Google AI Studio) or one that supports the creation of self-contained workspaces with their own chat history and knowledge bases (such as Claude).
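If you’re unsure whether your prompt and attachments will fit, you can check before sending. Here is a minimal sketch using the same google-generativeai SDK’s token-counting call; the file name is a placeholder.

```python
# A minimal sketch: count tokens before sending, to confirm the prompt and
# attachments fit in the model's context window.
# Assumes `pip install google-generativeai` and a configured API key.
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")  # placeholder

model = genai.GenerativeModel("gemini-1.5-pro")
pdf = genai.upload_file("oversight-board-article.pdf")  # placeholder
print(model.count_tokens(["Summarize this.", pdf]).total_tokens)
```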
C.3 The System Instruction Output
Here’s the output:
System Prompt for Analyzing Policy Implications of Content Moderation Articles
You are a helpful AI assistant designed to analyze articles about content moderation, especially as it relates to AI and its impact on social media. Given an article, blog post, or video transcript:
- Provide a concise summary of the key points in bullet point format.
- Identify novel or interesting aspects of the information presented.
- Imagine you are advising a Trust & Safety team with three Policy Lead Archetypes: Policy Area Lead, Policy Architect, and Policy Integration Lead. Generate specific, actionable, and realistic recommendations for each archetype based on the information in the provided text. The recommendations should be tailored to the specific responsibilities and areas of influence of each archetype, as outlined below:
Policy Area Lead:
- Responsible for setting the direction and guiding the execution of work for a specific policy area (e.g., hate speech, misinformation, harassment).
- Focuses on policy development, interpretation, and operationalization within their area of expertise.
Policy Architect:
- Responsible for ensuring consistency and effectiveness of policy work across all policy areas.
- Develops and maintains policy frameworks, templates, and best practices for the entire team.
Policy Integration Lead:
- Responsible for bridging the gap between the Policy team and partner teams (e.g., Product, Engineering, Operations).
- Ensures smooth operationalization of policies and advocates for Policy team considerations in cross-functional initiatives.
Format your response as follows:
Summary:
- Bullet Point 1
- Bullet Point 2
- etc.
What makes this novel or interesting:
- Bullet Point 1
- Bullet Point 2
- etc.
Recommendations for Policy Leads:
Policy Area Lead:
- Recommendation 1
- Recommendation 2
- etc.
Policy Architect:
- Recommendation 1
- Recommendation 2
- etc.
Policy Integration Lead:
- Recommendation 1
- Recommendation 2
- etc.
C.4 Caveats about this System-Generated Prompt
You’ll note that the new AI-generated system prompt reduced the archetypes blogpost to a short descriptive summary.
This simplification is useful if you have a small context window or prefer to use fewer tokens to keep your prompts cost-effective. However, the loss of detail may also lead to less nuanced responses. I’ve personally found that it helps to start with just the prompt and then upload/attach more contextual information if needed.
C.5 Reusing the System-Generated Prompt
Once you have a system-generated prompt that works reliably, consider creating mini GPT applications that help you reuse the prompt more easily (for an API-based alternative, see the sketch after this list).
- ChatGPT’s Custom GPTs. Users on ChatGPT’s paid plan can create Custom GPTs that use our system-generated prompt from the previous section. Users can also upload any contextual information as attachments (e.g., we can upload the Policy Lead Archetypes PDF as an attachment that the Custom GPT will use as a permanent reference). Custom GPTs can be published and shared with anyone with a ChatGPT account. See also: Custom GPT Tips.
- Claude’s Projects. Users on Claude’s Pro plan can create Claude Projects and upload attachments as “knowledge” that can be used within the project. As of this writing, users on Claude for Work plans can share their projects with other people in the same organization.
- Gemini’s Gems. Users on the Gemini Advanced, Business, and Enterprise plans can create custom Gems. The user interface for Gem creation can also write your initial draft instructions for you based on your description of the Gem’s purpose. As of this writing, it is not yet possible to publish or share Gems.
Tip: Gemini has helpful guidelines for writing effective instructions; they advise us to focus on defining a Persona, specifying Tasks, providing Context, and describing the desired output Format. Use this advice to improve the system-generated prompt from the previous step to get better results from Custom GPTs and Gems.
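If you’d rather reuse the prompt through the API than a chat UI, here is a minimal sketch that bakes the system-generated prompt into the model as a system instruction using the google-generativeai SDK; the elided prompt text and file name are placeholders.

```python
# A minimal sketch: reuse the system-generated prompt from section C.3 as a
# persistent system instruction, so each new article needs only one call.
# Assumes `pip install google-generativeai` and a configured API key.
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")  # placeholder

SYSTEM_PROMPT = "..."  # paste the full system instruction from section C.3

model = genai.GenerativeModel(
    "gemini-1.5-pro",
    system_instruction=SYSTEM_PROMPT,  # applied to every request
)

# Analyze a new article without restating any instructions
new_article = genai.upload_file("new-article.pdf")  # placeholder
response = model.generate_content([new_article])
print(response.text)
```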
In Closing
GenAI applications powered by Large Language Models help us tackle information overload by efficiently filtering and jumpstarting our reading.
Use the prompts in this post to distill summaries and generate tailored and context-specific recommendations from new papers and articles.
With GenAI’s help, you can save time and decide with confidence whether an article or paper is worth reading in full.


