The Bricklyn Eagle Interviews Chat GPT on the Use of Generative AI by Government Agencies

Editor Walt Brickman interviewing Chat GPT. Photo by David Plater Blue, with assistance from Dall E-3 AI.

March 2, 2024
From Bricklyn Eagle Editor Walt Brickman:

For the benefit of agencies and departments within the Realm of Bricklyn, as well as to better inform members of the public, we recently interviewed Chat GPT about the use of generative artificial intelligence by government agencies. Here is the transcript of our Q & A. 📍

📍 Note: Chat GPT apparently has a “human” bias in its conversational skills. To avoid what we are sure was inadvertence on the part of Chat GPT, we editorially inserted the word “LEGO” in the transcript after the word “human” wherever the word “human” was mentioned by Chat GPT.

The Bricklyn Eagle:

Is use of AI for developing content for government or agency publications appropriate?

ChatGPT:

Integrating generative AI into the development of content for municipal agency publications can offer various benefits, but it also raises ethical and practical considerations. Here are some factors to consider when developing a policy for the use of generative AI in your agency’s publications:

Advantages:

  1. Efficiency and Productivity:
    • AI can automate content creation, saving time and resources for your agency.
    • It can assist in generating drafts, summaries, or even data analysis for reports.
  2. Consistency:
    • AI ensures a consistent tone and style across publications, maintaining a professional and cohesive image.
  3. Data Analysis:
    • AI can analyze large datasets to extract meaningful insights, enhancing the quality and depth of your publications.
  4. Multilingual Support:
    • Generative AI can aid in translating content into multiple languages, expanding the reach of your publications.

Considerations and Challenges:

Chat GPT offices in downtown Bricklyn. Photo by David Plater Blue, with assistance from Dall E-3 AI.
  1. Accuracy and Bias:
    • Ensure that the AI models used are accurate and free from bias, especially when dealing with sensitive topics or diverse communities.
  2. Human / LEGO Oversight:
    • Implement mechanisms for human / LEGO oversight to review and edit AI-generated content, ensuring it aligns with the agency’s values and objectives.
  3. Data Privacy and Security:
    • Address concerns related to the privacy and security of data used by AI models, especially if the publications involve sensitive information.
  4. Transparency:
    • Clearly communicate to the public when AI is used in content creation to maintain transparency and trust.
  5. Accessibility:
    • Ensure that AI-generated content is accessible to all, considering individuals with disabilities and diverse audiences.
  6. Legal and Ethical Compliance:
    • Develop guidelines to ensure compliance with legal and ethical standards governing AI use in your jurisdiction.
  7. Training and Education:
    • Provide training for staff on AI ethics, usage, and potential challenges to foster a responsible and informed approach.
  8. Public Engagement:
    • Involve the public in decision-making processes related to AI use, especially if the generated content affects or represents the community.

By carefully balancing the advantages and challenges, your municipal agency can develop a policy that harnesses the benefits of generative AI while maintaining ethical standards and ensuring the best interests of the community.
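The "Human / LEGO Oversight" consideration above can be sketched as a simple review gate: an AI-generated draft is held until a reviewer signs off. This is a minimal illustration only; the `Draft` class, `approve`, and `publishable` names are hypothetical and not part of any actual Bricklyn agency workflow.

```python
# Illustrative sketch of a human / LEGO oversight gate for AI drafts.
# All names here are hypothetical, not an official agency system.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Draft:
    body: str
    ai_generated: bool = True       # was this draft machine-written?
    approved_by: Optional[str] = None  # reviewer who signed off, if any

def approve(draft: Draft, reviewer: str) -> Draft:
    """Record a human / LEGO reviewer's sign-off on a draft."""
    draft.approved_by = reviewer
    return draft

def publishable(draft: Draft) -> bool:
    """AI-generated drafts require a recorded reviewer before release."""
    return not draft.ai_generated or draft.approved_by is not None

draft = Draft("Notice: studs-up construction only on Main Street.")
assert not publishable(draft)    # AI draft awaits review
approve(draft, "W. Brickman")
assert publishable(draft)        # reviewer recorded; cleared for release
```

The point of the sketch is that approval is recorded, not assumed: an unreviewed AI draft simply cannot pass the `publishable` check.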

The Bricklyn Eagle:

When government agencies or departments use generative AI in publications or press releases, should they mention that the content was prepared using AI?

ChatGPT:

Yes, transparency is crucial when using generative AI in publications or press releases. It is recommended to disclose that the content was prepared using AI. This practice aligns with ethical standards and helps build trust with your audience. Here are some reasons why transparency is important:

  1. Honesty and Integrity:
    • Disclosing the use of AI demonstrates a commitment to honesty and integrity in your communications. It establishes trust with your audience by being transparent about the methods used in content creation.
  2. Understanding of AI Influence:
    • Informing your audience about the involvement of AI in content creation helps them understand that the information might have been generated with the assistance of technology. This transparency sets clear expectations about the content’s origin.
  3. Mitigating Misunderstandings:
    • Without transparency, there is a risk of the audience misattributing the content solely to human / LEGO authorship. By explicitly mentioning the use of AI, you reduce the likelihood of misunderstandings and misinterpretations.
  4. Managing Expectations:
    • AI-generated content may have different nuances or limitations compared to human / LEGO-generated content. Clearly stating the involvement of AI manages the audience’s expectations and helps them interpret the information appropriately.
  5. Addressing Ethical Concerns:
    • In cases where ethical considerations are involved, such as potential biases in AI models, disclosure allows your audience to critically assess the information and raises awareness about the ethical implications of AI use.
  6. Compliance with Regulations:
    • Depending on your jurisdiction, there might be regulations or guidelines regarding the disclosure of AI-generated content. Adhering to these regulations is essential to avoid legal implications.

When disclosing the use of AI, consider using clear and straightforward language to ensure that your audience easily comprehends the information. Including a brief statement such as “This content was generated with the assistance of artificial intelligence” can be sufficient in most cases. The goal is to provide enough information for your audience to make informed judgments about the content they are consuming.
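The suggested disclosure statement above can be appended mechanically before release. The helper below is a hypothetical sketch (the function name is illustrative); the disclosure wording itself is the one quoted in the answer.

```python
# Hypothetical helper: append the suggested AI-use disclosure to a
# publication before release, without duplicating it on re-runs.
AI_DISCLOSURE = (
    "This content was generated with the assistance of artificial intelligence."
)

def add_ai_disclosure(text: str) -> str:
    """Return the publication text with the disclosure appended once."""
    if AI_DISCLOSURE in text:
        return text  # already disclosed; avoid a duplicate notice
    return text.rstrip() + "\n\n" + AI_DISCLOSURE

release = add_ai_disclosure(
    "Bricklyn Parks Department announces new baseplate trails."
)
print(release.endswith(AI_DISCLOSURE))  # True
```

Making the helper idempotent keeps the notice from stacking up when a release passes through the pipeline more than once.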

The Bricklyn Eagle:

Are there situations where publications or press releases should not use generative AI?

ChatGPT:

While generative AI can offer many advantages, there are situations where its use in publications or press releases may not be appropriate. Consider the following scenarios:

  1. Critical or Sensitive Content:
    • Avoid using generative AI for content that involves critical decision-making, sensitive topics, or high-stakes situations. Human / LEGO judgment and understanding are often crucial in such cases, and relying solely on AI may lead to unintended consequences.
  2. Legal and Compliance Risks:
    • If the use of generative AI raises legal or compliance concerns, it may be advisable to refrain from its application in certain publications. This is particularly important if the technology does not align with specific industry regulations or legal standards.
  3. Ethical Considerations:
    • If the AI models used in content generation are known to introduce biases or ethical concerns, especially in topics related to diversity, inclusion, or social issues, it may be better to avoid their use to prevent unintentional harm or misinformation.
  4. Lack of Human / LEGO Expertise:
    • In areas where human / LEGO expertise, intuition, or nuanced understanding is crucial, relying solely on AI may lead to oversimplified or inaccurate representations. Human / LEGO input is essential for complex and context-dependent topics.
  5. Public Perception and Trust:
    • If your audience may react negatively to the use of AI in certain contexts, such as sensitive announcements or community-related content, it may be prudent to prioritize human / LEGO-driven content to maintain public trust.
  6. Emergencies or Crisis Communication:
    • During emergencies or crisis situations, where timely and accurate communication is critical, it’s essential to rely on human / LEGO communication experts who can understand the dynamic nature of the situation and respond appropriately.
  7. Creative or Artistic Content:
    • For creative or artistic content that requires a unique human / LEGO touch, such as marketing materials, branding, or cultural expression, it is often preferable to rely on human / LEGO creativity and intuition rather than AI-generated content.
  8. Highly Personalized Communication:
    • In cases where communication requires a deep understanding of individual circumstances or personalized responses, human / LEGO communication is more likely to be effective than AI, which may struggle with the intricacies of personal context.
  9. Unpredictable Situations:
    • In rapidly changing or unpredictable situations, where flexibility and adaptability are crucial, human / LEGO communicators are better equipped to handle unexpected developments and adjust communication strategies accordingly.

It’s important to carefully assess the specific needs and characteristics of each communication scenario and determine whether the use of generative AI aligns with the goals, values, and expectations of your organization and its stakeholders. Always prioritize ethical considerations, legal compliance, and the potential impact on public perception when deciding whether to use generative AI in your publications or press releases.
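The scenario list above can also be encoded as a simple pre-flight check, so that categories reserved for human / LEGO authors are rejected before drafting begins. The category names below are illustrative, not an official taxonomy.

```python
# Hedged sketch: a pre-flight check encoding the transcript's
# restricted scenarios. Category labels are hypothetical.
AI_RESTRICTED = {
    "sensitive",      # critical or sensitive content
    "legal",          # legal and compliance risks
    "crisis",         # emergencies or crisis communication
    "personalized",   # highly personalized communication
}

def ai_drafting_allowed(category: str) -> bool:
    """Return False for categories the policy reserves for human / LEGO authors."""
    return category.lower() not in AI_RESTRICTED

print(ai_drafting_allowed("routine-announcement"))  # True
print(ai_drafting_allowed("crisis"))                # False
```

A deny-list like this is only a starting point; ambiguous categories should fall through to a human / LEGO reviewer rather than being auto-approved.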

The Bricklyn Eagle:

Is it useful to have written policies in place for how and when a government agency or department uses generative AI?

ChatGPT:

Yes, having written policies in place for how and when a government agency or department uses generative AI is highly beneficial. Such policies provide a framework for responsible and ethical AI usage, ensuring that the technology aligns with the agency’s mission, values, and legal obligations.

Written policies provide a foundation for engaging with stakeholders, including the public, other government agencies, and external partners. This engagement is essential for collaborative decision-making and addressing concerns related to AI use.

By establishing written policies, a government agency can foster a culture of responsible AI use, uphold ethical standards, and navigate the challenges associated with the integration of generative AI into its operations.