My Take on the Future of AI and Content (Martin Waxman)

This is a concise yet insightful analysis of the future of AI that Martin Waxman shared on LinkedIn. It reflects my own views of and experience with AI. This observation is spot on: users are not sparring with AI, engaging in provocative discussions, or asking it to challenge their assumptions and ideas. They accept what it gives them and move on, choosing the easy way out.

Martin is an adjunct professor at York University’s Schulich School of Business and associate director of the School’s Future of Marketing Institute, a global think tank that examines and analyses tech trends and their implications for marketing and communications.

Here’s a link to a video of the proceedings. (Martin’s panel starts around 10:00 a.m.)

Peace & Justice, MAA

Gemini created this image based on the text below.

100% Human

Chairman Smith, Deputy Chair Dasko, and Honourable Senators,

My name is Martin Waxman, and I want to certify that my remarks today are 100% human, and no AI was used to prepare them. That may sound like a funny way to begin, but I believe this type of disclosure is one way to maintain trust, as we establish guidelines around acceptable AI use cases for content and creativity.

I am an adjunct professor at York University’s Schulich School of Business, and associate director of the School’s Future of Marketing Institute, a global think tank that examines and analyses tech trends and their implications for marketing and communications.

I write an AI and Digital Marketing Trends newsletter that has over 500,000 subscribers and create courses on generative AI for LinkedIn Learning.

I also train students and marketing and communications professionals in how to use AI beyond simple content generation, and how to develop detailed, structured prompts that combine natural-language and systems thinking.

I’ve also published two books of fiction and am a former film and television writer. My background is in the arts, not coding.

So, I appear before you today highly conflicted about the role AI will play in content production and distribution and its effect on writers and creators.

The Duality of AI

On one hand, I am very excited. On the other, I’m unsure how to deal with the unprecedented pace of change and uncertainty we’re all living through every day.

I can see incredible potential for AI to provide writers with an endless array of new ideas and perspectives they would not otherwise have seen.

I also know that in the creative industries, like marketing and PR, professionals are under a great deal of pressure to produce larger and larger quantities of content. And that generative AI has the potential to take a good part of the load off their shoulders.

Yet, when I watch how people use AI – including my students – I am often disheartened by how much thinking they outsource to a machine. And that, along with the many dire predictions around job cuts, frightens me.

You see, rather than having a challenging conversation with a large language model, many people use AI as a shortcut to avoid the difficult task of thinking. They make a generic request, have a brief exchange and get an output that looks good enough, but lacks the quality and spark of something unique. And they’re unable to see beyond the mediocrity of their output because they’re missing subject matter expertise.

They’re not sparring with AI, engaging in provocative discussions, or asking it to challenge their assumptions and ideas. They accept what it gives them and move on, choosing the easy way out.

I’m concerned many people, and especially new professionals, will miss out on the valuable experience they get by anguishing over a blank screen, taking the time to reflect on an idea, letting it percolate and revising it into a polished piece of work.

  • But if we decide to offload our thinking, ideas and the creative labour involved in content production to a machine, how will young people ever develop their craft, imagination and determination?
  • What role will they have in the workplace?
  • I also wonder: who owns a piece of content that’s AI-generated? How much of a contribution does a person have to make to claim authorship?
  • What laws and regulations are needed to pave the way to a promising future for everyone?

These are some of the difficult questions we must discuss and debate right now.

Three Areas of Focus

And while I don’t have the answers, I believe we should start by focusing on three areas: research, training and regulation.

Research into how these systems perform. Research into their emergent behaviour, when they go off the rails, scheme and present hallucinations as facts. Research into how people are using AI and whether their minds are being stimulated or atrophied by assigning too many cognitive tasks to machines.

We need to develop and implement a formalized AI training curriculum that starts in primary school and continues throughout a person’s education. And we must put an emphasis on teaching students how to think, analyse and make judgements for themselves.

We also need to design programs to retrain the people who may be displaced by AI and help them find other purposeful jobs.

And we need to develop policies, guidelines and regulations that are codified into law. That might be the most difficult part. It was challenging enough to put rules around the internet and social media. Finding a way to regulate AI is even harder because the platforms and capabilities are evolving so rapidly.

I’m an optimist by nature. I do believe AI has the potential to make us smarter and more creative, and to enhance the quality of the content humans produce. But it’s incumbent on us to use it intelligently and guide its outputs rather than letting its outputs guide us.

I am happy to answer any questions you may have or to assist the committee and your study in any way today or in the future.

Thank you.

2 thoughts on “My Take on the Future of AI and Content (Martin Waxman)”

  1. I asked https://thaura.ai/ to summarize the concerns about AI, including the environmental impact of data centers (electricity, water) and surveillance. Here’s its reply:

    Based on the search results, here are the key concerns about AI’s environmental and surveillance impacts:

    Environmental Impact

    Electricity Consumption:

    • AI data centers are experiencing explosive growth in energy demand, with electricity consumption potentially reaching 11 times 2023 levels by 2030 greenpeace.org
    • Power requirements for data centers in North America nearly doubled from 2,688 megawatts (end of 2022) to 5,341 megawatts (end of 2023) news.mit.edu
    • By 2026, data centers could consume 1,050 terawatt-hours annually, which would rank them between Japan and Russia in national electricity consumption news.mit.edu
    • Training a single AI model like GPT-3 consumed 1,287 megawatt-hours of electricity—enough to power 120 average U.S. homes for a year news.mit.edu
    • A single ChatGPT query uses about five times more electricity than a simple web search 

    Water Usage:

    • Data centers require massive amounts of water for cooling—about two liters per kilowatt-hour of energy consumed news.mit.edu
    • Large facilities can consume up to 5 million gallons per day, equivalent to a town of 10,000-50,000 people eesi.org
    • A medium-sized data center uses roughly 110 million gallons of water annually for cooling eesi.org
    • Each 100-word AI prompt consumes about one water bottle’s worth of water (519 milliliters) eesi.org
    • Water consumption parallels energy use and emissions, creating significant strain on local water resources eesi.org (a quick arithmetic check of these figures follows)
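
    As a quick consistency check, these cited figures do hang together. Below is a minimal back-of-envelope sketch in Python that uses only the numbers above; the variable names and the 3.785 liters-per-US-gallon conversion are my own assumptions for illustration, not part of the sources.

        # Back-of-envelope checks of the figures cited in this comment.
        LITERS_PER_GALLON = 3.785  # US liquid gallon (assumed conversion)
        HOURS_PER_YEAR = 8760

        # Water: ~2 L of cooling water per kWh, and a medium data center at
        # ~110 million gallons/year. What average power draw does that imply?
        water_l_per_kwh = 2.0
        gallons_per_year = 110e6
        kwh_per_year = gallons_per_year * LITERS_PER_GALLON / water_l_per_kwh
        avg_power_mw = kwh_per_year / HOURS_PER_YEAR / 1000
        print(f"Implied average draw: {avg_power_mw:.1f} MW")  # ~23.8 MW, plausible

        # Electricity: 1,287 MWh for GPT-3 training vs. 120 U.S. homes per year
        # implies ~10.7 MWh per home, close to the U.S. household average.
        print(f"Implied per-home usage: {1287 / 120:.1f} MWh/year")  # ~10.7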

    Broader Environmental Impacts:

    • Most data center electricity comes from fossil fuels, increasing greenhouse gas emissions 
    • Chip manufacturing involves toxic chemicals and dirty mining practices for rare materials news.mit.edu
    • Electronic waste from short-lived hardware components contains hazardous substances like mercury and lead unep.org
    • Environmental impacts are concentrated in specific regions, creating disproportionate burdens on local communities blog.ansi.org

    Surveillance Concerns

    Government and Corporate Surveillance:

    • AI systems are intensifying surveillance capabilities and concentrating power greenpeace.org
    • ICE paid Palantir $30 million to build “ImmigrationOS,” an AI-powered surveillance platform for immigration enforcement greenpeace.org
    • Companies like Palantir defend their surveillance work with government agencies, with executives suggesting critics should protest “for more Palantir, not less” greenpeace.org
    • Historical context includes NSA’s mass surveillance programs and data centers that process vast amounts of communications data greenpeace.org

    Democratic Risks:

    • The rapid expansion of AI infrastructure “concentrates power, intensifies surveillance and deepens democratic risk” greenpeace.org
    • Surveillance technologies enable unprecedented monitoring capabilities that can be used for social control and population management
    • AI-powered surveillance systems raise concerns about civil liberties, privacy, and government overreach

    Community Impacts:

    • Local communities often bear the brunt of both environmental and surveillance impacts
    • Public resistance has emerged in places like New Brunswick, New Jersey, and San Marcos, Texas, where residents have protested data center development due to environmental concerns greenpeace.org
    • Surveillance technologies often disproportionately affect marginalized communities

    These concerns highlight the complex trade-offs in AI development, where technological advancement comes at significant environmental and social costs that are not always fully accounted for in the rush to deploy AI systems.


  2. I followed up with a prompt about gender, race, social class, and colonial bias. Here’s its reply:

    AI systems reflect and amplify existing societal biases across gender, race, social class, and colonial dimensions:

    Gender Bias

    Systemic Exclusion:

    • AI systems are often trained on unrepresentative datasets: women and people of color make up less than 5% of AI developers, while training datasets are dominated by men and by lighter-skinned individuals (around 80%) time.com
    • Gender bias manifests in hiring algorithms, facial recognition systems, and content generation
    • Amazon’s facial recognition system had more difficulty identifying female faces than male faces media.mit.edu

    Content Generation Bias:

    • Large language models consistently generate content with regressive gender stereotypes unesco.org
    • When prompted about different professions, AI associates certain roles with specific genders
    • UNESCO study found 60% of content generated by GPT-2 was negative regarding LGBTQ+ people unesco.org

    Racial Bias

    Facial Recognition Failures:

    • Facial analysis software failed to detect a dark-skinned researcher’s face until she put on a white mask
    • Amazon’s systems had significantly higher error rates for darker-skinned faces compared to lighter-skinned ones media.mit.edu
    • This creates what researchers call “the coded gaze”—bias that leads to discriminatory or exclusionary practices media.mit.edu

    Hiring and Employment:

    • AI resume screening shows pronounced racial bias—white-associated names were preferred 85.1% of the time vs. Black-associated names at 8.6% brookings.edu
    • Current approaches that remove explicit race references may actually worsen outcomes by making bias more subtle brookings.edu
    • Racial disparities in AI systems don’t necessarily correlate with existing workforce disparities, potentially amplifying inequalities brookings.edu

    Social Class Bias

    Economic Impact:

    • AI systems are often trained on data from developed nations, creating blind spots for Global South contexts
    • Algorithmic decision-making in areas like insurance, mortgage loans, and hiring can perpetuate class-based discrimination blogs.icrc.org
    • The “pale male data problem” excludes working-class and poor communities from training datasets time.com

    Access and Control:

    • Marginalized communities rarely have input into AI development yet bear the brunt of harmful applications
    • Social class determines who benefits from AI advances versus who faces surveillance and discrimination

    Colonial Bias

    Historical Power Imbalances:

    • AI systems trained on data from colonial contexts often reinforce historical power structures pmc.ncbi.nlm.nih.gov
    • The colonial legacy of “modernity” established concepts of race and gender that continue to subordinate marginalized groups pmc.ncbi.nlm.nih.gov
    • Global South perspectives are underrepresented in AI development and training data

    Cultural Bias:

    • When prompted about different ethnicities, AI systems exhibited high levels of cultural bias unesco.org
    • Colonial frameworks shape how AI understands and represents different cultures and peoples
    • Black feminist scholarship highlights how AI systems rely on concepts established during colonial times to subordinate Black women pmc.ncbi.nlm.nih.gov

    Root Causes and Solutions

    Intersectional Challenges:

    • Bias is particularly harmful for those at intersections of multiple marginalized identities
    • Current technical solutions often address single dimensions of bias without considering intersectional impacts

    Proposed Solutions:

    • Diverse development teams with representation across gender, race, and cultural backgrounds unwomen.org
    • Training data that actively includes diverse social backgrounds and removes historical biases unwomen.org
    • Transparent systems with human oversight and accountability mechanisms
    • Regulatory frameworks that address systemic bias in AI deployment

    The evidence shows that AI is not neutral—it mirrors and amplifies existing societal inequalities. Addressing these biases requires fundamental changes in who designs AI systems, what data they’re trained on, and how they’re deployed in society.

