Stable Diffusion

By our AI Review Team.
Last updated August 6, 2024

Powerful image generator can unleash creativity, but is wildly unsafe and perpetuates harm

Overall Risk: High

AI Type: Multi-Use

DISCLAIMER: We will not link directly to Stable Diffusion, DreamStudio, or Stability AI in this review, as we do not consider this a safe tool in any way.

 

What is it?

Stable Diffusion is a generative AI product created by Stability AI. It can create realistic images and art from a text-based description, combining concepts, attributes, and styles. Stability AI's full suite of image editing tools offers users a sophisticated range of options: extending generated images beyond the original frame (outpainting), making realistic modifications to existing user-uploaded or AI-generated pictures, and adding or removing elements while accounting for shadows, reflections, and textures (inpainting). Once users have the generated image they want, they can download and use it.

Stability AI released Stable Diffusion to the public in November 2022. It is powered by a massive data set of image-text pairs scraped from the internet, including a subset of 2.32 billion images that contain English text. The data set was created by LAION ("Large-scale Artificial Intelligence Open Network"), a nonprofit organization that is funded in part by Stability AI.

Stability AI's hosted version of Stable Diffusion can be accessed via its cloud service, DreamStudio. DreamStudio extends beyond text-to-image prompting by providing inpainting, outpainting, and image-to-image generation. Users purchase credits to pay for the computing cost of each request. Currently, $10 buys 1,000 credits, which Stability AI notes is enough for roughly 5,000 images.
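
Working out those listed rates: a credit costs one cent, and 1,000 credits ÷ ~5,000 images comes to about 0.2 credits, or roughly $0.002, per image. The "~" matters: the actual credit cost of a single image varies with generation settings such as resolution and step count.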

In addition, Stability AI has made all of Stable Diffusion's model weights and code available. Anyone is able to access, download, and use the full model.

How it works

Stable Diffusion is a form of generative AI, an emerging field of artificial intelligence. Generative AI is defined by the ability of an AI system to create ("generate") content that is complex, coherent, and original. For example, a generative AI model can create sophisticated writing or images.

Stable Diffusion uses a particular type of generative AI called a "diffusion model," named for the natural process of diffusion, which you've likely experienced before. A good example is dropping food coloring into a glass of water: no matter where the coloring starts, it eventually spreads throughout the entire glass and colors the water uniformly. Computer pixels have an equivalent: keep adding random motion to the pixels of any image and you always end up with "TV static," the image equivalent of the uniformly colored water. A machine-learning diffusion model works by, oddly enough, destroying its training data, successively adding that "TV static," and then learning to reverse the process to generate something new. Diffusion models are capable of generating high-quality images with fine details and realistic textures.
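
To make the "adding static" half of that process concrete, here is a minimal Python sketch of the forward (noising) step. This is our own toy illustration, not Stability AI's code: the linear noise schedule is a common textbook choice, and the real model applies this in a compressed representation of the image rather than to raw pixels.

```python
# Toy sketch of the forward ("noising") half of a diffusion model.
# Illustrative only: the schedule and names are our own assumptions.
import numpy as np

def forward_diffusion(image: np.ndarray, steps: int = 1000) -> np.ndarray:
    """Gradually mix an image toward pure Gaussian noise ("TV static")."""
    x = image.astype(np.float64)
    betas = np.linspace(1e-4, 0.02, steps)  # how much noise each step adds
    for beta in betas:
        noise = np.random.randn(*x.shape)
        x = np.sqrt(1.0 - beta) * x + np.sqrt(beta) * noise
    return x  # ends up statistically indistinguishable from pure static

static = forward_diffusion(np.zeros((8, 8)))  # any input ends up as noise

# Training teaches a neural network to undo these steps one at a time;
# generating an image means running that learned reversal from fresh noise.
```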

Stable Diffusion combines a diffusion model with a text-to-image model. A text-to-image model is a machine learning algorithm that uses natural language processing (NLP), a field of AI that allows computers to understand and process human language. Stable Diffusion takes in a natural language input and produces an image that attempts to match the description.
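
To show how those two pieces fit together, below is a toy, runnable sketch of the reverse (denoising) loop with text conditioning. Every name in it (text_encoder, denoiser, generate) is a stand-in we invented for illustration; in the real system, those roles are played by a trained text encoder and a large neural network operating over many carefully scheduled steps.

```python
# Toy sketch of text-conditioned generation. All components are invented
# stand-ins for illustration; none of this is Stability AI's code.
import numpy as np

def text_encoder(prompt: str) -> np.ndarray:
    """Stand-in for NLP: map a prompt to a fixed pseudo-embedding vector."""
    seed = sum(prompt.encode())  # deterministic toy seed derived from the text
    return np.random.default_rng(seed).standard_normal(16)

def denoiser(x: np.ndarray, t: int, cond: np.ndarray) -> np.ndarray:
    """Stand-in for the trained network: guess the noise to subtract at
    step t, nudged by the prompt embedding. A real model learns this."""
    return 0.1 * x + 0.01 * cond.mean()

def generate(prompt: str, steps: int = 50, size: int = 16) -> np.ndarray:
    cond = text_encoder(prompt)  # words -> conditioning vector
    x = np.random.default_rng(0).standard_normal((size, size))  # "TV static"
    for t in range(steps, 0, -1):  # remove a little noise at each step
        x = x - denoiser(x, t, cond)
    return x  # toy stand-in for a finished image

toy_image = generate("a watercolor painting of a fox")
```

The point is the structure, not the output: the prompt's embedding influences every denoising step, which is how a text description steers what emerges from the noise.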

Where it's best

  • Stable Diffusion has the potential to enable creativity and artistic expression, allow for visualization of new ideas, and create new concepts and campaigns.
  • Stability AI suggests that the best uses of Stable Diffusion include: generation of artworks and use in design and other artistic processes; applications in educational or creative tools; research on generative models; safe deployment of models that have the potential to generate harmful content; and probing and understanding the limitations and biases of generative models.

The biggest risks

  • Stable Diffusion's "view" of the world can shape impressionable minds, and with little accountability. Even when instructed to do otherwise, Stable Diffusion is susceptible to generating outputs that perpetuate harmful stereotypes, especially regarding race and gender. We confirmed this repeatedly with our own testing. These propensities toward harm are frighteningly powerful. The risk this poses to children especially, in terms of what they might see or be exposed to, is unfathomable. What happens to our children when they are exposed to the worldview of a biased algorithm repeatedly and over time? What view of the world will they assume is "correct," and how will this inform their interactions with real people and society? Who is accountable for allowing this to happen?
  • Stable Diffusion has been used to create child sexual abuse material (CSAM). Stable Diffusion has been used to create lifelike images—sometimes many thousands of them by a single bad actor—of child sexual abuse, including of the sexual abuse of babies and toddlers. These images have then been sold online. While Stable Diffusion's July 2023 update aimed to prevent it from generating some of the most objectionable content, the open source nature of the model allows for easy removal of those protections, or for older versions to be used, in applications built from the technology.
  • Inappropriate sexualized representations of women and girls harm all users. Despite many public failings, Stable Diffusion continues to easily produce inappropriately sexualized representations of women and girls, even with prompts seeking images of women professionals. This perpetuates harmful stereotypes, unfair bias, unrealistic ideals of women's beauty and "sexiness," and incorrect beliefs around intimacy for humans of all genders. Numerous studies have shown that greater exposure to images that promote the objectification of women adversely affects the mental and physical health of girls and women. Notably, while this is an issue for all text-to-image generators, it is especially harmful with Stable Diffusion because it combines an uncurated data set with minimal protections, such as refusals to generate images from prompts that violate the company's terms of service.
  • Stable Diffusion consistently and easily reinforces harmful stereotypes. While Stable Diffusion's July 2023 update aimed to prevent it from generating some of the most objectionable content, this remains a significant risk. Recent findings show continued reinforcement of harmful stereotypes, and the manner in which Stability AI has open-sourced the model allows anyone to remove those protections in new applications. A great resource for exploring this problem further can be found at Stable Bias. Relevant articles:
    - Tiku, N., Schaul, K., & Chen, S.Y. (2023, Nov. 1). How AI is crafting a world where our worst stereotypes are realized. Washington Post.
    - Crawford, A., & Smith, T. (2023, June 28). Illegal trade in AI child sex abuse images exposed. BBC.
    - Harlan, E., & Brunner, K. (2023, June 7). We are all raw material for AI. BR24.
    - Nicoletti, L., & Bass, D. (2023, June). Humans are biased. Generative AI is even worse. Bloomberg.
    - Vincent, J. (2023, Jan. 16). AI art tools Stable Diffusion and Midjourney targeted with copyright lawsuit. The Verge.
    - Edwards, B. (2022, Sept. 21). Artist finds private medical record photos in popular AI training data set. Ars Technica.
    - Wiggers, K. (2022, Aug. 24). Deepfakes for all: Uncensored AI art model prompts ethics questions. TechCrunch.
    - Wiggers, K. (2022, Aug. 12). This startup is setting a DALL-E 2-like AI free, consequences be damned. TechCrunch.
  • Stable Diffusion's advanced inpainting and outpainting features present new risks. While innovative and useful in many contexts, the high degree of freedom to alter images means they can be used to perpetuate harms and falsehoods. Images that have been changed to, for example, modify, add, or remove clothing, or add additional people to an image in compromising ways, could be used to either directly harass or bully an individual, or to blackmail or exploit them. These features can also be used to create images that intentionally mislead and misinform others. For example, disinformation campaigns can remove objects or people from images or create images that stage false events.

Limits to use

  • We did not receive participatory disclosures from Stability AI for Stable Diffusion. This assessment is based on publicly available information, our own testing, and our review process.
  • Those who choose to use Stable Diffusion should educate themselves on best practices in prompting to ensure responsible use as far as possible. Resources created for DALL-E, another text-to-image generative AI model, can help.

 

Common Sense AI Principles Assessment

The benefits and risks, assessed with our AI Principles - that is, what AI should do.

  • Put People First

    High risk

     

    • While Stable Diffusion is very easy and intuitive to use, the ethical risks described throughout this review make this ease of use even more problematic.
    • Those who choose to use Stable Diffusion should educate themselves on best practices in prompting to ensure responsible use as far as possible. Resources created for DALL-E, another text-to-image generative AI model, can help.
  • Be Effective

    High risk

     

    • The risk of exposure to unsafe content generated by Stable Diffusion is so high that we do not recommend direct use of this tool in any learning environment.
    • Users should not attempt to use Stable Diffusion to output images to visualize any process or scene that requires accuracy.
    • It is extremely easy when using Stable Diffusion to unwittingly produce images that reinforce unfair bias and stereotypes.
  • Prioritize Fairness

    High risk

     

    • Despite many public failings, Stable Diffusion continues to produce inappropriately sexualized representations of women and girls, even with neutral prompts or prompts seeking images of women professionals. Numerous studies have shown that greater exposure to images that promote the objectification of women adversely affects the mental and physical health of girls and women. Notably, while this is an issue for all text-to-image generators, it is especially harmful with Stable Diffusion because it combines an uncurated data set with minimal protections, such as refusals to generate images from prompts that violate the company's terms of service.
    • Even when instructed to do otherwise, Stable Diffusion is susceptible to generating outputs that perpetuate harmful stereotypes, especially regarding race and gender. A great resource for exploring this further can be found at Stable Bias. Our own testing confirmed this and the ease with which these outputs are generated. Some examples of what we found include:
      • Stable Diffusion attributed being "attractive" to White faces, "emotional" to female faces, "thug" to Black male faces, "terrorist" to stereotypes of Middle Eastern male faces, and "housekeeper" to Black and Brown females.
      • When asked to generate images of a "poor White person," Stable Diffusion would often generate images of Black men. When asked to pair non-White ethnicities with wealth, Stable Diffusion struggled to do so. Instead, it generated images associated with poverty or severely degraded images.
      • Stable Diffusion reflected and amplified statistical gender stereotypes for occupations (e.g., only female flight attendants and stay-at-home parents, male chefs, female cooks, male software developers).
    • Stable Diffusion struggles to represent ideas and people that do not appear in its training data, leading to disparate performance. This bias requires some users, especially those in marginalized groups, to be very specific in their prompts, while others find the tool intuitively tailored to their needs. This can also result in inferior images for outputs describing concepts outside of the training data set.
    • It is very easy to unwittingly produce images that reinforce unfair bias and stereotypes using Stable Diffusion. This can shape users' beliefs and worldview about what is "good" and "normal."
  • Help People Connect

    High risk

     

    • With close monitoring and oversight, Stable Diffusion can offer a unique way to boost social interaction and understanding. It can enable those with limited artistic talent to convey their ideas creatively and aid in visual storytelling.
    • It is very easy to use Stable Diffusion to generate images that can harm individuals and groups. On their own, generated images can reinforce harmful stereotypes about identity and occupation, and dehumanize individuals or groups. These could further be used to incite or promote hatred or disseminate disinformation. This can happen with an ease and speed that creates special concern for use of Stable Diffusion, regardless of whether these activities are against the terms of service.
    • DreamStudio offers advanced features like inpainting, outpainting, and image-to-image generation. These features present new risks. The high degree of freedom to alter images means that they can be used to dehumanize or incite hatred against individuals or groups. Images that have been changed to, for example, modify, add, or remove clothing, or to add people to an image in compromising ways, could be used to directly harass or bully an individual, or to blackmail or exploit them.
  • Be Trustworthy

    High risk

     

    • Stable Diffusion can easily generate or enable false or misleading content. Its generation and editing features can be used to create images that intentionally mislead and misinform others; for example, misinformation campaigns can remove objects or people from images or create images that stage false events. Because Stability AI has made minimal efforts to limit this, and images can be further manipulated with generative AI via in- and outpainting, false and harmful visual content can be generated at alarming speed. As image generators become more capable, it may become increasingly difficult to separate fact from fiction. This "liar's dividend" could erode trust to the point where democracy or civic institutions are unable to function.
    • Stable Diffusion does not appear to add watermarks to users' images, which removes barriers to the spread of misinformation and harmful stereotypes.
    • While this would be a violation of Stable Diffusion's terms of service, it would be very easy to generate images that could be used in misinformation and disinformation campaigns. Many of the organizations responsible for text-to-image generative AI models take steps to avoid the potential to depict public figures. By contrast, Stable Diffusion is capable of generating new content that depicts public figures. This makes it very easy to use it to create deepfakes.
  • Use Data Responsibly

    High risk

     

    • Because the data set used to power Stable Diffusion is uncurated, it has generated content that includes images with highly sensitive personally identifiable information (PII).
    • Many of the organizations responsible for text-to-image generative AI models take steps to avoid the potential to depict public figures. By contrast, Stable Diffusion is capable of generating new content that depicts public figures. This makes it very easy to use it to create deepfakes.
    • At the time of this review, there are no age or terms of service gates when signing up to use Stable Diffusion on DreamStudio. Unless a user seeks out the terms for themselves, they do not know what is and isn't allowed. This also means that there are no protections for children and teens from general use of the product.

    This review is distinct from Common Sense's privacy evaluations and ratings, which evaluate privacy policies to help parents and educators make sense of the complex policies and terms related to popular tools used in homes and classrooms across the country.

  • Keep Kids & Teens Safe

    High risk

     

    • Stable Diffusion has been used to create lifelike images of child sexual abuse, including of babies and toddlers, sometimes many thousands of them by a single bad actor. These images have then been sold online.
    • Stable Diffusion's "view" of the world can shape impressionable minds, and with little accountability. Even when instructed to do otherwise, Stable Diffusion is susceptible to generating outputs that perpetuate harmful stereotypes, especially regarding race and gender. We confirmed this repeatedly with our own testing.
    • Stable Diffusion has not been designed in any specific way to protect children. It has been shown to output images that can emotionally and psychologically harm users, perpetuate harmful stereotypes, and promote mis- and disinformation.
  • Be Transparent & Accountable

    High risk

     

    • The open source nature of Stable Diffusion means that there is significant transparency.
    • Users are able to exert human control over images they produce with Stable Diffusion by modifying prompts to effect change in the generated outputs.
    • The effects of bias and potential harm from images produced by Stable Diffusion can vary based on context, complicating assessment and mitigation during image creation. Content filters can also fail to catch images that are ethically dubious or violate Stable Diffusion's guidelines, because potential misuse is often a function of the context in which an image is used (e.g., disinformation, harassment, bullying) rather than the image itself. The challenge of identifying deepfakes, and of determining whether images were created with Stable Diffusion and products like it, remains unresolved, leaving a gap in our ability to mitigate harm when it occurs in the real world. Importantly, harm doesn't require a bad actor intending to misuse the product; something intended to be shared in private may be innocuous unless and until it is seen publicly. This makes it incredibly difficult, if not impossible, for programmatic efforts like policy enforcement, prompt refusals, and even human review to catch and stop content that looks fine but ultimately is not.
    • The available transparency information on popular repositories is not easy for a non-technical audience to understand. 
    • Stable Diffusion can, and has, caused real harm to people, and is not subject to meaningful human control in these instances.
    • There are insufficient mechanisms for remediation when harm does happen.


 

 
