How TechCrunch Enabled Real Conversations
209,291 comments processed in 22 months
27,337 harmful comments removed in 22 months
99 issues solved in 22 months
For major publishers, it's a challenge to moderate comments


Comment sections on social media give publishers the opportunity to engage readers, deepen relationships, and create a space for dialogue.

However, comment feeds on social media can quickly turn into a negative space where hate speech, spam, and abusive comments run rampant, making it a challenge to moderate engagement at scale.

Publishers need to establish a safe and protected space for open dialogue and discussion with the help of moderation experts and technology.
Despite our best efforts to contain them, trolls are a persistent group and keep managing to slip through the gates. - CNN
...horribly violent rape gifs... were consistently appearing in our comments. - Gawker Media
...the idea of comments on a website must give way to new realities of behavior in the marketplace. - Reuters
We believe that social media is the new arena for commenting, replacing the old onsite approach that dates back many years. - Recode 

About The Brand

TechCrunch, one of the biggest online media companies, enabled real conversations by collaborating with BrandBastion to remove spam, hate speech, and other harmful content in real time, 24/7. Additionally, TechCrunch received email alerts for urgent situations and gained valuable insights into engagement on its website.

Take Quick Action When Facing Negative Backlash


  • TechCrunch has been committed to open discourse since its inception, but hate speech, spam, and trolls forced TechCrunch to experiment with various commenting platforms.

  • The company needed to combat the problems spam and hate speech had provoked. Every article was getting hit with at least one piece of spam, making the comments section look less like a place for quality discussion.
Quality of comments is important to readers. If they go to the comments section and it's a dumpster fire of spam and hate, it might not be a place where they feel compelled to contribute. If a user goes to a comments section and sees quality discussion, he or she is much more likely to participate in the community. - Travis Bernard,
Director of Audience Development, TechCrunch
BrandBastion Safety


How BrandBastion handles moderation:
  • Trained AI models and human content specialists detect harmful content and potential issues. Harmful comments are then removed automatically, keeping the comments section hate-free and spam-free.
  • Email or text message alerts are sent for situations that require a client's attention (such as issues with posts or articles or a sudden increase in negative sentiment) to ensure that the client knows what is going on in the comments section 24/7.
  • Monthly reports are sent detailing all content removed during the month.
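The workflow above (classify each comment, auto-remove harmful ones, and alert when things spike) can be sketched in a few lines. This is a minimal illustration only; the function names, labels, and the keyword-matching "classifier" are hypothetical stand-ins, not BrandBastion's actual system or API, which combines trained AI with human specialists.

```python
# Illustrative comment-moderation flow: classify, remove, alert.
# All identifiers here are made up for the sketch.

HARMFUL_LABELS = {"spam", "malware", "scam", "hate_speech", "violent"}

def classify_comment(text: str) -> str:
    """Toy stand-in for the AI + human review step."""
    lowered = text.lower()
    if "http://" in lowered or "buy now" in lowered:
        return "spam"
    if any(word in lowered for word in ("idiot", "hate you")):
        return "hate_speech"
    return "ok"

def moderate(comments):
    kept, removed, alerts = [], [], []
    for comment in comments:
        label = classify_comment(comment)
        if label in HARMFUL_LABELS:
            removed.append((comment, label))  # auto-remove harmful content
        else:
            kept.append(comment)
    # Alert the client when harmful content spikes in a batch
    # (threshold of 50% chosen arbitrarily for the sketch).
    if comments and len(removed) / len(comments) > 0.5:
        alerts.append("sudden increase in harmful comments")
    return kept, removed, alerts
```

A real deployment would replace the keyword checks with trained models plus human review, and route the alerts to email or text message as described above.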
What harmful comments were removed?
  • Spam, malware, and scams
  • Violent, offensive, or inappropriate content such as hate speech
What kind of issues were addressed?
  • Technical issues with the article (mobile formatting, dead hyperlinks, etc.)
  • Large volumes of negative reactions to the article
  • Typos in company or employee names
How moderation is traditionally done
  • Pre-moderation. A large team of moderators reviews and approves every commenter or comment before it is published. This process is time-consuming and costly, and it is not feasible as a company scales up.
  • Crowdsourcing. Moderation is left to the community, usually through social commenting plugins. This is often a band-aid that fails to solve the whole problem, and it puts the burden on the very users who should be protected from hate speech: the brands with the most social media engagement end up asking their most engaged readers to police the worst comments.

For a site as large as TechCrunch, it’s important to have someone looking through comments and listening to what the community is saying. BrandBastion has made a seamless integration into our team, and I highly recommend their services to anyone that might be interested.

Travis Bernard, Director of Audience Development, TechCrunch
A safe space is created for conversations to take place


Over a period of 22 months, BrandBastion enabled the following for TechCrunch: 209,291 comments processed, 27,337 harmful comments removed, and 99 issues solved.

By ensuring that harmful content was removed in real time, 24/7, the comment sections remained free from spam, scams, hate speech, and other harmful content, allowing readers to have real conversations about the topics at hand.

  • Spam, scams, and malware: 24,000 comments removed
  • Violent, offensive, or inappropriate content: 500 comments removed
Email alerts are sent 24/7 in the case of any issues
  • Email alerts about situations requiring the TechCrunch team’s attention were sent in real time, 24/7, ensuring that the team always knew what commenters were flagging. Over the same 22-month period, BrandBastion helped solve 99 issues: 17 typos in articles, 56 technical issues with articles, and 26 other issues.
  • According to the Edelman Trust Barometer Global Report, 66% of people believe the media is more concerned with attracting a big audience than with reporting. Technical issues, broken links, and typos in articles all erode reader trust. Such incidents cannot always be avoided, but with the right safeguards in place they can be corrected quickly, before they escalate.

We can help you too.

Book a discovery meeting to understand how BrandBastion can help you achieve your goals.