Online platforms host vast amounts of user-generated content, including comments and reviews, some of which may be defamatory, false, or otherwise harmful. Whether a platform is liable for such content depends on statutory safe harbor provisions, such as Section 79 of the IT Act, and on the role the platform plays in moderating content. Understanding these rules clarifies when platforms can be held accountable and the extent of their responsibilities.
Platforms are treated as intermediaries and generally enjoy safe harbor protection so long as they act as passive conduits without editorial control, exercise due diligence, and remove unlawful content upon notice.
Platforms must implement mechanisms to act promptly on complaints about illegal or defamatory content once notified; failure to do so may result in the loss of safe harbor protection.
If a platform exercises editorial control over user content or knowingly allows harmful material to remain, it may lose this immunity and be held liable.
Users posting defamatory or false reviews are primarily responsible, but platforms can be liable if they do not remove such content after due notice.
Rules vary across jurisdictions; the EU's Digital Services Act, for example, imposes stricter due-diligence and content-management obligations on platforms.
Due to the volume of content, platforms rely on automated filters and user reporting to manage liability risks.
For example, suppose a user posts a false and damaging review about a small business on an e-commerce platform. The business requests removal of the review, but the platform delays action. If the delay continues after the platform receives proper notice, the platform risks losing safe harbor protection and may be held liable alongside the user who posted the review.