Hundreds of posts spreading misinformation about Covid-19 are being left online, according to a report from the Center for Countering Digital Hate.
Some 649 posts were reported to Facebook and Twitter, including false cures, anti-vaccination propaganda and conspiracy theories around 5G.
Some 90% of the reported posts remained visible online afterwards without any warnings attached, the report suggests.
Facebook said the sample was "not representative".
A spokesperson for Facebook said: "We are taking aggressive steps to remove harmful misinformation from our platforms and have removed hundreds of thousands of these posts, including claims about false cures.
"During March and April we placed warning labels on around 90 million pieces of content related to Covid-19, and these labels stopped people viewing the original content 95% of the time.
"We will notify anyone who has liked, shared or commented on posts related to Covid-19 that we've since removed."
Twitter said that it was prioritising the removal of Covid-19 content "when it has a call to action that could potentially cause harm".
"As we've said previously, we will not take enforcement action on every Tweet that contains incomplete or disputed information about Covid-19. Since introducing these new policies on March 18 and as we've doubled down on tech, our automated systems have challenged more than 4.3 million accounts which were targeting discussions around Covid-19 with spammy or manipulative behaviours."
Imran Ahmed, chief executive of the Center for Countering Digital Hate, said the firms were "shirking their responsibilities".
"Their systems for reporting misinformation and dealing with it are simply not fit for purpose.
"Social media giants have claimed many times that they are taking Covid-related misinformation seriously, but this new research shows that even when they are handed the posts promoting misinformation, they fail to take action."
Rosanne Palmer-White, director of youth action group Restless Development, which also took part in the survey, said young people were "doing their bit to stop the spread of misinformation" but social media firms were "letting them down".
Both Twitter and Facebook face questions from the UK's Digital, Culture, Media and Sport sub-committee on the way they are handling coronavirus misinformation.
MPs were not happy with an earlier session. They demanded more detailed answers and said more senior executives should attend the next meeting.
For the study, ten volunteers from the UK, Ireland and Romania searched social media for misinformation from the end of April to the end of May.
They found posts suggesting sufferers can get rid of coronavirus by drinking aspirin dissolved in hot water or by taking zinc and vitamin C and D supplements.
Twitter was deemed the least responsive, with only 3% of the 179 posts acted upon.
Facebook removed 10% of the 334 posts reported and flagged another 2% as false. Instagram, which Facebook owns, acted on 10% of the 135 complaints it was sent.
Both social networks insist they have made efforts to bring fake news about the coronavirus under control.
Twitter has begun labelling tweets that spread misinformation about Covid-19. Facebook has also removed some content, including from groups claiming the rollout of the 5G network was a cause of the spread of the virus.
By Marianna Spring, Specialist disinformation and social media reporter
All eyes have been on how social media sites have tackled misleading information on their platforms in recent weeks – and all eyes will be on them again today, as they’re grilled by MPs.
Over the course of the pandemic, different social media companies have made a number of changes to their policies to try to tackle harmful and misleading information. Facebook and YouTube both say they have cracked down on conspiracies that could do damage.
And in a high-profile move, Twitter decided to label a misleading tweet by US President Donald Trump – although it was one about postal voting rather than coronavirus.
But these changes in policy don’t seem easy to implement. In practice, misleading posts are often not reported – or when they are, they are not always removed. The question of the harm different posts pose appears to be at the root of this problem.
Messages posing an immediate threat to life are removed more quickly. However, misleading messages that pose a less immediate threat can prove to be just as dangerous – including those from anti-vaccination groups.
A BBC investigation into the human cost of misinformation found that the potential for indirect harm caused by conspiracies and bad information that undermine public health messaging, or an effective vaccine, could be huge.
And as misinformation about protests and other news events floods social media, it becomes apparent that the pandemic is just one of many battles against misinformation to be fought.