"Sticks and stones may break my bones, but words will never hurt me."
It's a saying commonly heard on the school playground, reportedly originating from an 1862 publication from the African Methodist Episcopal Church.
Although the phrase was meant to persuade children not to be concerned about name-calling, it's now thrown around on social media to convince users they shouldn't feel hurt by vitriolic language.
With people becoming accustomed to seeing abusive terms online, it's easy to forget how traumatising they can be for some, especially for vulnerable minorities.
The recent terror attacks in Christchurch also highlight that we cannot assume people spouting patently racist, sexist and inappropriate language online are harmless keyboard warriors.
Experts have questioned whether the attacks could have been prevented if intelligence agencies had placed greater focus on white supremacists and digital citizens who believe they can discriminate without repercussions.
Others suggest people who let inappropriate language or casual discrimination slide enable these abusers, emboldening them and their disgruntled ideology.
It reflects a growing push to leave the notion that words are harmless in the past.
Silence is enabling
In the wake of the events of March 15, experts and academics debated the foresight New Zealand authorities had in predicting the attacks and the intensity of focus on right-wing extremists.
The attack also prompted Kiwis to reflect on racism in New Zealand, with many social media users considering how they may have previously let discriminatory language in their newsfeeds go unchallenged.
The alleged gunman was a frequent contributor to online forums and social media platforms, regularly posting white supremacist material, and often without being pulled up on it.
After the tragedy, security expert Paul Buchanan described those who let abusive commentary slide as "enablers", complicit in allowing people to perpetrate such atrocities.
"It's the vast pool of enablers who sit silently in front of people who are ranting and raving and I think [this is the] pool that allows for these people to perpetrate these acts," he told The AM Show.
In February, Australian politician Mehreen Faruqi wrote in The Guardian about abuse she had experienced and how she was told if she didn't give trolls attention, they would go away.
But she said it's important to remember the person on social media perpetrating the abuse is real, and their disgruntled views need to be shut down before they influence others.
"Doing nothing causes harm. Exposing the messages and the messengers lays bare what so many of us experience more and more. It helps others to speak out. It helps build a community of supporters who make it harder for bullying behaviour to continue," she wrote.
This reflects the view that silence is a form of enabling, allowing extremists to believe their views are unopposed and, consequently, acceptable.
A phrase frequently floated following the March 15 attacks noted that while not all hate speech ends in violence, all violence starts with hate speech.
The definition of hate speech has long been debated, but generally includes anything that spreads, promotes or justifies hatred, violence or discrimination against a person or group of people.
Psychologist Sara Chatwin says the opportunity for people with similarly derogatory perspectives to come together online has allowed dark thoughts to fester and be emboldened.
"With the onset of social media and the huge amount of usage and growth, we are seeing far more offensive terms and we are seeing people who are inflamed and really aggressive about commenting," she told Newshub.
"People who perhaps wouldn't have had a say before or would have kept their comments to themselves are now okay in being quite denigrating and attacking others.
"I have had clients who have been trolled for three or five years, and have had to put up with these most disgusting comments from only one or two people, but one or two people who are anonymous and one or two people who use social media because it gives them the confidence to do that - we wouldn't have had that 20 years ago."
She says letting this denigration go unchallenged creates the perception it is normal and accepted - something which should stop.
"The way that we have just let social media technology just sift over us and we just accept it, we just let it happen, it is not acceptable.
"We have become anaesthetised to threats and all the rest of it because it just happens and we are used to it. We are just getting used to it, which is just really awful."
Emotional harm of language
Every person has their own triggers - reminders of personal experiences and historical events significant to the individual that may cause upset.
Chatwin says we have a duty to recognise the potential hurt our words can have, and work with one another to understand each other's back stories and tolerance levels.
"I think words are very powerful, so I do think that negative terms and denigrating terms can have a negative impact on people's self-worth and self-esteem," Chatwin says.
"What affects one person adversely, might not even affect another person. What really gets to a person's core may not have any impact on their brother or another related person. We all just have different responses and reactions to different terms."
"I think it is really helpful to have an awareness of what grinds people's gears or what sets people off because you don't necessarily want to navigate those waters too often."
Continuing to subject someone to terms they have signalled they find offensive could be considered bullying - regardless of whether the perpetrator views the terms as hurtful.
What is said on social media also often never disappears, creating a constant reminder of the attack for the victim.
"Any negative effect [online] can be a very prolonged negative effect, which isn't good for anyone," says Chatwin.
"Social media has that urgency, that accessibility, and also the anonymity value where anybody can say anything about anyone and it doesn't have to be true.
"Today, I think that people are a little more militant about voicing their opinions and more aggressive and social media has allowed that to happen because you can do it with anonymity."
But it's not just explicit discrimination that must be challenged.
Casual discrimination, often found within dynamic political debate online, must also be stamped out.
The Human Rights Commission tackled casual racism through its internationally acclaimed Give Nothing To Racism campaign.
Human Rights Commissioner Paul Hunt told Newshub it can refer to anything from an "awful, awful joke" or the "raised eyebrow or the put down".
He compared the effect of the terms to the iconic Gulliver's Travels image of Gulliver with hundreds of ropes thrown over him.
"The casual racism are those little ropes and collectively they can tie down powerful people and they do hurt and they do constrain and they do almost disenfranchise, and we have got to find ways of tackling it," says Mr Hunt.
However, Hunt told Newshub Kiwis are becoming better at stepping up and calling people out.
"We have now reached a point in New Zealand, where we are mature enough to identify these things and call them out," he says.
"Forty years ago, they were commonplace and unremarked and they weren't challenged, for the most part."
But the Christchurch attacks show more can be done to ensure people making dangerous remarks realise their views don't reflect the majority.
Hunt encourages people who see abuse, whether it be online or in real life, to stand up and support the victim.
"My view is, especially in the context of social media, is that the first strategy is to support the victims, record the incident, and report to the commission.
"We have got a whole department here that receives inquiries and receives a report, and around a third of our complaints are around racism discrimination."
According to Netsafe, racist comments or content online that is not directed at an individual should be reported to the platform that it's on.
Content that indicates that a person/people are in danger or that a crime is being committed should be reported to the Police immediately by calling 111.
But Chatwin questions if authorities have enough power to stop abuse online, saying that while people may want to speak up, a lack of concrete, identifiable consequence can be demotivating.
"I am a big fan of if you feel that someone is wronged and something is going on, online or anywhere, you talk up.
"Most of us don't like what is going on, but there are no teeth anywhere to actually enforce anything so people just keep on trolling, and bullying, and posting, and being offensive behind the keyboard," she told Newshub.
One answer has been the Harmful Digital Communications Act (HDC), introduced in July 2015 to regulate and investigate claims of online abuse - including disclosing sensitive personal facts, harassing a person or inciting them to commit suicide.
There have been more than 135 convictions under the Act since it was introduced.