The UK government has kicked off the new year with another warning shot across the bows of social media giants.
In an interview with the Sunday Times newspaper, security minister Ben Wallace hit out at tech platforms like Facebook and Google, dubbing such companies “ruthless profiteers” and saying they are doing too little to help the government combat online extremism and terrorism despite hateful messages spreading via their platforms.
“We should stop pretending that because they sit on beanbags in T-shirts they are not ruthless profiteers. They will ruthlessly sell our details to loans and soft-porn companies but not give it to our democratically elected government,” he said.
Wallace suggested the government is considering a tax on tech firms to cover the rising costs of policing related to online radicalization.
“If they continue to be less than co-operative, we should look at things like tax as a way of incentivizing them or compensating for their inaction,” he told the newspaper.
Although the minister did not name any specific firms, a reference to encryption suggests Facebook-owned WhatsApp is one of the platforms being called out (the UK’s Home Secretary has also previously directly attacked WhatsApp’s use of end-to-end encryption as an aid to criminals, as well as repeatedly attacking e2e encryption itself).
“Because of encryption and because of radicalization, the cost… is heaped on law enforcement agencies,” Wallace said. “I have to have more human surveillance. It’s costing hundreds of millions of pounds. If they continue to be less than co-operative, we should look at things like tax as a way of incentivizing them or compensating for their inaction.
“Because content is not taken down as quickly as they could do, we’re having to de-radicalize people who have been radicalized. That’s costing millions. They can’t get away with that and we should look at all options, including tax,” he added.
Last year in Europe the German government agreed a new law targeting social media firms over hate speech takedowns. The so-called NetzDG law came into effect in October — with a three-month transition period for compliance (which ended yesterday). It introduces a regime of fines of up to €50M for social media platforms that fail to remove illegal hate speech after a complaint (within 24 hours in straightforward cases; or within seven days where evaluation of content is more difficult).
UK parliamentarians investigating extremism and hate speech on social platforms via a committee enquiry also urged the government to impose fines for takedown failures last May, accusing tech giants of taking a laissez-faire approach to moderating hate speech.
Tackling online extremism has also been a major policy theme for UK prime minister Theresa May’s government, and one which has attracted wider backing from G7 nations — converging around a push to get social media firms to remove content much faster.
Responding now to Wallace’s comments in the Sunday Times, Facebook sent us the following statement, attributed to its EMEA public policy director, Simon Milner:
Mr Wallace is wrong to say that we put profit before safety, especially in the fight against terrorism. We’ve invested millions of pounds in people and technology to identify and remove terrorist content. The Home Secretary and her counterparts across Europe have welcomed our coordinated efforts which are having a significant impact. But this is an ongoing battle and we must continue to fight it together, indeed our CEO recently told our investors that in 2018 we will continue to put the safety of our community before profits.
In the face of rising political pressure to do more to combat online extremism, tech firms including Facebook, Google and Twitter set up a partnership last summer focused on reducing the accessibility of Internet services to terrorists.
This followed an announcement, in December 2016, of a shared industry hash database for collectively identifying terror accounts — with the newer Global Internet Forum to Counter Terrorism intended to create a more formal bureaucracy for improving the database.
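The core idea of such a shared hash database is simple: when one platform removes a piece of terrorist content, it publishes a fingerprint of that file so the other participants can catch re-uploads without re-reviewing them. The sketch below is a minimal illustration of that idea, not the industry's actual system; the real database reportedly uses robust perceptual hashes (which survive re-encoding and cropping), whereas the plain SHA-256 used here only catches byte-identical copies. All function names are hypothetical.

```python
import hashlib

def fingerprint(content: bytes) -> str:
    """Digest of a media file; identical re-uploads yield identical digests."""
    return hashlib.sha256(content).hexdigest()

# Shared database: fingerprints of content some participant already removed.
shared_hashes: set[str] = set()

def flag_and_share(content: bytes) -> None:
    """Called by the platform that first removes the content."""
    shared_hashes.add(fingerprint(content))

def is_known_terror_content(upload: bytes) -> bool:
    """Called by any participant at upload time to screen new material."""
    return fingerprint(upload) in shared_hashes
```

Sharing only hashes, rather than the media itself, lets companies cooperate without redistributing the offending content between them.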
But despite some public steps to co-ordinate counter-terrorism action, the UK’s Home Affairs committee expressed continued exasperation with Facebook, Google and Twitter for failing to effectively enforce their own hate speech rules in a more recent evidence session last month.
Though, in the course of the session, Facebook’s Milner claimed the company has made progress on combating terrorist content, and said it will double the number of people working on “safety and security” to circa 20,000 by the end of 2018.
In response to a request for comment on Wallace’s remarks, a YouTube spokesperson emailed us the following statement:
Violent extremism is a complex problem and addressing it is a critical challenge for us all. We are committed to being part of the solution and we are doing more every day to tackle these issues. Over the course of 2017 we have made significant progress through investing in machine learning technology, recruiting more reviewers, building partnerships with experts and collaboration with other companies through the Global Internet Forum.
In a major shift last November YouTube broadened its policy for taking down extremist content — to remove not just videos that directly preach hate or seek to incite violence but also take down other videos of named terrorists (with exceptions for journalistic or educational content).
The move followed an advertiser backlash after marketing messages were shown being displayed on YouTube alongside extremist and offensive content.
Answering UK parliamentarians’ questions about how YouTube’s recommendation algorithms are actively pushing users to consume increasingly extreme content — in a sort of algorithmic radicalization — Nicklas Berild Lundblad, EMEA VP for public policy, admitted there can be a problem but said the platform is working on applying machine learning technology to automatically limit certain videos so they would not be algorithmically surfaceable (and thus limit their ability to spread).
Twitter also moved to broaden its hate speech policies last year — responding to user criticism over the continued presence of hate speech purveyors on its platform despite having community guidelines that apparently forbid such conduct.
A Twitter spokesman declined to comment on Wallace’s remarks.
Speaking to the UK’s Home Affairs committee last month, the company’s EMEA VP for public policy and communications, Sinead McSweeney, conceded that it has not been “good enough” at enforcing its own rules around hate speech, adding: “We are now taking actions against 10 times more accounts than we did in the past.”
But regarding terrorist content specifically, Twitter reported a big decline in the proportion of pro-terrorism accounts being reported on its platform as of September, along with apparent improvements in its anti-terrorism tools — claiming 95 per cent of terrorist account suspensions had been picked up by its systems (vs manual user reports).
It also said 75 per cent of these accounts were suspended before they’d sent their first tweet.
Twitter is making good on its pledge to fight the persistent problems of spam, bots, harassment and misinformation that have plagued the social platform for years. Today, in its generally positive Q1 earnings report, the company announced that changes it has made related to TweetDeck and its API — two of the most common spam vectors on Twitter — in the past quarter have translated into real numbers that point to overall improvements in quality on the service.
Specifically, according to figures published in the company’s letter to investors, 142,000 apps, accounting for 130 million tweets, have had their API access revoked; and there are now 90 percent fewer accounts using TweetDeck to create junk tweets.
To note, Twitter’s new changes took effect only on March 23, and the earnings report covers only activity for the three months ending March 31 — meaning these numbers reflect barely a week of the new policy in action. In other words, the effect over the longer term will likely be even more significant.
The TweetDeck stat — 90 percent fewer accounts using TweetDeck to create false information and automated engagement spam — results from changes to TweetDeck itself as well as a new, more proactive approach that Twitter is taking.
In February, Twitter stopped allowing automated mass retweeting — or TweetDecking, as it’s been called by some — in which power users turned to TweetDeck to retweet posts across masses of accounts they managed, as well as across smaller groups of users who each managed masses of accounts, a technique that helps a tweet go viral. Some weeks later it moved to suspend a number of accounts that were guilty of the practice, although at least some of those suspensions were strongly disputed by the owners as mistakes on the part of the company.
Policies and enforcement around the company’s API have also been tightened up. The 142,000 applications that are no longer connected to the API were responsible for no less than 130 million “low-quality tweets.” It’s a sizeable volume on its own, but — given the Twitter model — it’s even more impactful since they spurred a number of interactions and retweets outside those spam accounts, perpetuated by individuals. As with TweetDeck, the API changes were part of the larger overhaul Twitter made around automation and multiple accounts.
It’s an interesting turn for the company: given how hugely the mass-action tweeting ability has been misused, it’s a wonder Twitter ever allowed it in the first place. It may have been one of those badly conceived decisions, made at a time when Twitter needed to demonstrate growth and was happy to take extra activity on the platform wherever it could find it.
Beyond its own desire to be a force for good and not abuse, it’s also something that Twitter has been somewhat forced to address. Social media sites like Twitter and Facebook have proven to have a huge role in helping to disseminate information, but that spotlight has taken on a particularly pernicious hue in recent times. The rise of fake news and what role that might have played in the outcome of the EU referendum in the U.K. and the most recent presidential election in the U.S.; and extreme cases of harassment online, are two of the uglier examples of where social sites might have an obligation to play a stronger role beyond that of simply being a conduit for information. With governments now also looking into the issue, Twitter taking better control of this is an important step, and perhaps one it would rather control itself.
In any case, this appears to be just the start of how Twitter hopes to raise the tone, and generally make its platform a safer and nicer place to be. “Our systems continue to identify and challenge millions of suspicious accounts globally per week as a result of our sustained investments in improving information quality on Twitter,” the company notes.
There are also some interesting plans in the pipeline. The company has been on a “health” kick of late, and has been looking to crowdsource suggestions for how to improve trust and safety, and reduce abuse and spam, on the platform. An RFP that it issued to stakeholders — and anyone interested in helping — has so far yielded 230 responses from “global institutions,” the company said. “We expect to have meaningful updates in the second quarter, and we’re committed to continuing to share our progress along the way.”
We are listening to the earnings webcast and will update with more related to this as we hear it.
In what can only be described as a stoner frat bro’s dream come true, two trucks carrying beer and chips collided in an accident that shut down parts of I-95 in Florida.
As you’ll see in the photos, there were piles of crushed beer cans and bags of chips scattered on the side of the road. All we needed was some Top 40 bangers playing at top volume and it would have been a party.
And, as one would expect, there are some great jokes on Twitter about the seemingly predestined accident.
Fortunately, neither driver was severely injured (although the driver of the Busch Beer truck was ticketed for failing to stay in a lane), and no one was desperate enough to scrounge up the undamaged beer cans and chips. So, overall, a decent day in Florida.
Photo via @ValleyNewsLive/Twitter
Today, on July 11, 2017, Donald Trump Jr. shell-shocked a whole lotta people when he released his own emails — practically pulling the trigger on himself, and providing irrefutable proof that there was some collusion between the Trump campaign and Russia.
I think someone is hate-retweeting me. She has 25K followers! Should I call her out?
Easy. Couldn’t be easier. Hate-favoriting and hate-retweeting are childish behavior. So if you want to be bold, by all means call her out. And if you want to be less bold but perhaps more efficient, just block her: game over.
And yet, can I be honest? This may be the most subtly amazing topic I’ve ever had to pretend to be a know-it-all about. Because if I push even a bit on your premise, it all goes soft. I can see ancillary dilemmas, qualifications, and niggling unknowns piling up until the kind of clear, objective truth I’m required to find gets hopelessly boxed in. There’s a lot here to pick apart. Let’s start with the corrosive, discombobulating nature of spite.
Ever heard of the Spite Fence? Go back to 1876. San Francisco’s Big Four, the four main bazillionaire railroad barons, all decided to build mansions on a scenic, empty hilltop: Nob Hill. At least, it was mostly empty. Within the large property purchased by one of these magnates, Charles Crocker, was a little house on a small, separate parcel owned by an undertaker named Nicholas Yung. Crocker wanted Yung gone; Yung wouldn’t sell. Crocker, bewildered that his money hadn’t made this inconvenience go away, kept making offers. Yung kept declining. So Crocker, overcome with spite, started a flame war. Or a wall war.
Crocker constructed his manor. Then he built a 30-foot-high wall on his land that effectively surrounded Yung’s property. It shut out the sunlight. It shut Yung in. It was ridiculous-looking, and people came from all over to gawk at it. There was a kind of class warfare brewing in the city at the time, and one activist pamphlet singled out Crocker’s fence as a very obnoxious symbol of the domineering spirit of the wealthy. The San Francisco Chronicle called the Spite Fence an inartistic monument of bitterness and a commemoration of malignity and malevolence. Yet Yung, the simple mortician who just wanted to live his own life in his own house, didn’t sell. The mortician was himself essentially buried, though still aboveground. But he just took it, took the high road, and let that towering manifestation of Crocker’s out-of-control id speak for itself. Yung never even retaliated, though he thought about it. His wife said, “There are some things to which people like ourselves do not care to stoop.”
You must feel like Nicholas Yung: tweeting through your own life in a pure, happy-go-lucky way, only to see a wall of spite building up in this other person’s timeline, one hateful retweet at a time, to rebuke you. And like I said at the outset: How nasty that is; how immature. But why do you think these likes and retweets are hate-likes and hate-retweets, as opposed to supportive likes and supportive retweets? What would lead you to this conclusion? I can’t help but wonder if there’s something you’re not telling me — if you yourself worry there’s an arrogant, airheaded, obnoxious, or self-congratulatory tone to what you’re tweeting, the sort of stance that typically elicits that kind of resentment online. Are you, for example, relentlessly issuing tidbits like “So lucky my newborn sleeps for 12 hours each night!!!!!! Almost enough time for tantric sex with my amazing partner!” or “Just had lunch with Bon Jovi! #blessed”?
I’m not saying you are. I’m just wondering. Honestly. I don’t want to blame the victim. My point is, the victim of one kind of obnoxiousness can be a perpetrator of another. You ought to give that some hard thought and figure out which side of this Spite Fence you’re actually standing on, before you poke your head over and start shouting.
Twitter thinks it has identified your BFF. The company is currently testing a new feature that highlights the tweets of a select, single account Twitter thinks you’ll want to see. Yes: a single person’s tweets will get their own special place on your timeline. The feature is similar to Twitter’s In Case You Missed It, which rounds up tweets from the accounts you engage with most regularly, or others Twitter thinks you might like.
And like In Case You Missed It, you can dismiss this new BFF module when it appears. Doing so indicates to Twitter that you want to see the feature less often.
Twitter confirmed the test is underway for select users on iOS, Android, and the web.
The account it chooses to show you is based on a number of signals like how often you engage with the account in question. Repeat engagement is also used to determine whether or not Twitter shows you the module at all.
Originally merely a chronologically ordered feed of information, Twitter has taken steps over the years to make its service more approachable, and it’s always testing new ways to boost tweets, likes, and retweets.
As part of these efforts, Twitter has tried to move away from the chronological timeline toward one that’s more algorithmically determined. The company has not gone as far as Facebook in completely re-ordering the content it displays. Instead, it pushes tweets it thinks you wouldn’t want to miss up to the top of the screen, to be shown when you return to its app after being away.
This is where you’ll find the new module, as well, if you’ve been opted in.
It’s unclear, however, how well Twitter figures out whose tweets you most want to see.
But those who are in the experiment now seem to find it funny that Twitter is pointing them to the tweets of a single individual.
Joked one Twitter user: “I don’t guess I like anybody enough to justify this new feature you’re trying.”