This Orwellian Technology Automates Censorship And Is Even Worse Than It Sounds

As people around the world struggle to come to terms with the rise in terrorist attacks in recent years, and on the heels of a mass shooting in Orlando that left 49 people dead, many are looking to social media platforms to up their game in the battle against online radicalization. Now, a computer scientist claims to have developed an algorithm that can permanently remove extremist content from the Web before it has a chance to go viral.

“It is no longer a matter of not having the technological ability to fight online extremism,” Dr. Hany Farid, of Dartmouth, told International Business Times, “it is a matter of the industry and private sector partners having the will to take action.”

The technology, which Farid describes as “robust hashing,” is based on earlier software the scientist developed called PhotoDNA, which tracks and removes images of child pornography. The new algorithm would function similarly, identifying extremist images and videos on the Internet by searching for that content’s unique digital markers, or “hashes.”

“Every image, every video, every audio, has a distinct signature that we can extract from it. It’s a lot like human DNA,” Dr. Farid said on MSNBC. “So that when an image comes in, we have flagged it as child pornography, extremism, violence, calls to violence. So we extract the signature, and then we simply scan everything that comes in and compare it against that signature, and when we get a hit, that content is not allowed online.”
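In outline, the approach works like any hash-based matching system: compute a compact signature for each piece of flagged content, then compare the signature of every new upload against the database of known signatures. The sketch below illustrates the idea in Python using a simple “difference hash” (dHash) as a stand-in for Farid’s robust-hashing algorithm, whose internals are not public; the example database and matching threshold are illustrative assumptions.

```python
# Minimal sketch of hash-based content matching, using a simple
# "difference hash" (dHash) as a stand-in for Farid's proprietary
# robust-hashing algorithm, which has not been published.
# Requires the Pillow imaging library.
from PIL import Image

def dhash(image_path: str, hash_size: int = 8) -> int:
    """Compute a 64-bit perceptual hash: shrink the image, convert it
    to grayscale, and record whether each pixel is brighter than its
    right-hand neighbor."""
    img = Image.open(image_path).convert("L").resize(
        (hash_size + 1, hash_size), Image.LANCZOS
    )
    pixels = list(img.getdata())
    width = hash_size + 1
    bits = 0
    for row in range(hash_size):
        for col in range(hash_size):
            bits = (bits << 1) | int(
                pixels[row * width + col] > pixels[row * width + col + 1]
            )
    return bits

def hamming_distance(a: int, b: int) -> int:
    """Count the bits on which two hashes differ."""
    return bin(a ^ b).count("1")

# Hypothetical database of hashes extracted from previously flagged
# content; the value below is a placeholder, not a real signature.
FLAGGED_HASHES = {0x8F3C_21A0_55D9_0E17}

def is_flagged(image_path: str, threshold: int = 10) -> bool:
    """Block an upload if its hash lands within `threshold` bits of a
    known-bad hash, making the match robust to resizing, recompression,
    and other light edits that would defeat an exact-match hash."""
    h = dhash(image_path)
    return any(hamming_distance(h, bad) <= threshold for bad in FLAGGED_HASHES)
```

The tolerance for near matches is the crucial design choice: unlike a cryptographic hash, a perceptual hash changes only slightly when an image is cropped or re-encoded, which is what lets such a system catch copies that have been deliberately altered.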

With funding from Microsoft and in collaboration with the nonprofit think tank the Counter Extremism Project, Farid is now calling on social media platforms like Facebook and Twitter to adopt his technology, which he claims will be ready to roll out in a matter of months.

For its part, the United States government — which has long called upon tech companies to aid authorities in combating the radicalization of ordinary citizens — is fully on board with Farid’s approach.

“We welcome the launch of initiatives…that enable companies to address terrorist activity on their platforms and better respond to the threat posed by terrorists’ activities online,” Lisa Monaco, President Obama’s top counterterrorism expert, was reported as saying. “The innovative private sector that created so many technologies our society enjoys today can also help create tools to limit terrorists from abusing these technologies in ways their creators never intended.”

But many, including Dr. Farid himself, have voiced concern about the seemingly arbitrary manner in which such an algorithm tracks content, since nothing appears to stand in the way of programmers using the technology to target things like political speech that happens to be unfavorable to the establishment.

Foreign Policy magazine, for instance, notes that “if the project ever hopes to get off the ground it will have to overcome serious concern that using algorithms to police speech doesn’t end up as Orwellian as it sounds.”

Continuing:

“If defining what constitutes a terrorist is a famously tricky problem, nailing down what counts as terrorist rhetoric is doubly hard. Farid himself acknowledges that his algorithm could be turned toward nefarious ends. ‘You could also envision repressive regimes using this to stifle speech,’ he said.”

Dr. Farid echoed this sentiment while talking with the Washington Post, admitting that his technology is a “double-edged sword.”

“Those are where the hard questions are going to be asked,” he said. “What constitutes and does not constitute hate speech and calls to violence? And what is dangerous, and what is simply dissent?”

Matthew Prince, chief executive of the content delivery network CloudFlare, spoke to The Guardian about the fact that, so far, none of the social media companies have agreed to implement Farid’s algorithm, and that all their discussions about the prospect are being held away from the public eye.

“There’s no upside in these companies talking about it,” he told the publication. “Why would they brag about censorship?”

Others have questioned the underlying concept behind the software. Nicholas Glavin, a senior research associate at the U.S. Naval War College, told Vocativ:

This new technology is far from the silver bullet that will tackle extremism online. Focusing on the supply side of extremist content fails to address the push and pull factors that drive individuals to it in the first place.

Doubts about the software’s potential efficacy aside, the primary question at the moment is who gets to determine what qualifies as extremist content. Currently, that responsibility would fall to the Counter Extremism Project (CEP), the think tank Dr. Farid is collaborating with, and for which the scientist serves as a senior advisor.

The CEP is proposing a new center called the National Office for Reporting Extremism (NORex), which would house the database of flagged content the algorithm would draw from.
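In architectural terms, each platform would query that central database before allowing an upload to be published. A minimal sketch of what such a lookup might look like appears below; the endpoint, payload, and response format are all hypothetical, since no public NORex API has been announced.

```python
# Hypothetical client-side lookup against a centralized hash registry
# such as the proposed NORex. The URL, request payload, and response
# format are assumptions for illustration; no public API exists.
import requests

NOREX_LOOKUP_URL = "https://norex.example.org/api/v1/lookup"  # hypothetical

def is_registered_as_extremist(content_hash: str) -> bool:
    """Ask the central registry whether this hash matches flagged content."""
    resp = requests.post(
        NOREX_LOOKUP_URL, json={"hash": content_hash}, timeout=5
    )
    resp.raise_for_status()
    return resp.json().get("flagged", False)
```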

As such, and as Defense One writes: “CEP would play an important role in deciding whether or not tagged content was actually extremist in nature, or simply controversial.”

With such discretionary power centralized in one body, prudence demands a closer look at who’s holding that organization’s reins.

And the short answer, as perhaps should come as no surprise, is government.

A simple scan of the CEP “Leadership” page, what The Atlantic refers to as a “star-studded roster,” would no doubt give pause to anyone who stays abreast of geopolitical affairs.

The first advisory board member listed is former Connecticut senator Joseph Lieberman, whose run in Congress lasted nearly a quarter-century. When his service ended in 2013, he was Chairman of the Homeland Security and Governmental Affairs Committee and a senior member of the Armed Services Committee.

Another CEP advisor is Dennis Ross, counselor at the Washington Institute for Near East Policy and former director at the National Security Council. Prior to the Institute, Ross served two years as special assistant to President Barack Obama and a year as special advisor to then-Secretary of State Hillary Clinton.

Elliott Abrams, former deputy assistant to the president and deputy national security advisor under George W. Bush, is now a senior fellow at the Council on Foreign Relations. Years earlier, Abrams served as an assistant secretary of state in the Reagan administration.

Other advisory members include former ambassadors, current intelligence professionals, and a Nobel Laureate.

The list goes on.

Even the CEP president, Frances Townsend — who now works as an attorney in the private sector — spent 13 years in the Justice Department under the administrations of George H.W. Bush, Bill Clinton, and George W. Bush.

So for those who would turn to government institutions to show us the way with regard to such complicated issues as monitoring speech, having such potentially sweeping censorship power in the hands of former government officials is probably cause to rejoice.

For others, however, who view those same institutions as an impediment to truly free and open discussion of ideas, and who can envision the dark place to which the type of technology proposed by Dr. Farid and the CEP could lead where First Amendment rights are concerned, what’s unfolding now is considerable cause for alarm indeed.

Source: http://www.activistpost.com/
