Category Archives: Government

Copyright compromise: Music Modernization Act signed into law

Musicians are celebrating as the Music Modernization Act, an attempt to drag copyright and royalty rules into the 21st century, is signed into law after unanimous passage through Congress. The act aims to centralize and simplify the process by which artists are tracked and paid on digital services like Spotify and Pandora, and also extends the royalty treatment to songs recorded before 1972.

The problems in this space have affected pretty much every party. Copyright law and music industry practices were, as you might remember, totally unprepared for the music piracy wave at the turn of the century, and also for the shift to streaming over the last few years. Predictably, it isn’t the labels, distributors or new services that got hosed — it’s artists, who often saw comically small royalty payments from streams if they saw anything at all.

Even so, the MMA has enjoyed across-the-board support from all parties, because existing law is so obscure and inadequate. It will remain that way to a certain extent, since this isn't layman territory, but the act addresses some of the most glaring issues in the current media landscape.

The biggest change is probably the creation of the Mechanical Licensing Collective. This new organization centralizes the bookkeeping and royalty payment process, replacing a patchwork of agreements that required lots of paperwork from all sides (and as usual, artists were often the ones left out in the cold as a result). The MLC will be funded by companies like Pandora or Google that want to enter into digital licensing agreements, meaning there will be no additional commission or fee for the MLC, but the entity will actually be run by music creators and publishers.

Previously, digital services and music publishers would enter into separately negotiated agreements, a complex and costly process for anyone wanting to offer a comprehensive library of music, and one that stifled new entrants to the market. Nothing in the new law prevents companies from making these agreements directly, as some will surely prefer to do, but the MLC offers a simple, straightforward alternative, including a blanket license option that covers all the music in its registry. This could in theory nurture new services that can't spare the cash for the hundred lawyers other methods require.

There's one other benefit to using the MLC: you're shielded from liability for statutory damages. Assuming a company uses it correctly and pays its dues, it's no longer vulnerable to lawsuits alleging underpayment or other shenanigans — the kind of thing streaming providers have been weathering in the courts for years, with potentially massive settlements.

The law also improves payouts for producers and engineers, who have historically been under-recognized and certainly under-compensated for their roles in music creation. Writers and performers are critical, of course, but they’re not the only components to a great song or album, and it’s important to recognize this formally.

The last component of the MMA, the CLASSICS Act, is its most controversial, though even its critics seem to admit that it's better than what we had before. CLASSICS essentially extends standard copyright rules to works created before 1972, the year copyright law changed considerably and left pre-1972 works largely out of the bargain.

What's the problem? Well, it turns out that many works that would otherwise enter the public domain would be copyright-protected (or something like it — there are some technical differences) until 2067, giving them an abnormally long term of protection. What's more, these works would be put under this new protection automatically, with no need for the artists to register them. That may sound convenient, but it also means that thousands of old works would be essentially copyrighted even though their creators, if they're even alive, have expressed no intention of seeking that status.

A simple registry for those works was proposed by a group of data freedom advocates, but their cries were not heard by those crafting and re-crafting the law. Admittedly it’s something of an idealistic objection, and the harm to users is largely theoretical. The bill proceeded more or less as written.

In any event, the Music Modernization Act is now law; its unanimous passage is something of an achievement these days, though God knows both sides need as many wins as they can get.

FCC has a redaction party with emails relating to mystery attack on comment system

You may remember the FCC explaining that in both 2014 and 2017, its comment system was briefly taken down by a denial-of-service attack. At least, so it says — but newly released emails show that the 2014 case was essentially fabricated, and the agency has so aggressively redacted documents relating to the 2017 incident that one suspects it's hiding more than ordinary privileged information.

As a very quick recap: shortly after the comment period opened for both net neutrality and the rollback of net neutrality, there was a rush of activity that rendered the filing system unusable for a period of hours. This was corrected soon afterward, and the system's capacity was increased to cope with the heavier traffic.

A report from Gizmodo based on more than 1,300 pages of emails obtained by watchdog group American Oversight shows that David Bray, the FCC’s chief information officer for a period encompassing both events, appears to have advanced the DDoS narrative with no real evidence or official support.

The 2014 event was not called an attack until much later, when Bray told reporters following the 2017 event that it was. “At the time the Chairman [i.e. Tom Wheeler] did not want to say there was a DDoS attack out of concern of copycats,” Bray wrote to a reporter at Federal News Radio. “So we accepted the punches that it somehow crashed because of volume even though actual comment volume wasn’t an issue.”

Gigi Sohn, who was Wheeler's counsel at the time, shot down this idea: "That's just flat out false," she told Gizmodo. "We didn't want to say it because Bray had no hard proof that it was a DDoS attack. Just like the second time."

And it is the second time that is most suspicious. Differing on the preferred nomenclature for a four-year-old suspicious cyber event would not be particularly damning, but Bray’s narrative of a DDoS is hard to justify with the facts we do know.

In a blog post written in response to the report, Bray explained regarding the 2017 outage:

Whether the correct phrase is denial of service or “bot swarm” or “something hammering the Application Programming Interface” (API) of the commenting system — the fact is something odd was happening in May 2017.

Bray’s analysis appears sincere, but the data he volunteers is highly circumstantial: large amounts of API requests that don’t match comment counts, for instance, or bunches of RSS requests that tie up the servers. Could it have been a malicious actor doing this? It’s possible. Could it have been bad code hammering the servers with repeated or malformed requests? Also totally possible. The FCC’s justification for calling it an attack seems to be nothing more than a hunch.
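
For illustration, here is a minimal sketch of the kind of log comparison being described: counting API hits against comments actually filed in the same hour and flagging wild mismatches. The log format and threshold below are invented for the example, since the FCC's real logs and tooling are not public; note that a flagged hour cannot by itself distinguish an attacker from buggy client code, which is exactly the ambiguity at issue.

```python
from collections import Counter
from datetime import datetime

# Hypothetical log records: (ISO timestamp, event kind). The real FCC/ECFS
# logs are not public; this format is invented purely for illustration.
events = [
    ("2017-05-08T02:00:12", "api_request"),
    ("2017-05-08T02:00:13", "api_request"),
    ("2017-05-08T02:00:14", "comment_filed"),
    # ...thousands more...
]

def flag_suspicious_hours(events, ratio_threshold=50):
    """Flag hours where API requests vastly outnumber comments actually filed."""
    requests, comments = Counter(), Counter()
    for ts, kind in events:
        hour = datetime.fromisoformat(ts).strftime("%Y-%m-%d %H:00")
        (requests if kind == "api_request" else comments)[hour] += 1
    flagged = {}
    for hour, n_req in requests.items():
        n_com = comments.get(hour, 0)
        # A huge request-to-comment ratio is consistent with a bot swarm,
        # but equally consistent with misbehaving or malformed client code.
        if n_req > ratio_threshold * max(n_com, 1):
            flagged[hour] = (n_req, n_com)
    return flagged

print(flag_suspicious_hours(events, ratio_threshold=1))
```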

Later the FCC, via then-CIO Bray, would categorize the event as a "non-traditional DDoS attack" flooding the API. But beyond that it has produced so little information of any import that Congress has had to re-issue its questions in stronger terms.

No official documentation of either supposed attack has appeared, nor has the FCC released any data, even a year later, long after the comment period closed, improvements to the system were made and the CIO who evaded senators' questions departed.

But most suspicious is the extent to which the FCC redacted documents relating to the 2017 event. Having read through the trove of emails, Gizmodo concludes that “every internal conversation about the 2017 incident between FCC employees” has been redacted. Every one!

The FCC stated before that the “ongoing nature” of the threats to its systems meant it would “undermine our system’s security” to provide any details on the improvements it had made to mitigate future attacks. And Bray wrote in his post that there was no “full blown report” because the team was focused on getting the system up and running again. But there is also an FCC statement saying that “our analysis reveals” that a DDoS was the cause.

What analysis? If it's not a "significant cyber incident," as the FBI determined, why the secrecy? If there's no report or significant analysis from the day — wrong or right in retrospect — what is so sensitive about the emails that they have to be redacted en masse? Bray himself wrote more technical details into his post than the FCC has offered in the year since the event — was this information sent to reporters at the time? Was it redacted? Why? So little about this whole information play makes sense.

One reasonable explanation (and just speculation, I should add) would be that the data do not support the idea of an attack, and internal discussions are an unflattering portrait of an agency doing spin work. The commitment to transparency that FCC Chairman Pai so frequently invokes is conspicuously absent in this specific case, and one has to wonder why.

The ongoing refusal to officially document or discuss what all seem to agree was an important event, whether it’s a DDoS or something else, is making the FCC look bad to just about everyone. No amount of redaction can change that.

Washington sues Facebook and Google over failure to disclose political ad spending

Facebook and Google were paid millions for political advertising purposes in Washington but failed for years to publish related information — such as the advertiser’s address — as required by state law, alleges a lawsuit by the state’s attorney general.

Washington law requires that “political campaign and lobbying contributions and expenditures be fully disclosed to the public and that secrecy is to be avoided.”

Specifically, "documents and books of account" must be made available for public inspection during the campaign and for three years following; these must detail the candidate, the name of the advertiser, the address, the cost and method of payment, and a description of the services rendered.

Bob Ferguson, Washington’s attorney general, filed a lawsuit yesterday alleging that both Facebook and Google “failed to obtain and maintain” this information. Earlier this year, Eli Sanders of Seattle’s esteemed biweekly paper The Stranger requested to view the “books of account” from both companies, and another person followed up with an in-person visit; both received unsatisfactory results.

They alerted the AG’s office to these investigations in mid-April, and here we are a month and a half later with a pair of remarkably concise lawsuits. (This appears to be separate from the Seattle Election Commission’s allegations of similar failings by Facebook in February.)

All told Facebook took in about $3.4 million over the last decade, including “$2.5 million paid through political consultants and other agents or intermediaries, and $619,861 paid directly to Facebook.” Google received about $1.5 million over the same period, almost none of which was paid directly to the company. (I’ve asked the AG’s office for more information on how these amounts are defined.)

The total yearly amounts listed in the lawsuits may interest anyone curious about the scale of political payments to online platforms at the state level, so I'm reproducing them here.

Facebook

  • 2013: $129,099
  • 2014: $310,165
  • 2015: $147,689
  • 2016: $1,153,688
  • 2017: $857,893

Google

  • 2013: $47,431
  • 2014: $72,803
  • 2015: $56,639
  • 2016: $310,175
  • 2017: $295,473

(Note that these don't add up to the totals mentioned above; these are the numbers filed with the state's Public Disclosure Commission. 2018 amounts are listed but are necessarily incomplete, so I omitted them.)

At least some of the many payments making up these totals are not properly documented, and from the looks of it, this could amount to willful negligence. If a company is operating in a state and taking millions for political ads, it really can't be unaware of that state's disclosure laws. Yet according to the lawsuits, even basic data like the names and addresses of advertisers and the amounts paid were not collected systematically, let alone made available publicly.

It's impossible to characterize flouting the law in such a way as an innocent mistake, certainly not when the mistake is repeated year after year. This isn't an academic question: if the companies are found to have intentionally violated the law, the lawsuit asks that damages be tripled (technically, "trebled").

Neither company addressed the claims of the lawsuit directly when contacted for comment.

Facebook said in a statement that “Attorney General Ferguson has raised important questions and we look forward to resolving this matter with his office quickly.” The company also noted that it has taken several steps to improve transparency in political spending, such as its planned political ad archive and an API for requesting this type of data.

Google said only that it is “currently reviewing the complaint and will be engaging with the Attorney General’s office” and asserted that it is “committed” to transparency and disclosure, although evidently not in the manner Washington requires.

The case likely will not result in significant monetary penalties for the companies in question; even if fines and damages totaled tens of millions it would be a drop in the bucket for the tech giants. But deliberately skirting laws governing political spending and public disclosure is rather a bad look for companies under especial scrutiny for systematic dishonesty — primarily Facebook.

If the AG's suit goes forward and the companies are found to have intentionally avoided doing what the law required, they (and others like them) would be under serious pressure to comply in the future, not just in Washington but in other states where similar negligence may have taken place. AG Ferguson clearly seems to want to set a precedent and perhaps inspire others to take action.

I’ve asked the AG’s office for some clarifications and additional info, and will update this post if I hear back.

Google reportedly backing out of military contract after public backlash

A controversial Google contract with the U.S. military will not be renewed next year after internal and public outcry against it, Gizmodo reports. The program itself was not particularly distasteful or lucrative, but served as a foot in the door for the company to pursue more government work that may very well have been both.

Project Maven, as the program was known, essentially had Google working with the military to perform image analysis on sensitive footage like that from drones flying over conflict areas.

A small but vocal group of employees has repeatedly called the company out for violating its familiar (but now deprecated) “Don’t be evil” motto by essentially taking a direct part in warfare. Thousands of employees signed a petition to end the work, and several even resigned in protest.

But more damaging than the loss of a few squeaky wheels has been the overall optics for Google. When it represented the contract as minor, claiming it was essentially aiding in the administration of open-source software, the obvious question from the public was "so why not stop?"

The obvious answer is that it isn’t minor, and that there’s more to it than just a bit of innocuous support work. In fact, as reportage over the last few months has revealed, Maven seems to have been something like a pilot project intended to act as a wedge by which to gain access to other government contracts.

Part of the goal was getting the company’s security clearance fast-tracked and thus gaining access to data by which it could improve its military-related offerings. And promises to Pentagon representatives detailed far more than facilitation of garden-variety AI work.

Gizmodo’s sources say that Diane Greene, CEO of Google Cloud, told employees today at a meeting that the backlash was too much and that the company’s priorities as regards military work have changed. They must have changed recently, since discussions have been ongoing right up until the end of 2017. I’ve asked Google for comment on the issue.

Whether the expiration of Project Maven will represent a larger change to Google’s military and government ambitions remains to be seen; some managers are surely saying to themselves right now that it would be a shame to have that security clearance go to waste.

SPACE Administration would streamline federal oversight of commercial launches

As part of an ongoing effort to improve the regulatory conditions weathered by companies doing business in space, the Commerce Department has proposed to unify several offices under a new banner: the Space Policy Advancing Commercial Enterprise Administration.

In a statement issued this week, the Trump administration offered hints, but few hard details, on how it aims to streamline federal oversight of space. Space Policy Directive 1 had to do with pursuing missions to the moon and Mars; Directive 2 is more about housekeeping.

Part of that housekeeping directs Secretary of Commerce Wilbur Ross Jr to “transmit a plan to create a ‘one-stop shop’ within the Department of Commerce for administering and regulating commercial space flight activities,” and he seems to have been eager to comply.

“At my department alone, there are six bureaus involved in the space industry. A unified departmental office for business needs will enable better coordination of space-related activities,” Ross wrote. “When companies seek guidance on launching satellites, the Space Administration will be able to address an array of space activities, including remote sensing, economic development, data-purchase policies, GPS, spectrum policy, trade promotion, standards and technology and space-traffic management.”

Some of these changes have been talked about for a while, so this shouldn’t come as a shock to the offices affected. In fact, they may be pleased to hear it. Space regulation is a mire of interdepartmental memos and red tape, and U.S. leadership in the launch and satellite industry has arguably been in spite of it, not because of it.

Unifying a few offices is a start, but it will take more than administrative shuffling to clear out the regulatory cobwebs. This new administration alone will need to be permanently established by Congress, funded and assigned oversight. And synchronizing, deduplicating and otherwise improving our space policy across the various branches of government will be the work of many years, not a season.

Vermont passes first law to crack down on data brokers

While Facebook and Cambridge Analytica are hogging the spotlight, data brokers that collect your information from hundreds of sources and sell it wholesale are laughing all the way to the bank. But they’re not laughing in Vermont, where a first-of-its-kind law hems in these dangerous data mongers and gives the state’s citizens much-needed protections.

Data brokers in Vermont will now have to register as such with the state; they must take standard security measures and notify authorities of security breaches (no, they weren’t before); and using their data for criminal purposes like fraud is now its own actionable offense.

If you’re not familiar with data brokers, well, that’s the idea. These companies don’t really have a consumer-facing side, instead opting to collect information on people from as many sources as possible, buying and selling it amongst themselves like the commodity it has become.

This data exists in a regulatory near-vacuum. As long as they step carefully, data brokers can maintain what amounts to a shadow profile on consumers. I talked with Pam Dixon, director of the World Privacy Forum, about this practice.

“If you use an actual credit score, it’s regulated under the Fair Credit Reporting Act,” she told me. “But if you take a thousand points like shopping habits, zip code, housing status, you can create a new credit score; you can use that and it’s not discrimination.”

And while medical data like blood tests are protected from snooping, it's not against the law for a company to make an educated guess at your condition from the medicine you pay for at the local pharmacy. Now you're on a secret list of "inferred" diabetics, and that data gets sold to, for example, Facebook, which combines it with its own metrics and allows advertisers to target it.

Oh yes, Facebook does that. Or did, for years, only ending the practice under the present scrutiny. "When you looked at Facebook's targeting there were like 90 targets — race, income, housing status — that was all Acxiom data," Dixon told me; Acxiom is one of the largest brokers.

Data brokers have been quietly supplying everyone with your personal information for a long time. And advertising is the least of its applications: this data is used for informing shadow credit scores, restricting services and offers to certain classes of people, setting terms of loans, and more.

Vermont’s new law, which took effect late last week, is the nation’s first to address the data broker problem directly.

“It’s been a huge oversight,” said Dixon. “Until Vermont passed this law there was no regulation for data brokers. It’s that serious. We’ve been looking for something like this to be put in place for like 20 years.”

Europe, meanwhile, has leapfrogged American regulators with the monumental GDPR, which just entered into effect.

The issue, she said, has always been defining a data broker. It's harder than you might think, considering how secretive and influential these companies are. When every company collects data on its customers and occasionally monetizes it, who's to say where an ordinary business ends and data brokering begins?

Data brokers fought previous laws, and they fought this one. But Dixon, who along with the companies themselves took part in the state's hearings to create the law, said Vermont avoided the definition pitfall.

“The way the bill is written is extremely well thought through. They didn’t worry as much about the definition, but focused on the activity,” she explained. And indeed the directness and clarity of the law are a pleasant surprise:

While data brokers offer many benefits, there are also risks associated with the widespread aggregation and sale of data about consumers, including risks related to consumers’ ability to know and control information held and sold about them and risks arising from the unauthorized or harmful acquisition and use of consumer information.

Consumers may not be aware that data brokers exist, who the companies are, or what information they collect, and may not be aware of available recourse.

This straightforward description of a subtle and widespread problem greatly enabled by technology is a rarity in a world dominated by legislators and judges who regularly demonstrate ignorance on high-tech topics. (You can read the full law here.)

As Dixon pointed out, lots of companies will find themselves encompassed by the law’s broad definition:

“Data broker” means a business, or unit or units of a business, separately or together, that knowingly collects and sells or licenses to third parties the brokered personal information of a consumer with whom the business does not have a direct relationship.

In other words, anyone who collects data secondhand and resells it. There are a few exceptions for things like consumer-focused information services (411, for example), but it seems unlikely that any of the real brokers will escape the designation.

With the requirement to register, along with a few other disclosures brokers will have to make, consumers will know which brokers they can opt out of and how. And if they find themselves the victim of a crime that used broker data — a home loan rate secretly raised because of race, for instance, or a job offer rescinded because of a surreptitiously discovered medical condition — they have legal recourse.

Security and access controls at these companies will have to meet a minimum standard as well. And data breach rules mean prompt notification if personal data is leaked in spite of those measures.

It's a good first step, and one that should prove extremely beneficial to Vermonters; if it's as effective as Dixon expects, other states may soon imitate it.

Twitter will give political candidates a special badge during U.S. midterm elections

Ahead of 2018 U.S. midterm elections, Twitter is taking a visible step to combat the spread of misinformation on its famously chaotic platform. In a blog post this week, the company explained how it would be adding “election labels” to the profiles of candidates running for political office.

“Twitter has become the first place voters go to seek accurate information, resources, and breaking news from journalists, political candidates, and elected officials,” the company wrote in its announcement. “We understand the significance of this responsibility and our teams are building new ways for people who use Twitter to identify original sources and authentic information.”

These labels feature a small government building icon and text identifying the position a candidate is running for and the state or district where the race is taking place. The label information included in the profile will also appear elsewhere on Twitter, even when tweets are embedded off-site.

The labels will start popping up after May 30 and will apply to candidates in state governor races as well as those campaigning for a seat in the Senate or the House of Representatives.

Twitter will partner with nonpartisan political non-profit Ballotpedia to create the candidate labels. In a statement announcing its partnership, Ballotpedia explains how that process will work:

“Ballotpedia covers all candidates in every upcoming election occurring within the 100 most-populated cities in the U.S., plus all federal and statewide elections, including ballot measures. After each state primary, Ballotpedia will provide Twitter with information on gubernatorial and Congressional candidates who will appear on the November ballot. After receiving consent from each candidate, Twitter will apply the labels to each candidate profile.”

The decision to create a dedicated process to verify political profiles is a step in the right direction for Twitter. With major social platforms still in upheaval over revelations around foreign misinformation campaigns during the 2016 U.S. presidential election, Twitter and Facebook need to take decisive action now if they intend to inoculate their users against a repeat threat in 2018.

US news sites are ghosting European readers on GDPR deadline

A cluster of U.S. news websites went dark for readers in Europe as the EU's new privacy laws took effect on Friday. The ruleset, known as the General Data Protection Regulation (GDPR), outlines a robust set of requirements that internet companies collecting any personal data on consumers must follow. The consequences are considerable enough that the American media company Tronc decided to block all European readers from its sites rather than risk the ramifications of its apparent noncompliance.

Tronc-owned sites affected by the EU blackout include the Los Angeles Times, The Chicago Tribune, The New York Daily News, The Orlando Sentinel and The Baltimore Sun. Some newspapers owned by Lee Enterprises also blocked European readers, including The St. Louis Post-Dispatch and The Arizona Daily Star.

While Tronc deemed its European readership disposable, at least in the short term, most major national U.S. outlets took a different approach, serving a cleaned-up version of their websites or asking users for opt-in consent to use their data. NPR even pointed delighted users toward a plaintext version of its site.

While many of the regional papers that blinked offline for EU users predominantly serve U.S. markets, some are prominent enough to attract an international readership, prompting European users left out in the cold to openly criticize the approach.

Those criticisms are well-deserved. The privacy regulations that GDPR sets in place were first adopted in April 2016, meaning that companies had two years to form a compliance plan before the regulations actually went live today.

Facebook and Instagram launch U.S. political ad labeling and archive

Facebook today revealed that it's chosen not to shut down all political ads, because doing so could unfairly favor incumbents over candidates without the resources to buy pricey TV ads. Instead, it's now launching its previously announced "paid for by" labels on political and issue ads on Facebook and Instagram in the U.S., along with a publicly searchable archive of all politics-related ads that run in the U.S. That includes ads run by news publishers or others that promote articles with political content.

The labeling won't just apply to candidate and election ads, but also to those dealing with political issues such as "abortion, guns, immigration or foreign policy." Clicking through the labels that appear at the top of these News Feed ads leads to the archive, which isn't backdated and will only include ads from early May 2018 onward. The archive will hold ads for seven years, searchable by keyword or by the Page that ran them. It will also display each ad's budget and the number of people who saw it, plus aggregated, anonymized data on their age, gender and location.

A look at ads run by Donald Trump’s official page inside Facebook’s new political ad archive

Any advertiser that wants to run political ads must now go through Facebook's authorization process, which requires them to reveal their identity and location; starting today, there is only a week's grace period before unauthorized advertisers have their ads paused. Facebook plans to monitor political ads with a combination of artificial intelligence and 3,000 to 4,000 newly hired ad reviewers, part of the doubling of its security team from 10,000 to 20,000 this year.

An example of a “Paid for by” label on an Instagram ad

The reviewers and AI will analyze these ads' images, text and the outside websites they point to, looking for political content. They'll seek to avoid bias in classification by following guidelines on what constitutes one of 20 political issues from the decades-running Comparative Agendas Project. Users may also report unlabeled ads, which will then be reviewed, paused and archived if they're deemed political. Their buyers will then be required to go through the authorization process before they can buy more.

Facebook plans to provide a database, available via a forthcoming API, that will let watchdog groups, academics and researchers review how ads are being used to influence elections. These tools will open to other countries in the coming months, and Facebook plans to make all ads visible to everyone through a tool launching in June that's now being tested in Ireland and Canada.
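
Facebook had not published details of that API at launch, so any code is necessarily speculative. As a rough sketch of how a watchdog group might use such an archive, assuming a simple REST endpoint with token auth (the URL, parameters and field names below are all invented for illustration and are not Facebook's actual API):

```python
import requests

# Entirely hypothetical endpoint and field names; Facebook's archive API
# had not been published when this was written.
ARCHIVE_URL = "https://graph.facebook.com/political_ad_archive"

def search_political_ads(keyword: str, access_token: str) -> list:
    """Fetch archived political ads matching a keyword (illustrative only)."""
    params = {
        "q": keyword,
        "fields": "page_name,spend,impressions,demographics",
        "access_token": access_token,
    }
    resp = requests.get(ARCHIVE_URL, params=params, timeout=30)
    resp.raise_for_status()
    return resp.json().get("data", [])

# Example use: tally reported spend per Page for ads mentioning "election".
if __name__ == "__main__":
    ads = search_political_ads("election", access_token="YOUR_TOKEN")
    by_page = {}
    for ad in ads:
        by_page[ad["page_name"]] = by_page.get(ad["page_name"], 0) + ad["spend"]
    print(by_page)
```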

Facebook’s chief product officer Chris Cox writes that “We hope that in aggregate these changes will be a big step to improve the quality of civic engagement in our products, and to keep the public discourse strong.”

Facebook held a conference call to discuss the launch with reporters this morning. Unfortunately, it was timed to end just 15 minutes before the news went out, limiting journalists' ability to write timely, in-depth coverage.

Concerns With Facebook’s Push For Ad Transparency

While the labels and archive are a good step toward transparency, there are still a number of problems with the program. Most notably, the political action committees and organizations that often fund political ads can have confusing or misleading names that obscure their true purpose. Simply listing those organizations in the "Paid for by" labels or the archive won't necessarily tell users much about who the people behind the money are, unless they're willing to go digging across the internet themselves.

For example, the notorious conservative political donors the Koch brothers funnel cash through a PAC called Prosperity Action to fund Republican candidates like Paul Ryan. Seeing that an ad was paid for by Prosperity Action wouldn't immediately inform most Americans. On the other side, ads to unseat Paul Ryan have been bought by a Page called Stand Up America, which many might not immediately know is an anti-Trump group. If Facebook wants to truly give citizens a better understanding of where these political ads come from, it needs to add more information about the donors and political leanings behind PACs and other big spenders.

Another issue is who will have access to the archive API, given that the Cambridge Analytica scandal started with an academic researcher accessing Facebook data.

"We won't always get it right. We know we'll miss some ads and in other cases we'll identify some we shouldn't," write Facebook's Global Politics and Government Outreach Director Katie Harbath and Director of Public Policy Steve Satterfield. But Harbath described on the call how, even though all the monitoring of political ads will cost more than the revenue the company earns from them, Facebook felt it necessary to "make sure people have a way to express themselves and engage in political discourse in a transparent way."


These are the exact kind of tools and labels Facebook should have offered as soon as it began touting its ability to influence elections with its ads over a half decade ago. Better late than never, though.

FBI reportedly overestimated inaccessible encrypted phones by thousands

The FBI seems to have been caught fibbing again on the topic of encrypted phones. Director Christopher Wray estimated in December that the Bureau had almost 7,800 phones from 2017 alone that investigators were unable to access. The real number is likely less than a quarter of that, The Washington Post reports.

Internal records cited by the paper's sources put the actual number of encrypted phones at perhaps 1,200, though possibly as many as 2,000, and the FBI said in a statement that its "initial assessment is that programming errors resulted in significant over-counting of mobile devices reported." Supposedly, having three databases tracking the phones led to devices being counted multiple times.

Such a mistake would be so elementary that it's hard to conceive of how it would be possible. These aren't court notes, memos or unimportant random pieces of evidence; they're physical devices with serial numbers and names attached. The idea that no one thought to check for duplicates before giving a number to the director for testimony in Congress suggests either conspiracy or gross incompetence.
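
To underline how elementary the check is, here is a toy sketch of the reported failure mode, with invented serial numbers and database names: summing per-database counts tallies a phone once for every database it appears in, while a set union counts each physical device exactly once.

```python
# Invented serial numbers; the FBI's actual databases are not public.
case_db  = {"SN1001", "SN1002", "SN1003"}
lab_db   = {"SN1002", "SN1003", "SN1004"}
field_db = {"SN1003", "SN1004", "SN1005"}

# Naive tally: counts each device once per database it appears in.
naive_total = len(case_db) + len(lab_db) + len(field_db)

# Deduplicated tally: each physical device counted once.
actual_total = len(case_db | lab_db | field_db)

print(naive_total, actual_total)  # 9 vs. 5, the same inflation pattern
```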

The latter seems more likely after a report by the Office of the Inspector General that found the FBI had failed to utilize its own resources to access locked phones, instead suing Apple and then hastily withdrawing the case when its basis (a locked phone from a terror attack) was removed. It seems to have chosen to downplay or ignore its own capabilities in order to pursue the narrative that widespread encryption is dangerous without a backdoor for law enforcement.

An audit is underway at the Bureau to figure out just how many phones it actually has that it can’t access, and hopefully how this all happened.

One of the FBI's unmistakable goals has been to emphasize the problem of devices being fully encrypted and inaccessible to authorities, a trend known as "going dark." That much it has said publicly, and it is a serious problem for law enforcement. But it seems equally unmistakable that the Bureau is happy to be sloppy, deceptive or both in its advancement of a tailored narrative.
