Trump Picked Big Oil Over Big Corn—and Now Farmers Are Pissed

As the Trump administration fights political fires raging in Washington, another one is smoldering on the prairie. Farmers in the corn belt, the cluster of states centered on Iowa that produce the great bulk of corn and soybeans, supported Trump overwhelmingly in 2016, helping swing battleground states like Iowa and Wisconsin. But now many of […]

The FISA Oversight Hearing Confirmed That Things Need to Change

Section 215, the controversial law at the heart of the NSA’s massive telephone records surveillance program, is set to expire in December. Last week the House Committee on the Judiciary held an oversight hearing to investigate how the NSA, FBI, and the rest of the intelligence community are using and interpreting Section 215 and other expiring national security authorities.

Congress last looked at these laws in 2015 when it passed the USA FREEDOM Act, which sought to end bulk surveillance and to bring much-needed transparency to intelligence agency activities. However, the NSA itself has revealed that it has been unable to stay within the limits USA FREEDOM put on Section 215’s “Call Detail Records” (CDR) authority. In response to these revelations, we’ve been calling for an end to the Call Detail Records program, as well as for additional transparency into the government’s use of Section 215. If last week’s hearing made anything clear, it’s this: there is no good reason for Congress to renew the CDR authority.

The Call Detail Records Program Needs to End

Chairman Nadler began the hearing by asking Susan Morgan of the NSA if she could point to any specific instance where the CDR program helped to avert an attack on American soil. Morgan pushed back on the question, telling Chairman Nadler that the value of an intelligence program should not be measured by whether or not it stopped a terrorist attack, and that, as an intelligence professional, she wants to make sure the NSA has every tool in the toolbox available.

However, the NSA previously reported that it had deleted all the information it received from the 215 program since 2015. Morgan confirmed that part of the reason the NSA chose to mass delete the records was that not all the information was accurate or allowed under the law.

In other words, the NSA wants Congress to renew its authority to run a program that violates privacy protections and collects inaccurate information, without providing any way to measure whether the program was at all useful. The agency’s best argument for renewing the legal authorization to use the CDR provision is that it might be useful one day.

Rep. Steve Cohen asked the panel if they could reassure his “liberal friends” that there have been meaningful reforms to the program. The witnesses cited provisions of USA FREEDOM, passed in 2015, as evidence of post-Snowden reforms and safeguards.

However, their answer did not meaningfully address recent incidents where the NSA discovered that it had improperly collected information. Documents obtained by the ACLU include an assessment by the NSA itself that the overcollection had a “significant impact on civil liberties and privacy,” which is putting it mildly.

Fortunately, the committee did not appear to be convinced by this line of reasoning. As Rep. Sylvia Garcia told Morgan, “If I have a broken hammer in my toolbox, I don’t need to keep it.”

We agree. No surveillance authority should exist purely because it might someday come in handy, particularly one that has already been used for illegal mass surveillance.

Other Transparency Issues

In addition to the CDR program, Section 215 also allows the government to collect “business records” or other “tangible things” related to a specific order. Despite the innocuous name, the business records provision allows intelligence agencies to collect a vast range of documents. But we don’t have a sense of just what kinds of sensitive information are collected, and on what scale.

Rep. Pramila Jayapal pressed the witnesses on whether Section 215 allows the collection of sensitive information such as medical records, driver’s license photographs, or tax records. Reading from the current law, Brad Wiegmann, Deputy Assistant Attorney General, responded that while the statute does contemplate getting these records, it also recognizes their sensitive nature and requires such requests to be elevated for senior review.

In other words, the DOJ, FBI, and NSA confirmed that, under the right circumstances, they believe the current authority in Section 215 allows the government to collect sensitive records on a showing that they are “relevant” to a national security investigation. Plus, as more and more of our home devices collect information on our daily lives, all the witnesses said they could easily envision circumstances in which they would want footage from Amazon’s Ring, which EFF has already argued is a privacy nightmare.

In addition, Rep. Hank Johnson and Rep. Andy Biggs pressed the witnesses on whether the government collects geolocation information under Section 215, and if there has been guidance on the impact of the Supreme Court’s landmark Carpenter decision on these activities. Wiegmann acknowledged that while there may be some Fourth Amendment issues, the committee would need to have a classified session to fully answer that question. 

Additionally, when asked about information sharing with other federal agencies, none of the witnesses were able to deny that information collected under Section 215 could be used for immigration enforcement purposes. 

Both of these revelations are troubling. Carpenter brought on a sea change in privacy law, and it should be highly concerning to the public and to overseers in Congress that the intelligence community does not appear to have seriously considered its effect on national security surveillance.

As it considers whether or not to renew any of the authorities in Section 215, Congress must also consider what meaningful privacy and civil liberties safeguards to include. Relying on the NSA to delete millions of inaccurate records collected over many years is simply insufficient.

Secret Laws in Secret Court

In 2015, in the wake of Edward Snowden’s revelations about the NSA mass spying on Americans, Congress passed USA FREEDOM to modify and reform the existing statute. One of the provisions of that bill specifically requires government officials to “conduct a declassification review of each decision, order, or opinion issued” by the FISC “that includes a significant construction or interpretation of any provision of law.”

Both the text of the bill and statements from members of Congress who authored and supported it make clear that the law places new, affirmative obligations on the government to go back, review decades of secret orders and opinions, and make the significant ones public. 

However, the DOJ has argued in litigation with EFF that this language is not retroactive and therefore only requires the government to declassify significant opinions issued after June 2015. 

It also remains unclear how the government determines which opinions are significant or novel enough to be published, as well as how many opinions remain completely secret.

Allowing the Foreign Intelligence Surveillance Court (FISC) to interpret the impact of the Carpenter decision on Section 215 programs in secret means that the public won’t know if their civil liberties are being violated.

Releasing all significant FISC opinions, starting from 1978, will not only comply with what Congress required under USA FREEDOM in 2015, it will also help us better understand exactly what the FISC has secretly decided about our civil liberties. Adding a new provision that requires the FISC to detail to Congress how it determines which opinions are significant and how many opinions remain entirely secret would provide additional and clearly needed transparency to the process of administering secret law.

Conclusion 

Despite repeated requests from committee members to describe some way of measuring how effective these surveillance laws are, none of the witnesses could provide a framework. Congress must be able to determine whether any of the programs have real value and whether the agencies are respecting the foundational rights to privacy and civil liberties that protect Americans from government overreach.

Back in March, EFF, along with the ACLU, New America's Open Technology Institute, EPIC and others, sent a letter to the U.S. House Committee on the Judiciary, detailing what additional measures are needed to protect individuals’ rights from abuses under the Patriot Act and other surveillance authorities. Hearing members of the Intelligence Community speak before the Judiciary Committee reconfirmed just how essential it is that these new protections and reforms be enacted.

We look forward to working with the U.S. House Committee on the Judiciary to end the authority for the Call Detail Records program once and for all, and to ensure that there are real transparency mechanisms in the law to protect civil liberties.

Related Cases:  Jewel v. NSA

Help EFF Find Our Next Development Director

EFF’s member base is different from that of any other organization I know. I can’t count how many times someone has seen me in my EFF hoodie and excitedly approached me to show me their membership card. Our members are passionate about protecting civil liberties online, and being EFF members is part of their identity. They’re opinionated, thoughtful, and they understand the deeper moral issues behind today’s technology policy battles.

Does that sound like the kind of community you’d like to help build? Then we have a job that might be perfect for you.

We’re on the hunt for the newest member of EFF’s leadership team: a director for our fundraising team. Please help us get the word out to folks you know who might be a great fit and help us take our fundraising game to the next level.

This is a dream job for the right candidate. You’ll be leading a rock-solid team of 10 fundraising professionals who have already built a community of over 30,000 card-carrying EFF members around the world. 

We’re looking for someone who can blend the art of managing a team with the skill of effective fundraising. The right person is going to be a compelling communicator in writing and in person, able to paint an inspiring vision for EFF’s diverse community of supporters and for EFF’s development team. The majority of our funding comes from ordinary individuals, and we want someone with the social intuition to communicate well with everyone regardless of their backgrounds. 

We also need someone who understands EFF’s ethical approach to fundraising. We don’t just advocate for user privacy; we also defend it in our day-to-day practices, refusing to engage in the privacy-invasive practices that are all too common in the nonprofit community. We hold the security and privacy of our donors (and potential donors) to the highest standards. Our next Development Director is someone excited by that challenge.

The right candidate might not have been a development director in the past. For example, folks who have a lot of experience in management, foundation funding, and major gifts might have come from a background in nonprofit leadership. Maybe you’ve run your own smaller civil liberties nonprofit and are ready to step away from an executive director role, or maybe you have a background in political fundraising. We’re looking for a broad range of work experience, even if you haven’t held the title of “development director” before.

This role will be part of EFF’s senior leadership team, which guides the organization along with other directors. That’s why it’s so vital that we find the right person. We’re asking folks to help us get the word out by sharing this position and encouraging your qualified friends to apply. We know that if our big network of EFF friends and fans activates to spread the word via social media and other methods, this listing is sure to get in front of the right candidate.

We have awesome benefits and an amazing workplace environment; you can read more about both and apply via the job description.

South Africa Bans Bulk Collection. Will the U.S. Courts Follow Suit?

The High Court in South Africa has issued a watershed ruling, holding that South African law does not currently authorize bulk surveillance. The decision is a model that we hope other courts, including those in the United States, will follow.

Read the decision here.

As an initial matter, the South African court had no trouble making a legal ruling despite the obvious need for secrecy when discussing the details of state surveillance. This willingness to consider the merits of the case stands in sharp contrast to the overbroad secrecy claims of the U.S. government, which have, time and time again, successfully blocked consideration of the merits of bulk surveillance in open, public U.S. courts. The South African court based its ruling on a description of the surveillance provided by the government – no more detailed than the descriptions the U.S. government gives of its own bulk surveillance – as well as the description in the European Court of Human Rights judgment in Centrum For Rattvisa v. Sweden. And yet, in the U.S., this level of detail has been called insufficient to challenge bulk surveillance. South Africa is not an outlier. As the amicus brief by the Center for Democracy and Technology and the Open Technology Institute explains in our Jewel v. NSA case, the governments of the United Kingdom, Sweden, Germany, the Netherlands, Finland, France and Norway have all openly discussed the bulk surveillance they engage in, with conversations in both legislatures and open courts, including the European Court of Human Rights.

The South African court looked to whether there were any current South African laws that authorized bulk surveillance. The court rejected the government’s claim that bulk surveillance was authorized by general language in South Africa’s National Strategic Intelligence Act which, in several places, authorizes the government “to gather, correlate, evaluate and analyze domestic and foreign intelligence.” The court’s response is direct and refreshing: “What is evident is that nowhere in this text is there any instruction to mine internet communications covertly.” Later, it confirms: “Nowhere else in the NSIA is there a reference to using interception as a tool of information gathering, still less any reference to bulk surveillance as a tool of information gathering.”

The court then considers several other potentially relevant statutes and finds that none of them clearly authorizes bulk surveillance. It concludes that if the government believes that bulk surveillance is so important, “the least that can be required is a law that says intelligibly that the State can do so.”  

Ultimately, the court rules that more is needed:  “Our law demands such clarity, especially when the claimed power is so demonstrably at odds with the Constitutional norm that guarantees privacy.” 

This is a great ruling for the people of South Africa, with a court firmly recognizing that “no lawful authority has been demonstrated to trespass onto the privacy rights or the freedom of expression rights of anyone, including South Africans whose communications criss-cross the world by means of bulk interception.” It then declares that the activities are “unlawful and invalid.”

 The South African ruling should be carefully reviewed here in the United States, both by the judiciary and by lawmakers.  The U.S. law that the government relies upon for its bulk surveillance is similarly opaque.  Section 702 provides: “Notwithstanding any other provision of law, upon the issuance of an order in accordance with subsection (j)(3) or a determination under subsection (c)(2), the Attorney General and the Director of National Intelligence may authorize jointly, for a period of up to 1 year from the effective date of the authorization, the targeting of persons reasonably believed to be located outside the United States to acquire foreign intelligence information.”

As in South Africa, the statute nowhere authorizes bulk surveillance.  The most it authorizes is “acquiring” foreign intelligence information with other provisions requiring “minimization.”  What it does not do with regard to bulk surveillance is, in the words of the South African Court, “say intelligibly that the state can do” bulk surveillance. As in South Africa, such vague provisions simply should not be sufficient to “trespass on the privacy rights or the freedom of expression of anyone.” 

The decision by the South African court also sets an important precedent for how states that operate a wide-ranging surveillance apparatus should consider the privacy concerns of lawyers and journalists—a special protection that the U.S. government often ignores, especially when it comes to surveillance at the U.S. border.

 We look forward to the American courts recognizing what lawmakers, courts and governments around the world have already recognized – that bulk surveillance is not a secret and that courts are and must be empowered to decide whether it is legal. 

 

Related Cases:  Jewel v. NSA

I made an #OpenBSD image for #Pinebook, with some changes to make it more bearable. And it's now publicly available. :)

wiki.pine64.org/index.php/1080

More info on changes:
forum.pine64.org/showthread.ph

I'm not a big fan of pre-made images made by an anonymous person, after hours. But as a starter - why not - it may be enough. And anyone is more than welcome to build everything from scratch using official images/sources with a little bit of patching (it's possible, check instructions).

“Worth This Investment”: Memos Reveal the Scope and Racial Animus of GOP Gerrymandering Ambitions

Files from the hard drive backups of the late redistricting mastermind Thomas Hofeller outline plans by top Republican strategists to exploit the creation of “majority-minority” seats and ensure GOP redistricting dominance.


Researchers Assembled over 100 Voting Machines. Hackers Broke into Every Single One.

A report issued Thursday by some of the country’s leading election security experts found that voting machines used in dozens of states remain vulnerable to hacks and manipulation, warning that without continued efforts to increase funding, upgrade technology, and adopt voter-marked paper ballot systems, “we fear that the 2020 presidential elections will realize […]

Why viruses like Herpes and Zika will need to be reclassified: Biotech impact

New findings reveal many different structural models for viruses, which could eventually lead to more targeted antiviral vaccines by improving our understanding of how viruses form, evolve, and infect their hosts.

Longest coral reef survey to date reveals major changes in Australia's Great Barrier Reef

An in-depth look at Australia's Great Barrier Reef over the past 91 years concludes that, since 1928, intertidal communities have experienced major phase shifts as a result of local and global environmental change, leaving few signs that reefs will return to their initial state in the near future. The long-term implications of these changes highlight the importance of avoiding phase shifts in coral reefs, which may take many decades to reverse, if they can be reversed at all.

EFF to HUD: Algorithms Are No Excuse for Discrimination

The U.S. Department of Housing and Urban Development (HUD) is considering adopting new rules that would effectively insulate landlords, banks, and insurance companies that use algorithmic models from lawsuits that claim their practices have an unjustified discriminatory effect. HUD’s proposal is flawed, and suggests that the agency doesn’t understand how machine learning and other algorithmic tools work in practice. Algorithmic tools are increasingly relied upon to make assessments of tenants’ creditworthiness and risk, and HUD’s proposed rules will make it all but impossible to enforce the Fair Housing Act into the future.

What Is a Disparate Impact Claim?

The Fair Housing Act prohibits discrimination on the basis of seven protected classes: race, color, national origin, religion, sex, disability, or familial status. The Act is one of several civil rights laws passed in the 1960s to counteract decades of government and private policies that promoted segregation—including Jim Crow laws, redlining, and racial covenants. Under current law, plaintiffs can bring claims under the Act not only when there is direct evidence of intentional discrimination, but also when they can show that a facially-neutral practice or policy actually or predictably has a disproportionate discriminatory effect, or “disparate impact.” Disparate impact lawsuits have been a critical tool for fighting housing discrimination and ensuring equal housing opportunity for decades. As the Supreme Court has stated, recognizing disparate impact liability “permits plaintiffs to counteract unconscious prejudices and disguised animus” and helps prevent discrimination “that might otherwise result from covert and illicit stereotyping.”
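To make the statistical idea behind a disparate impact showing concrete, here is a minimal sketch in Python using entirely hypothetical numbers: it compares group selection rates and computes the impact ratio that analyses of this kind often start from. (The “four-fifths rule” threshold in the comments is a rule of thumb borrowed from employment law, not the Fair Housing Act’s legal standard.)

```python
# Minimal sketch of a selection-rate comparison, a common first step in
# disparate impact analysis. All numbers here are hypothetical.

def selection_rate(approved: int, applicants: int) -> float:
    """Fraction of applicants approved under the policy."""
    return approved / applicants

# Hypothetical outcomes of a facially neutral screening policy.
rate_group_a = selection_rate(approved=720, applicants=1000)  # 0.72
rate_group_b = selection_rate(approved=450, applicants=1000)  # 0.45

# Impact ratio: the disadvantaged group's rate over the advantaged group's.
impact_ratio = rate_group_b / rate_group_a

# The "four-fifths rule" heuristic flags ratios below 0.8 as potential
# evidence of adverse impact (a rule of thumb, not the legal test).
print(f"Impact ratio: {impact_ratio:.2f}")  # 0.62 -> flagged
```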

What Would HUD’s Proposed Rules Do?

HUD’s proposed rules do a few things. They would make it much harder for plaintiffs to prove a disparate impact claim. They would also create three complete defenses related to the use of algorithms that a housing provider, mortgage lender, or insurance company could rely on to defeat disparate impact lawsuits. That means that even after a plaintiff has successfully alleged a disparate impact claim, a defendant could still get off the hook for any legal liability by applying one of these defenses. The defendant’s use of an algorithm wouldn’t merely be a factor the court would consider; it would kill the lawsuit entirely.

These affirmative defenses, if adopted, would effectively insulate those using algorithmic models from disparate impact lawsuits—even if the algorithmic model produced blatantly discriminatory outcomes.

Let’s take a look at each of the three affirmative defenses, and their flaws.

The first defense a defendant could raise under the new HUD rules is that the inputs used in the algorithmic model are not themselves “substitutes or close proxies” for protected classes, and that the model is predictive of risk or some other valid objective. The problem? The whole point of sophisticated machine-learning algorithms is that they can learn how combinations of different inputs might predict something that any individual variable might not predict on its own. And these combinations of different variables could be close proxies for protected classes, even if the original input variables are not.

For example, say you were training an AI to distinguish between penguins and other birds. You could tell it things like whether a particular bird was flightless, where it lived, what it ate, etc. Being flightless isn’t a close proxy for being a penguin, because lots of other birds are flightless (ostriches, kiwis, etc.). And living in Antarctica isn’t a close proxy for being a penguin, because lots of other birds live in Antarctica. But the combination of being flightless and living in Antarctica is a close proxy for being a penguin because penguins are the only flightless birds that live in Antarctica.

In other words, while the individual inputs weren’t close proxies for being a penguin, their combination was. The same thing can happen with any characteristics, including protected classes that you wouldn’t want a model to take into account.
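Here is a minimal sketch of that effect in Python, training a scikit-learn decision tree on the hypothetical penguin data above (the toy dataset and feature names are ours, purely for illustration):

```python
# Toy illustration (hypothetical data): a model learns that the
# *combination* of two features is a near-perfect proxy for a class,
# even though neither feature is a close proxy on its own.
from sklearn.tree import DecisionTreeClassifier

# Features: [is_flightless, lives_in_antarctica]
X = [
    [1, 1],  # penguin: flightless AND lives in Antarctica
    [1, 0],  # ostrich: flightless, not Antarctic
    [1, 0],  # kiwi: flightless, not Antarctic
    [0, 1],  # snow petrel: flies, lives in Antarctica
    [0, 1],  # skua: flies, lives in Antarctica
    [0, 0],  # pigeon: neither
]
y = [1, 0, 0, 0, 0, 0]  # 1 = penguin, 0 = other bird

model = DecisionTreeClassifier().fit(X, y)

# Neither feature alone flags a penguin...
print(model.predict([[1, 0]]))  # flightless only        -> [0]
print(model.predict([[0, 1]]))  # Antarctic only         -> [0]
# ...but the combination does.
print(model.predict([[1, 1]]))  # flightless + Antarctic -> [1]
```

Swap “penguin” for a protected class and the lesson is the same: a model can reconstruct a protected characteristic from inputs that each look innocuous on their own.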

Apart from combinations of inputs, other factors, such as how an AI has been trained, can also lead to a model having a discriminatory effect. For example, if a face recognition technology is trained by using many pictures of men, when deployed the technology may produce more accurate results for men than women. Thus, whether a model is discriminatory as a whole depends on far more than just the express inputs.
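The training-data point can be sketched the same way: below is a minimal, self-contained Python example (synthetic data and assumed distributions, purely for illustration) in which a model trained mostly on one group ends up markedly less accurate for another, without ever seeing a group label.

```python
# Toy illustration: imbalanced training data alone can produce a group
# accuracy gap, even when no group-membership feature is used.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_group(n: int, shift: float):
    """Synthetic one-feature data; the true label boundary sits at `shift`."""
    x = rng.normal(loc=shift, scale=1.0, size=(n, 1))
    y = (x[:, 0] > shift).astype(int)
    return x, y

# Group A dominates the training set; group B's boundary differs.
xa, ya = make_group(950, shift=0.0)
xb, yb = make_group(50, shift=2.0)
model = LogisticRegression().fit(np.vstack([xa, xb]), np.concatenate([ya, yb]))

# Evaluate each group separately on fresh samples.
xa_test, ya_test = make_group(1000, shift=0.0)
xb_test, yb_test = make_group(1000, shift=2.0)
print("accuracy, group A:", model.score(xa_test, ya_test))
print("accuracy, group B:", model.score(xb_test, yb_test))
# Group B fares far worse: the learned threshold sits near group A's
# boundary, because group A supplied 95% of the training examples.
```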

HUD says its proxy defense allows a defendant to avoid liability when the model is “not the actual cause of the disparate impact alleged.” But showing that the express inputs used in the model are not close proxies for protected characteristics does not mean that the model is incapable of discriminatory outcomes. HUD’s inclusion of this defense shows that the agency doesn’t actually understand how machine learning works.

The second defense a defendant could raise under HUD’s proposed rules has a similar flaw. This defense shields a housing provider, bank, or insurance company if a neutral third party analyzed the model in question and determined—just as in the first defense—that the model’s inputs are not close proxies for protected characteristics and that the model is predictive of credit risk or another valid objective. This has the very same problem as the first defense: proving that the express inputs used in an algorithm are not close proxies for one of the protected characteristics—even when analyzed by a “qualified expert”—does not mean that the model itself is incapable of having a discriminatory impact.

The third defense a defendant could raise under the proposed rules is that a third party created the algorithm. This situation will apply in many cases, as most defendants—i.e., the landlord, bank, or insurance company—will use a model created by someone else. This defense would protect them even if an algorithm they used had a demonstrably discriminatory impact—and even if they knew it was having such an impact.

There are several problems with this affirmative defense. For one, it gets rid of any incentive for landlords, banks, and insurance companies to make sure that the algorithms they choose to use do not have discriminatory impacts—or to put pressure on those who make the models to work actively to avoid discriminatory outcomes. Research has shown that some of the models being used in this space discriminate on the basis of protected classes, like race. One recent study of algorithmic discrimination in mortgage rates, for example, found that Black and Latinx borrowers paid around 5.3 basis points more in interest with online mortgage applications when purchasing homes than similarly situated non-minority borrowers. Given this pervasive discrimination, we need to be creating more incentives to address and root out systemic discrimination embedded in mortgage and risk assessment algorithms, not getting rid of the incentives that exist.

In addition, it is unclear whether aggrieved parties can get relief under the Fair Housing Act by suing the creator of the algorithm instead, as HUD suggests in its proposal. In disparate impact cases, plaintiffs are required under law to point to a specific policy and show how that policy (usually with statistical evidence) results in a discriminatory effect. In a case decided earlier this year, a federal judge in Connecticut held that a third-party screening company could be held liable for a criminal history screening tool that was relied upon by a landlord and led to discriminatory outcomes. However, disparate impact case law around third-party algorithm creators is sparse. If HUD’s proposed rules are implemented, courts first must decide whether third-party algorithm creators can be held liable under the Fair Housing Act for disparate impact discrimination before they can even reach the merits of a case.  

Even if a plaintiff would be able to bring a lawsuit against the creator of an algorithmic model, the model maker would likely attempt to rely on trade secrets law to resist disclosing any information about how its algorithm was designed or functioned. The likely result would be that plaintiffs and their legal teams would only be allowed to inspect and criticize these systems subject to a nondisclosure order, meaning that it would be difficult to share information about their flaws and marshal public pressure to change the ways the algorithms work. Many of these algorithms are black boxes, and their creators want to keep it that way. That’s part of why it’s so important for plaintiffs to be able to sue the landlord, bank, or insurance company implementing the model: to ensure that these entities have an incentive to stop using algorithmic models with discriminatory effects, even if the model maker may try to hide behind trade secrets law to avoid disclosing how the algorithm in question operates. If HUD’s third-party defense is adopted, the public will effectively be walled off from information about how and why algorithmic models are resulting in discriminatory outcomes—both from the entity that implemented the model and from the creator of the model. Algorithms that affect our rights should be well-known, well-understood, and subject to robust scrutiny, not secretive and proprietary.

HUD claims that its proposed affirmative defenses are not meant to create a “special exemption for parties using algorithmic models” and thereby insulate them from disparate impact lawsuits. But that’s exactly what the proposal will do. HUD says it just wants to make it easier for companies to make “practical business choices and profit-related decisions.” But these three complete defenses will make it all but impossible to enforce the Fair Housing Act against any party that uses algorithmic models going forward. Today, a defendant’s use of an algorithmic model in a disparate impact case would be considered on a case-by-case basis, with careful attention paid to the particular facts at issue. That’s exactly how it should work. HUD’s proposed affirmative defenses are dangerous, inconsistent with how machine learning actually works, and will upend enforcement of the Fair Housing Act going forward.

What is EFF Doing, and What Can You Do?

HUD is currently accepting comments on its proposed rules, due October 18, 2019. EFF will be submitting comments opposing HUD’s proposal and urging the agency to drop these misguided and dangerous affirmative defenses. We hope other groups make their voices heard, too.
