Betsy DeVos strikes out — in court

Education Secretary Betsy DeVos’ attempts to swiftly roll back major Obama-era policies at her agency are hitting a roadblock: federal courts. Judges have rebuffed DeVos’ attempts to change Obama policies dealing with everything from student loan forgiveness to mandatory arbitration agreements to racial disparities in special education programs. … “It speaks to the Department of Education’s unwillingness or inability to follow the basic law around how federal agencies conduct themselves,” said Toby Merrill, who directs Harvard Law School’s Project on Predatory Student Lending, which has brought some of the lawsuits against DeVos. Every administration has wins and losses in court, Merrill said, but most have done better at making sure they follow the legal rules of the road for rulemaking. “At the very least, they cross their Ts and dot their Is and therefore are less vulnerable to some of the procedural challenges that have been the undoing of so many of this Department of Education’s policies,” she said.

A.I. Can Improve Health Care. It Also Can Be Duped.

Last year, the Food and Drug Administration approved a device that can capture an image of your retina and automatically detect signs of diabetic blindness. This new breed of artificial intelligence technology is rapidly spreading across the medical field, as scientists develop systems that can identify signs of illness and disease in a wide variety of images, from X-rays of the lungs to C.A.T. scans of the brain. These systems promise to help doctors evaluate patients more efficiently, and less expensively, than in the past. …Ideally, such systems would improve the efficiency of the health care system. But they may carry unintended consequences, a group of researchers at Harvard and M.I.T. warns. In a paper [co-authored by Jonathan Zittrain] published on Thursday in the journal Science, the researchers raise the prospect of “adversarial attacks” — manipulations that can change the behavior of A.I. systems using tiny pieces of digital data. By changing a few pixels on a lung scan, for instance, someone could fool an A.I. system into seeing an illness that is not really there, or not seeing one that is.
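To make the mechanism the researchers describe more concrete, here is a minimal, hypothetical sketch of one widely known attack of this kind, the fast gradient sign method, written in Python with PyTorch. It is not the specific technique analyzed in the Science paper; the model, image and label are placeholders the reader would supply, and the example assumes a differentiable image classifier.

```python
# Illustrative sketch only: a basic FGSM-style adversarial perturbation against a
# hypothetical differentiable image classifier. Not the method from the Science
# paper; `model`, `image`, and `true_label` are placeholders.
import torch
import torch.nn.functional as F

def fgsm_perturb(model, image, true_label, epsilon=0.01):
    """Return a copy of `image` nudged so the model is more likely to misclassify it.

    model      -- any torch.nn.Module mapping an image batch to class logits
    image      -- tensor of shape (1, C, H, W), pixel values in [0, 1]
    true_label -- tensor of shape (1,) holding the correct class index
    epsilon    -- maximum per-pixel change (kept tiny so the edit is imperceptible)
    """
    image = image.clone().detach().requires_grad_(True)
    logits = model(image)
    loss = F.cross_entropy(logits, true_label)
    loss.backward()
    # Step each pixel slightly in the direction that increases the loss,
    # then clip back to the valid pixel range.
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0.0, 1.0).detach()
```

Because each pixel moves by at most `epsilon`, the altered scan looks identical to a human reader while the classifier's prediction can flip, which is the core concern the paper raises.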

Talking Headways Podcast: The Potential of a Fiberoptic Future

This week, we’re joined by Susan Crawford, the John A. Reilly Clinical Professor of Law at Harvard Law School. Crawford talks about her new book Fiber, which focuses on how cities in the United States are trying to build communications networks with this seemingly limitless technology, yet still get pushback from regulators and incumbent companies alike.

Sanders campaign unionization raises questions about strikes and conflicts of interest

United Food and Commercial Workers Local 400 is organizing the campaign workers for Sen. Bernie Sanders’, I-Vt., presidential campaign, but it is not endorsing his bid. The union can’t, in fact, because it has to represent the interests of the workers and not management, even though the workers all presumably want Sanders to win. In fact, the union will even push for the workers to have the right to go on strike against the campaign. … Wilma Liebman, former chairwoman of the National Labor Relations Board, said that just because organizing a campaign staff is novel, there’s no reason why it cannot be done. “Collective bargaining can be very flexible and adapted to the parties’ needs,” said Liebman, now senior research associate at Harvard Law School’s Labor and Worklife Program. “Some contracts are lengthy, spelling out detailed rules and procedures. Some are just a few pages long, setting out just basic values and principles.”

This big Supreme Court case has united business, labor and immigration groups. But some see a right-wing attack on government regulation

An unusual coalition of business, labor and immigration rights groups wants to change the way federal regulators interpret their own rules — but that effort has sparked fears that consumer and worker protections could be gutted in the process. The fight is due to play out in a Supreme Court argument set for Wednesday. The case involves James Kisor, a Marine veteran who is demanding that the Department of Veterans Affairs provide him with retroactive disability payments for post-traumatic stress disorder he developed while serving in brutal battles in Vietnam. …And they say that, because much of the law that applies to regulation is interconnected, any broad ruling striking down Auer could have unintended consequences. “Cooking up a new approach to precedent yields a toxic brew that can be harmful even to its creators,” wrote Adrian Vermeule, a professor at Harvard Law School.

Large Medical Bills Defying State Law

Health care providers in Mississippi continue to break the law by sending patients large, out-of-pocket medical bills that they don’t have to pay, concludes a recently released Harvard Law School report. The Legislature passed a law in 2013 to prohibit what is known as “balance billing” – when a provider bills a patient for the difference between the initial charges and the amount paid after insurance benefits are assigned. …In its report, the Center for Health Law and Policy Innovation of Harvard Law School found that Mississippi’s anti-balance billing law, which was one of the first and strongest enacted in the country, needs revising. “Despite the state’s leadership on this issue, Mississippians like Mills report that they are still receiving balance bills — in violation of state law. In fact, a January 2019 poll reported that four in 10 Mississippians have received or have a family member who received a surprise medical bill,” the report reads.

The good, the bad and the ugly in the fight over emoluments

A hearing in the Fourth Circuit on appeal of a lower court ruling allowing the District of Columbia and Maryland to sue President Trump under the Constitution’s emoluments clause went, to put it mildly, poorly. All three judges are GOP appointees, one by Trump himself. … Constitutional scholar Larry Tribe conceded, “It’s always treacherous to read too much into the questions counsel are asked by appellate judges, but the questions the Fourth Circuit panel asked … voicing skepticism — unwarranted, in my view — about harm to the State of Maryland and to DC suggest to me that these judges, unlike District Court Judge [Peter J.] Messitte, might have been readier to support standing for the competing hotels and restaurant workers like those who joined the CREW lawsuit that’s currently on review in the Second Circuit, where the district court failed to grant standing.” Tribe is co-counsel in the Second Circuit suit.

Legal reviews of weapons, means and methods of warfare involving artificial intelligence: 16 elements to consider

An op-ed by Dustin Lewis: What are some of the chief concerns in contemporary debates around legal reviews of weapons, means or methods of warfare involving techniques or tools related to artificial intelligence (AI)? One session of the December 2018 workshop on AI at the frontiers of international law concerning armed conflict focused on this topic. In this post, I outline a few key threshold considerations and briefly enumerate 16 elements that States might consider as part of their legal reviews involving AI-related techniques or tools. It is imperative, in general, for States to adopt robust verification, testing and monitoring regimes as part of the process to determine and impose limitations and—as warranted—prohibitions in respect of an employment of weapons, means or methods of warfare. Where AI-related techniques or tools are—or might be—involved, the design and implementation of legal review regimes might pose particular kinds and degrees of challenges as well as opportunities.

Expert views on the frontiers of artificial intelligence and conflict

Recent advances in artificial intelligence have the potential to affect many aspects of our lives in significant and widespread ways. Certain types of machine learning systems—the major focus of recent AI developments—are already pervasive, for example in weather prediction, social media services, search engine results and online recommendation systems. Machine learning is also being applied to complex applications that include predictive policing in law enforcement and ‘advice’ for judges when sentencing in criminal justice. Meanwhile, growing resources are being allocated to developing other AI applications. …We asked some of the experts to distill—in under 300 words—some of the key issues and concerns that they believe we aren’t thinking enough about now when it comes to the future of AI and armed conflict. …Naz K. Modirzadeh, Founding Director & Dustin A. Lewis, Senior Researcher, Harvard Law School Program on International Law and Armed Conflict: “Looking to the future of artificial intelligence and armed conflict, those of us concerned about international law should prioritize (among other things) deeply cultivating our own knowledge of the rapidly changing technologies. And we should make that an ongoing commitment. There is a perennial question about subject-matter expertise and the law of armed conflict; consider cyber operations, weaponeering and nuclear technology. When it comes to the increasingly impactful and diverse suite of techniques and technologies labeled ‘AI’, the concern takes on a different magnitude and urgency. That’s in no small part because commentators have assessed that AI has the potential to transform armed conflict—and not just the conduct of hostilities.”

Supreme Court Isn’t Sold on the Harms of Big Tech

An op-ed by Noah Feldman: European regulators are cracking down on the big technology companies — witness the 1.49 billion euro ($1.7 billion) fine against Google on Wednesday. At the same time, however, the U.S. Supreme Court is cautiously entering the business of protecting them. In a decision that on the surface looks minor, but is actually an important signal, the court on Wednesday sent a class-action suit against Google back to the lower courts to determine whether any of the plaintiffs actually had standing to sue. The justices’ action strongly hinted that a majority thinks the suit should never have been allowed to go forward in the first place. If that’s right, it’s an affirmation of the court’s new constitutional idea that it isn’t enough for Congress to create new legal rights that can be used against tech companies. There must be what the court calls “concrete” harm to users. That determination, crucially, rests with the Supreme Court, not with Congress.