Professor Abbott has a variety of research interests that reflect his multidisciplinary background. He writes on a range of topics in health law, intellectual property, and law and technology. He has published significant works on health care financing, access to medicines, and big data, and he has completed clinical studies in medicine as well as legal research projects on traditional knowledge, integrative health, and emerging technologies. For further information and downloadable papers, please see SSRN.
Prospective students interested in pursuing research in these areas are welcome to make informal enquiries.
Director of Research (Impact)
Debate with Richard Epstein. "FDA Involvement in Off-Label Drug Use." 13 Jan. 2014.
UCLA School of Medicine Surgery Grand Rounds. "California's Malpractice System: Fair or Failed?" 27 Aug. 2014.
BBC Radio 3, Free Thinking. "Robots, Makt Myrkranna." (Audio).
Scott Graham. "Patent Law at the AI Crossroads." Law.com, 1 Apr. 2016.
"Measles Outbreak: Expert Weighs in on Vaccine Debate." ABC7, 4 Feb. 2015. (Video).
"Perineal Self-Acupressure for Constipation?" New England Journal of Medicine Journal Watch, 7 May 2015. (Audio).
Basken, Paul. "Supreme Court's Ruling on Gene Patents Opens New Avenues for University Research." The Chronicle of Higher Education, 17 Jun. 2013.
Wharton, David. "Pac-12 Football Coaches Hush-Hush About Hurt Players." Los Angeles Times, 20 Sep. 2012: C1. Print.
Find me on campus: Room 3 AB 05
Existing technologies can already automate most work functions, and the cost of these technologies is decreasing at a time when human labor costs are increasing. This, combined with ongoing advances in computing, artificial intelligence, and robotics, has led experts to predict that automation will lead to significant job losses and worsening income inequality. Policy makers are actively debating how to deal with these problems, with most proposals focusing on investing in education to train workers for new job types, or investing in social benefits to distribute the gains of automation. Tax policy has been neglected in this debate, even though it is critically important. The tax system incentivizes automation even where it is not otherwise efficient. That is because the vast majority of tax revenue is now derived from labor income, so firms avoid taxes by eliminating employees. More importantly, when a machine replaces a person, the government loses a substantial amount of tax revenue—potentially trillions of dollars a year in the aggregate. All of this is the unintended result of a system designed to tax labor rather than capital. Such a system no longer works once the labor is capital. Robots are not good taxpayers. We argue that existing tax policies must be changed. The system should be at least "neutral" as between robot and human workers, and automation should not be allowed to reduce tax revenue. This could be achieved by disallowing corporate tax deductions for automated workers, creating an "automation tax" that mirrors existing unemployment schemes, granting offsetting tax preferences for human workers, levying a corporate self-employment tax, or increasing the corporate tax rate. We argue the ideal solution may be a combination of these proposals.
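The revenue effect this abstract describes can be sketched numerically. The salary and tax rates below are invented purely for illustration; they are not drawn from the article or from any real tax code:

```python
# Hypothetical illustration of the revenue gap described in the abstract.
# All figures (salary, rates) are invented for this sketch.

def annual_tax_revenue(salary: float, income_tax_rate: float,
                       payroll_tax_rate: float) -> float:
    """Tax revenue the government collects on one employed worker."""
    return salary * income_tax_rate + salary * payroll_tax_rate

# One human worker: assume a $50,000 salary, a 20% income tax,
# and a 15% combined employer/employee payroll tax.
human_revenue = annual_tax_revenue(50_000, 0.20, 0.15)

# A machine replacing that worker draws no salary, so the labor-tax
# base disappears; the firm may even deduct the machine's cost,
# further reducing its taxable corporate income.
machine_revenue = 0.0

print(f"Revenue per human worker: ${human_revenue:,.0f}")
print(f"Revenue after automation: ${machine_revenue:,.0f}")
```

Under these assumed numbers, each automated position removes the full labor-tax take for that job, which is the per-worker version of the aggregate loss the abstract estimates in the trillions.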
An innovation revolution is on the horizon. Artificial intelligence (AI) has been generating inventive output for decades, and now the continued and exponential growth in computing power is poised to take creative machines from novelties to major drivers of economic growth. A creative singularity in which computers overtake human inventors as the primary source of new discoveries is foreseeable. In some cases, a computer’s output constitutes patentable subject matter, and the computer rather than a person meets the requirements for inventorship. As such machines become an increasingly common part of the inventive process, they may replace the standard of the person skilled in the art now used to judge nonobviousness. Creative computers require a rethinking of the criteria for inventiveness, and potentially of the entire patent system.
Artificial intelligence is part of our daily lives. Whether working as taxi drivers, financial analysts, or airport security, computers are taking over a growing number of tasks once performed by people. As this occurs, computers will also cause the injuries inevitably associated with these activities. Accidents happen, and now computer-generated accidents happen. The recent fatality caused by Tesla’s autonomous driving software is just one example in a long series of “computer-generated torts.” Yet hysteria over such injuries is misplaced. In fact, machines are, or at least have the potential to be, substantially safer than people. Self-driving cars will cause accidents, but they will cause fewer accidents than human drivers. Because automation will result in substantial safety benefits, tort law should encourage its adoption as a means of accident prevention. Under current legal frameworks, manufacturers (and retailers) of computer tortfeasors are likely strictly responsible for their harms. This article argues that where a manufacturer can show that an autonomous computer, robot, or machine is safer than a reasonable person, the manufacturer should be liable in negligence rather than strict liability. The negligence test would focus on the computer’s act instead of its design, and in a sense, it would treat a computer tortfeasor as a person rather than a product. Negligence-based liability would create a powerful incentive to automate when doing so would reduce accidents, and it would continue to reward manufacturers for improving safety. In fact, principles of harm avoidance suggest that once computers become safer than people, human tortfeasors should no longer be judged against the standard of the hypothetical reasonable person that has been employed for hundreds of years. Rather, individuals should be measured against computers. To appropriate the immortal words of Justice Holmes, we are all “hasty and awkward” compared to the reasonable computer.
Clinicians still employ a “trial-and-error” approach to optimizing treatment regimens for late-life depression (LLD). With LLD affecting a significant and growing segment of the population, and with only about half of older adults responsive to antidepressant therapy, there is an urgent need for a better treatment paradigm. Pharmacogenetic decision support tools (DSTs), which are emerging technologies that aim to provide clinically actionable information based on a patient’s genetic profile, offer a promising solution. Dozens of DSTs have entered the market in the past fifteen years, but with varying levels of empirical evidence to support their value. In this clinical review, we provide a critical analysis of the peer-reviewed literature on DSTs for major depression management. We then discuss clinical considerations for the use of these tools in treating LLD, including issues related to test interpretation, timing, and patient perspectives. There are no primary clinical trials in LLD cohorts. However, in adult populations, newer generation DSTs show promise for the treatment of major depression. Additional independent and head-to-head clinical trials are required to further validate this field.
Artificial intelligence has been generating inventive output for decades, and now the continued and exponential growth in computing power is poised to take creative machines from novelties to major drivers of economic growth. In some cases, a computer’s output constitutes patentable subject matter, and the computer rather than a person meets the requirements for inventorship. Despite this, and despite the fact that the Patent Office has already granted patents for inventions by computers, the issue of computer inventorship has never been explicitly considered by the courts, Congress, or the Patent Office. Drawing on dynamic principles of statutory interpretation and taking analogies from the copyright context, this Article argues that creative computers should be considered inventors under the Patent and Copyright Clause of the Constitution. Treating nonhumans as inventors would incentivize the creation of intellectual property by encouraging the development of creative computers. This Article also addresses a host of challenges that would result from computer inventorship, including the ownership of computer-based inventions, the displacement of human inventors, and the need for consumer protection policies. This analysis applies broadly to nonhuman creators of intellectual property, and explains why the Copyright Office came to the wrong conclusion with its Human Authorship Requirement. Finally, this Article addresses how computer inventorship provides insight into other areas of patent law. For instance, computers could replace the hypothetical skilled person that courts use to judge inventiveness. Creative computers may require a rethinking of the baseline standard for inventiveness, and potentially of the entire patent system.