Bert Lathrop

Volume 71, Issue 2, 501-534

The relentless accumulation of private consumer information through online services has dramatically expanded the attack surface available to cyber-criminals and belligerent state actors looking to either enrich themselves or disrupt digital service operations. In response to this growing threat, and despite sharp criticism from privacy advocates, Congress passed the Cybersecurity Information Sharing Act of 2015 (CISA) with the aim of enabling private parties and the federal government to better protect themselves through improved availability of cyber threat intelligence. This intelligence is generally derived from organizations’ observations of activity on their systems and networks. CISA authorizes private entities and state, local, and tribal governments to share cyber threat intelligence with the federal government and among themselves. In exchange, participants are granted immunity from criminal and civil liability for their acts under the statute, and the federal government publishes redacted subsets of the collected intelligence.

Coincidentally, artificial intelligence (AI) has recently emerged as a technology showing great promise in automating many tasks currently performed by humans, and cybersecurity analysis is no exception. CISA, drafted concurrently with this emergence, lacks the data-sharing authorizations necessary to leverage AI’s full utility. Deep learning, the AI technology showing the most promise, requires vast amounts of data evidencing normal system and network activity, from which anomalous events associated with cyber-attacks can be differentiated. Because CISA authorizes the sharing of the requisite data for such analyses only in limited circumstances, this Note explores the opportunities AI affords cybersecurity practitioners, explains CISA’s shortcomings in enabling AI to approach its full potential in cybersecurity applications, and offers a proposal to remedy those shortcomings.