Video: AI in Computer Security

I nearly didn’t write a blog post this week. I have a full list of topics to write about, but they’re pretty heavy and sometimes it’s nice to have some down time.

Happily, this morning, a very good friend shared James Mickens' keynote at USENIX this year with me (see video). It's amazing. USENIX is the Advanced Computing Systems Association, and James Mickens is an Associate Professor at Harvard who is clearly very good at what he does.

There are two messages in the video.

1) AI suffers from domain creep.

AI (or more accurately Machine Learning) is statistical learning from data. As I have said in many talks, and written about before, this suffers from a number of problems.

AI is really good at statistical learning from data. But data alone is not a sufficient basis for good decision making: the data carries bias, and a model trained on it inherits that bias.
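This isn't from the talk, but to make the bias point concrete, here is a minimal sketch in Python on entirely synthetic, hypothetical "hiring" data. Two groups have identical skill, but the historical labels favoured one group; a plain logistic regression trained on those labels happily reproduces the unfairness.

```python
# Minimal sketch (synthetic data, hypothetical feature names): a model
# trained on biased historical labels reproduces that bias.
import numpy as np

rng = np.random.default_rng(0)
n = 5000

# Skill is identically distributed in both groups, but the historical
# labelling process penalised group 1 at every skill level.
group = rng.integers(0, 2, n)                 # protected attribute: 0 or 1
skill = rng.normal(0.0, 1.0, n)               # same distribution for both groups
p_hire = 1 / (1 + np.exp(-(2.0 * skill - 1.5 * group)))  # biased label process
hired = (rng.random(n) < p_hire).astype(float)

# Fit an ordinary logistic regression by gradient descent on (skill, group).
X = np.column_stack([np.ones(n), skill, group])
w = np.zeros(3)
for _ in range(5000):
    p = 1 / (1 + np.exp(-X @ w))
    w -= 1.0 * X.T @ (p - hired) / n

# At the same skill level, the model predicts very different hiring
# probabilities for the two groups: the bias is learned, not removed.
for g in (0, 1):
    x = np.array([1.0, 0.0, g])
    print(f"group {g}: predicted P(hire | skill=0) = {1 / (1 + np.exp(-x @ w)):.2f}")
```

Nothing in the maths "goes wrong" here; the model is doing exactly what it was asked to do, which is the point.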

According to Mickens, and I agree with his classification, Machine Learning practitioners break down into two camps:

  1. Those who do not understand how ML techniques work, and
  2. Those who don’t care that they do not understand how ML techniques work.

AI/ML outputs are not interpretable.

Mickens makes an excellent argument that connecting inscrutable systems to the internet as a data source, and then to decisions about healthcare, criminal justice, and so on, is a terrible idea.

There is nothing inherently wrong with AI/ML. What is wrong is how we choose to use it.

2) We need a more holistic view of computer security.

Clearly, this topic is not my main interest, although it is something I too have given some thought to. Mickens describes the traditional approach to computer security very clearly: analyse a system, work out what an attacker might be able to do, and work out how to prevent them from doing it.

But then he smoothly transitions to a more holistic view of the impact of technology on society.

The current value system in technology (Technological Manifest Destiny) is basically:

  1. Technology is value-neutral.
  2. New kinds of technology should be rolled out immediately, before we know what the societal impact might be.
  3. History has nothing to teach us.

Mickens uses two examples to show why combining this value system with either statistically based AI or the IoT leads to truly terrible outcomes.
