A very good post @ Good Morning Silicon Valley about privacy and security policies.
The second point is particularly interesting: it concerns a recently disclosed project to automatically assign every person entering the US a score, rating that person as a terrorist threat.
The score is based on a list of factors such as the “analysis of their travel records and other data, including items such as where they are from, how they paid for tickets, their motor vehicle records, past one-way travel, seating preference and what kind of meal they ordered”.
I won’t go into the privacy issues, which are discussed in the original post. Anyway, this looks to me pretty much like the “ideal” machine learning scenario.
I believe they trained the system on a set of already categorized examples (yes, people… both bad and good), learning a classification function that returns your very own score as a threat.
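For the curious, the kind of supervised learning I'm guessing at can be sketched in a few lines. Everything below is invented for illustration: the features (paid cash, one-way ticket, no meal preference) and the toy labeled data are my own made-up stand-ins for whatever the real system uses, and the model is a plain logistic regression trained by gradient descent.

```python
import math

# Made-up training data: each "traveler" is a feature vector
# [paid_cash, one_way_ticket, no_meal_preference] (1 = yes, 0 = no),
# labeled 1 (threat) or 0 (not a threat). Purely illustrative.
examples = [
    ([1, 1, 1], 1),
    ([1, 1, 0], 1),
    ([0, 1, 1], 1),
    ([0, 0, 0], 0),
    ([1, 0, 0], 0),
    ([0, 0, 1], 0),
    ([0, 1, 0], 0),
    ([0, 0, 0], 0),
]

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train(data, epochs=1000, lr=0.5):
    """Fit a logistic-regression scorer with plain gradient descent."""
    n = len(data[0][0])
    w, b = [0.0] * n, 0.0
    for _ in range(epochs):
        for x, y in data:
            p = sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)
            err = p - y  # gradient of the log-loss w.r.t. the logit
            w = [wi - lr * err * xi for wi, xi in zip(w, x)]
            b -= lr * err
    return w, b

def score(w, b, x):
    """Return a 'threat score' in [0, 1] for a new traveler."""
    return sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)

w, b = train(examples)
print(score(w, b, [1, 1, 1]))  # cash + one-way + no meal: high score
print(score(w, b, [0, 0, 0]))  # none of the flags: low score
```

The output of `score` is exactly the sort of number-between-0-and-1 that could be thresholded into “flag this person” or “wave them through”.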
This is very interesting from an engineering perspective, although we all know that in such problems the error rate is usually non-zero, so the question arises quite naturally: what if…?
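To make that “what if…?” concrete, here's a back-of-the-envelope calculation with numbers I'm inventing purely for illustration: even a classifier that is right 99% of the time flags almost exclusively innocent people when actual threats are very rare, by a straightforward application of Bayes' rule.

```python
# Invented figures for illustration only:
prior = 1 / 1_000_000  # P(threat): say 1 traveler in a million
tpr = 0.99             # P(flagged | threat): 99% true positive rate
fpr = 0.01             # P(flagged | not threat): 1% false positive rate

# Bayes' rule: P(threat | flagged)
p_flagged = tpr * prior + fpr * (1 - prior)
p_threat_given_flag = tpr * prior / p_flagged
print(f"{p_threat_given_flag:.4%}")  # a tiny fraction of flags are real threats
```

Under these assumptions, well under 0.1% of flagged travelers would be actual threats; the other 99.9%+ are false alarms. That's the base-rate problem lurking behind any “narrow error margin”.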
Hopefully they allowed for a very narrow error margin.