Tuesday, April 28, 2015

IoT's Security - Part 1 - It's about true privacy...

The Internet of Things raises many questions about its security and how that security should be embedded, or at least addressed. Many big names publish paper after paper on their vision of IoT security, the challenges it faces, and the various technologies or vendors that are emerging and are likely to play a role or to lead the game.

For an old-timer like me, what is striking is the assumption that because IoT is new and trendy, all related concepts need to be new as well, or at least reconsidered, including the good old principles and methods that security folks have developed over the past decades. But of course, there is simply no reason for such a theoretical rupture. Just as there is never any such thing as a new economy, because the economy is rooted in human action, there is never any such thing as a new security – at least as long as machines remain Von Neumann machines – because security is rooted in data and people, not in technology.

For a “thing” on the Internet, as for anything else, the question of security is that of ensuring that only those who are authorized to access given data actually do, nothing more and nothing less. Every word in that definition matters, however, so let us review it quickly. Many would define security by means of the three “CIA” initials: confidentiality, integrity, and availability – funny that a spying agency full of secrets should have picked those three for its own acronym…

But all three are encompassed within the concept of access: integrity is about change, and changing data presupposes that it is disclosed and available. The ‘who’ and the ‘which’ make up the core of the famous RBAC concept (Role-Based Access Control): Access is Controlled Based on the Role you are assumed to play at runtime; for instance, accounting data is only accessible to accounting people in the company. ‘Authorized’ is key and twofold: it assumes someone grants you some role(s) consistent with your job position, and it also assumes the machine, the “thing”, is coded so that it enforces fully, but only, the access rights entitled to those role(s). More subtly, ‘only … actually do’ brings in the negative dimension of security: the need that no one else has access while I truly do.
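To make the RBAC idea concrete, here is a minimal sketch in Python – all role names, resources and users are hypothetical, purely for illustration: a policy lists which roles may perform which actions on which data, roles are granted to users, and an access check succeeds only when one of the user's granted roles is entitled to the requested action.

```python
# Minimal, illustrative RBAC sketch. All names (roles, resources, users)
# are hypothetical examples, not part of any real system.

# The security model: which roles may perform which actions on which data.
POLICY = {
    ("accounting_records", "read"):  {"accountant", "auditor"},
    ("accounting_records", "write"): {"accountant"},
}

# Roles granted to users "in consistency with their job position".
GRANTS = {
    "alice": {"accountant"},
    "bob":   {"engineer"},
}

def is_authorized(user: str, resource: str, action: str) -> bool:
    """Allow the action only if one of the user's roles is entitled to it."""
    allowed_roles = POLICY.get((resource, action), set())
    return bool(GRANTS.get(user, set()) & allowed_roles)

print(is_authorized("alice", "accounting_records", "read"))   # True
print(is_authorized("bob", "accounting_records", "read"))     # False: his role grants nothing here
```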

Finally, all this logic is expected to be ‘ensured’, which means that from the theoretical RBAC model down to the device, all due actions are taken during the design and build process so that the final “thing” does not run a different model. In other words, controls are enacted all along the development phases to gain – reasonable – assurance that no bug and no undocumented feature ends up in the “thing” that could defeat or circumvent the securing RBAC model – usually called a security model.

Such principles emerged long ago; a famous example of a historic standard developing these concepts, together with the concepts of security functions (identification, authentication, access control, audit, accountability) and security assurance, is the ITSEC of 1991 – which evolved into the still-alive Common Criteria. They are still the root of all security controls nowadays, and I would be very surprised if IoT had anything so specific as to turn this rule upside down.

The attentive reader will have noticed that I have not mentioned privacy so far. Though rarely seen that way, privacy is in fact a generalization of security, where the main difference lies not so much in the controls as in the governance and the actors. The definition I gave of security, with RBAC at its core, is well suited to people within a company or an organization. Indeed, in such a closed environment, everyone has a role according to their job. The data belongs to the company, and someone can grant or authorize you a role on behalf of the company. Privacy is different in that the data belongs to me, and I want – or would want – to be the one granting, or not, the roles or access rights. But once I am fine with the RBAC model, privacy is nothing but security: the controls and intricacies are the same.
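To make the difference tangible – the only real change being who grants the roles – here is a variant of the same kind of sketch, again in Python with purely hypothetical names, where each piece of data carries its owner and only that owner may grant a role on it:

```python
# Variant of the RBAC sketch where the data owner, not a company administrator,
# grants the roles. Names are hypothetical, for illustration only.

class PrivateData:
    def __init__(self, owner: str):
        self.owner = owner
        self.grants = {}  # user -> set of roles granted by the owner

    def grant(self, grantor: str, user: str, role: str) -> None:
        """Only the owner of the data may grant a role on it."""
        if grantor != self.owner:
            raise PermissionError("only the data owner can grant access rights")
        self.grants.setdefault(user, set()).add(role)

    def is_authorized(self, user: str, required_role: str) -> bool:
        """Same access check as before; only the grantor has changed."""
        return required_role in self.grants.get(user, set())

heart_rate = PrivateData(owner="alice")            # data produced by Alice's wearable
heart_rate.grant("alice", "dr_smith", "reader")    # Alice, not the vendor, decides
print(heart_rate.is_authorized("dr_smith", "reader"))          # True
print(heart_rate.is_authorized("vendor_analytics", "reader"))  # False: never granted
```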

Today, in reality, this requirement that individuals be in a position to grant RBAC rights on privacy data, which is hardly ever implemented, has been balanced by tons of regulations, which all try to provide static authorization models as a substitute for actual, dynamic authorization by citizens. In other words, privacy is security where the rules come from laws instead of from company policies. Many privacy issues have their roots in this inability to properly empower data owners to grant RBAC rights themselves.

This angle on privacy versus ‘traditional’ security is a key step in carrying our reasoning forward to tomorrow’s IoT. Consider the BYOD issue: today, should my computer be sourced only from my company, or should my company accept mine – provided it is secure? The point regarding RBAC is: how can a thing I own be made to comply with an RBAC model that my company requires? There are two cases for my computer. Either it is owned by my company, which entitles me to use it, or it is mine and I must agree to abide by the company’s rules – at least while working. It will be the same for all of the IoT: either a device is mine or it comes to me from work – or from some form of work, e.g. a non-profit I have contracted with. My point is to highlight the dichotomy between security (company focus) and privacy (individual focus).

Please note that I have not considered the BYOD question from the technical perspective. That is, at this point, the question is not that of the possible vulnerabilities that come with BYOD, with your own device. There can be vulnerabilities anywhere, and I will cover that topic in due time. What I would like to make clear at this point is that the security model that BYOD – and thus IoT – implies cannot be ignored or dismissed if the IoT is ever to be secure. Our view so far can be summed up as: IoT security is a privacy issue where the data to be protected must be made explicit ahead of runtime, the user must be empowered to grant access rights to that data, and those rules must be assured to be correctly implemented, introducing neither vulnerabilities nor hidden backdoors.
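As a rough illustration of what ‘data made explicit ahead of runtime’ could look like, here is a hypothetical device manifest – nothing standard, just a sketch of the idea in Python: the device declares the data it handles before it ever runs, access is denied by default, and the grants are filled in by the owner, not the vendor.

```python
# Hypothetical IoT device manifest: the data the device handles is declared
# ahead of runtime, and access is deny-by-default until the owner grants it.

DEVICE_MANIFEST = {
    "device": "connected-thermostat",              # illustrative device
    "data": {
        "indoor_temperature": {"grants": {}},      # owner decides who may read it
        "presence_schedule":  {"grants": {}},      # nothing is shared until granted
    },
}

def can_read(manifest: dict, user: str, data_name: str) -> bool:
    """Deny by default: access exists only if the owner explicitly granted it."""
    entry = manifest["data"].get(data_name)
    return bool(entry and "read" in entry["grants"].get(user, set()))

# The owner (not the vendor) grants the energy provider read access to one item.
DEVICE_MANIFEST["data"]["indoor_temperature"]["grants"]["energy_provider"] = {"read"}
print(can_read(DEVICE_MANIFEST, "energy_provider", "indoor_temperature"))  # True
print(can_read(DEVICE_MANIFEST, "energy_provider", "presence_schedule"))   # False
```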

This may seem pretty basic or obvious, but it actually raises significant challenges, because it means that the companies designing IoT devices have to build in security features that rely on a security model which is not theirs to control end to end – unlike, say, access to a SaaS application, where the provider has end-to-end control of the security model and the user can only abide by it.

In a future post, let us try to clarify what this means for the architecture of the security model that the IoT implies…

Tuesday, April 21, 2015

So-and-so secures my coffee

This morning at the RSA Conference I am attending the talk by the head of Intel Security, ex-McAfee, who bases his speech on promoting big-data statistics as the next paradigm, the next nirvana of cybersecurity.
He draws a comparison with baseball, where some ten years ago the Oakland Athletics made a name for themselves and nearly won the championship thanks to a completely new use of statistics in their sport.
His thesis, then, is to push the idea that we should hope for similar progress in our own trade. It reminds me of all those economists who think the economy and the world can be put into numbers. In both cases they forget to ask whether the field they are tackling lends itself to statistics. And in both cases the answer is no.
Granted, one can hope for results in attack detection. And even then, one needs some idea of the attack patterns to search with any relevance. That is hardly statistics; it is searching through masses of data. Statistics is only a tool, it is not the solution.
But that is not the heart of the matter. Even if big data can help with detection, it can do nothing to reduce the problems upstream.
People still forget that attackers can only get through a piece of software when it has bugs, and that once a bug is found, its exploitation is systematic. In other words, the domain where the real challenge of security lies – namely the development of secure applications and systems – gives no purchase to statistics. So announcing statistics as the next age of cybersecurity is nothing but a ridiculous joke.