Interfacing with a Hardware Security Module on Linux in C

I have to work with an HSM device to meet the security requirements of my project, but I am confused about how an HSM is interfaced with C on a Linux machine.
How does a user access the HSM's internal memory to perform different operations with it?

Every HSM vendor supports at least one cryptographic API. PKCS#11 is a particularly common choice, but there are many other options. OpenSSL, for example, supports HSMs through an engine interface.
Often the vendor will expose a proprietary API in addition to the "standard" APIs it implements. The proprietary API typically offers a greater degree of control over key security properties and key usage than is possible to express in the standard APIs.
When using an HSM, one typically issues a command to load a key from a secure store and retrieve a handle to the key object. This handle is the layer of abstraction that allows the HSM to perform the key operations securely without exposing the key material.
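To make that handle-based flow concrete, here is a minimal PKCS#11 sketch in C with error handling omitted. It assumes you already hold the module's function list (p11) obtained from the vendor's library, that pkcs11.h is a self-contained header (such as the one shipped with p11-kit), and the slot, PIN and key label are placeholders:

#include <string.h>
#include "pkcs11.h"   /* vendor-supplied or p11-kit PKCS#11 header */

/* Sign a buffer with a private key that never leaves the HSM.
 * 'p11' is the function list obtained from the vendor module. */
CK_RV sign_with_hsm(CK_FUNCTION_LIST_PTR p11, CK_SLOT_ID slot,
                    CK_BYTE *data, CK_ULONG data_len,
                    CK_BYTE *sig, CK_ULONG *sig_len)
{
    CK_SESSION_HANDLE session;
    CK_OBJECT_HANDLE key;
    CK_ULONG found = 0;

    /* Open a session and authenticate to the token (PIN is a placeholder). */
    p11->C_OpenSession(slot, CKF_SERIAL_SESSION, NULL, NULL, &session);
    p11->C_Login(session, CKU_USER, (CK_UTF8CHAR_PTR)"1234", 4);

    /* Look up the key object by label; we only ever receive a handle. */
    CK_OBJECT_CLASS cls = CKO_PRIVATE_KEY;
    CK_ATTRIBUTE tmpl[] = {
        { CKA_CLASS, &cls,          sizeof(cls) },
        { CKA_LABEL, "my-sign-key", 11          }   /* hypothetical label */
    };
    p11->C_FindObjectsInit(session, tmpl, 2);
    p11->C_FindObjects(session, &key, 1, &found);
    p11->C_FindObjectsFinal(session);

    /* Use the handle; the private key material stays inside the HSM. */
    CK_MECHANISM mech = { CKM_SHA256_RSA_PKCS, NULL, 0 };
    p11->C_SignInit(session, &mech, key);
    CK_RV rv = p11->C_Sign(session, data, data_len, sig, sig_len);

    p11->C_Logout(session);
    p11->C_CloseSession(session);
    return rv;
}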
With regards to your project, it is important that you don't simply "shove" the HSM somewhere in your solution to make it appear secure. Instead, think long and hard about the security properties of your system and how cryptography may help you defend against attacks. Once you've identified your attack vectors (and your associated cryptographic defences), then consider which cryptographic API can support your use cases. Only then should you select the best vendor from those who support that API.
In my experience, the standard APIs only suffice for simple security systems. For complex projects, it's almost always necessary to work with the proprietary API of a particular vendor. In such cases, lean heavily on the vendor for support and proof-of-concepts before settling on a product that truly meets your needs.

I know this question is a year old, but in case someone else runs across it, there is a more detailed discussion at this link:
Digital Signing using certificate and key from USB token
It includes some long-form working code that I added. You are also welcome to get my code directly at this link: https://github.com/tkil/openssl-pkcs11-samples
Good luck!

The HSM vendor should have provided you with a library. You can use this library to interact with your HSM via the PKCS#11 interface. You will need the PKCS#11 header files in your project in order to do that.
Check out this site http://www.calsoftlabs.com/whitepapers/public-key-cryptography.html to get an introduction.
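As a hedged sketch of what that looks like in code (the module path is a placeholder, and a self-contained pkcs11.h such as the one shipped with p11-kit is assumed), you load the vendor's shared library, resolve its C_GetFunctionList entry point, and initialize it:

#include <dlfcn.h>
#include <stdio.h>
#include "pkcs11.h"   /* PKCS#11 header files from the vendor or p11-kit */

int main(void)
{
    /* Path to the vendor's PKCS#11 module is a placeholder. */
    void *mod = dlopen("/usr/lib/vendor/libvendor-pkcs11.so", RTLD_NOW);
    if (!mod) { fprintf(stderr, "dlopen: %s\n", dlerror()); return 1; }

    /* Every PKCS#11 module exports C_GetFunctionList as its entry point. */
    CK_C_GetFunctionList get_list =
        (CK_C_GetFunctionList)dlsym(mod, "C_GetFunctionList");
    CK_FUNCTION_LIST_PTR p11 = NULL;
    get_list(&p11);

    p11->C_Initialize(NULL);

    /* Count the slots that currently have a token (the HSM) present. */
    CK_ULONG n = 0;
    p11->C_GetSlotList(CK_TRUE, NULL, &n);
    printf("slots with a token present: %lu\n", (unsigned long)n);

    p11->C_Finalize(NULL);
    dlclose(mod);
    return 0;
}

Build with something like cc init_hsm.c -ldl; from the returned function list you then open sessions and work with key handles as shown in the earlier answer.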

Related

In C, why do the preferred RDBMS drivers implement different APIs instead of a uniform API?

In Java, the drivers for different RDBMSs mostly implement the JDBC API.
In Python, the drivers for different RDBMSs mostly implement DB-API 2.
In C, although we have ODBC as a uniform API for different RDBMSs, people generally prefer RDBMS-specific APIs, such as those provided by libpq and the C connector (I am not sure about sqlite3 vs. its ODBC counterpart). Why do the preferred RDBMS drivers implement different APIs instead of a uniform API in C? Is there some inherent difficulty in doing so?
Thanks.
Languages like Java and Python provide a higher-level abstraction layer over databases, so that a generic interface can be used and the underlying database can be swapped out if need be. This flexibility comes at the cost of vendor-specific functionality not being exposed.
The C APIs provided by each vendor allow the use of functionality specific to each database. This means vendor lock-in but it also allows you to exploit these vendor specific features and perform vendor specific optimizations.
The Java and Python runtimes most likely use the underlying C APIs internally.
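To make the contrast concrete, here is a minimal libpq sketch in C (the connection string and query are placeholders). A vendor-specific API like this also exposes PostgreSQL-only features such as COPY and LISTEN/NOTIFY, which a generic layer like ODBC does not surface as directly:

#include <stdio.h>
#include <libpq-fe.h>   /* PostgreSQL's vendor-specific C API */

int main(void)
{
    /* Connection string is a placeholder. */
    PGconn *conn = PQconnectdb("host=localhost dbname=demo user=demo");
    if (PQstatus(conn) != CONNECTION_OK) {
        fprintf(stderr, "connection failed: %s", PQerrorMessage(conn));
        PQfinish(conn);
        return 1;
    }

    PGresult *res = PQexec(conn, "SELECT version()");
    if (PQresultStatus(res) == PGRES_TUPLES_OK && PQntuples(res) > 0)
        printf("server: %s\n", PQgetvalue(res, 0, 0));

    PQclear(res);
    PQfinish(conn);
    return 0;
}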

How to build a cloud application and keep portability intact?

Please check the answer and comments on my previous question to get a better understanding of my situation. If I use the Google Datastore on App Engine, my application will be tightly coupled and hence lose portability.
I'm working on Android and will be using a backend that resides in the cloud, so I need client-cloud communication. How do I build an application while maintaining portability? What design patterns and architectural patterns should I be using?
Should I use a broker pattern? I'm perplexed.
Google App Engine provides JPA-based interfaces for its datastore. As long as you write your code against the JPA APIs, it will be easy to port it to other datastores (Hibernate, for example, also implements JPA).
I would ensure that the vendor specific code doesn't percolate beyond a thin layer that sits just above the vendor's APIs. That would ensure that when I have to move to a different vendor, I know exactly which part of code would be impacted.
If you really want to avoid portability issues, use Google Cloud SQL instead. If you use the Datastore, then unless it is a trivial structure you will not be able to trivially port it, even if you use pure JPA/JDO, because those were really not meant for NoSQL. Google has particularities with indexes, etc.
Of course, SQL is more expensive and has size limits.
In order to maintain portability for my application, I've chosen Restlet, which offers RESTful web APIs, over Endpoints. Restlet will help me communicate between server and client.
Moreover, it will not get my application locked in to a particular vendor.

Technology for long-term archiving (LTA) of digitally signed documents

Imagine that you have thousands or millions of documents signed in the CAdES, XAdES or PAdES format. A signing certificate for an end user is typically issued for 1-3 years. After a few years the certificate will expire, the revocation data (CRLs) required for verification will no longer be available, and the original crypto algorithms will not guarantee anything after 10-20 years.
I am curious whether there is some mature, ready-to-use solution for this. I know that this can be handled by archive timestamps, but I need a real product that will automatically maintain the data required for long-term validation, add timestamps automatically, etc.
Could you recommend an application or library? Is it a standalone solution, or something that can be integrated with FileNet or a similar system?
The EU is currently trying to endorse advanced electronic signatures based on the CAdES, XAdES and PAdES standards. These were specifically designed with the goal of enabling long-term archiving and validation.
CAdES is based on CMS, XAdES on XML-DSig, and PAdES on the signatures defined in ISO 32000-1, which are themselves again based on CMS.
One open-source solution for XAdES is the Belgian eID project; you could have a look at that.
These are all standards for signatures; they do not, however, go into detail on how you would actually implement an archiving solution. That part would still be up to you.
However, that archiving part is exactly what I am looking for. It seems that the Belgian eID project mentioned above does not address it at all. (I added some clarification to my original question.)
You may find this web site helpful. It's an official site even though it points to an IP address. The site discusses your problem in detail and offers a great deal of advice on dealing with long-term electronic record storage through a standards-based approach.
The VERS standard is quite extensive; it fully supports digital signatures and covers how best to deal with expired signatures.
The standard is also being adopted by leading EDMS/ECM providers.
If I understood your question correctly, our SecureBlackbox components support the XAdES, PAdES and CAdES standards, pull the necessary revocation information (and timestamps), and embed them into the signature automatically.
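For what it's worth, the archive timestamps mentioned in the question are RFC 3161 timestamps. Below is a minimal, hedged C sketch of building a timestamp request with OpenSSL's TS API (OpenSSL 1.1+ assumed; error handling omitted; sending the request to a TSA and embedding the returned token into the CAdES/XAdES/PAdES signature are left out):

#include <stdio.h>
#include <openssl/ts.h>
#include <openssl/sha.h>
#include <openssl/x509.h>

/* Build a DER-encoded RFC 3161 timestamp request for 'data'. */
static int make_ts_request(const unsigned char *data, size_t len,
                           unsigned char **der, int *der_len)
{
    unsigned char digest[SHA256_DIGEST_LENGTH];
    SHA256(data, len, digest);

    TS_REQ *req = TS_REQ_new();
    TS_REQ_set_version(req, 1);

    /* Message imprint: SHA-256 of the signed document (or signature). */
    X509_ALGOR *algo = X509_ALGOR_new();
    X509_ALGOR_set0(algo, OBJ_nid2obj(NID_sha256), V_ASN1_NULL, NULL);
    TS_MSG_IMPRINT *imprint = TS_MSG_IMPRINT_new();
    TS_MSG_IMPRINT_set_algo(imprint, algo);
    TS_MSG_IMPRINT_set_msg(imprint, digest, sizeof digest);
    TS_REQ_set_msg_imprint(req, imprint);

    /* Ask the TSA to include its certificate in the response. */
    TS_REQ_set_cert_req(req, 1);

    *der = NULL;
    *der_len = i2d_TS_REQ(req, der);   /* caller frees *der with OPENSSL_free */

    TS_MSG_IMPRINT_free(imprint);
    X509_ALGOR_free(algo);
    TS_REQ_free(req);
    return *der_len > 0;
}

The resulting DER blob is what you POST to a TSA over HTTP with content type application/timestamp-query; a long-term archiving product repeats this periodically to refresh the archive timestamps before the previous ones weaken.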

In what language should the API be written?

We want to implement an API. We have a database located on a central server and a network of many computers.
On these computers, several local programs will be developed in the future using different programming languages: some in Java, some in Perl, C++, etc.
These local programs should be able to access the API functions and interact with the database.
So in what language should the API be written, so that it can be called from the other languages? Is there any specific architecture that should be used?
Is there any link that would provide useful information about this?
If the API is pure database access, then a REST web service is a reasonable choice. It allows a (reasonably) easy interface from almost any language, and allows you to choose whatever language you feel is best for writing the actual web service. However, in doing it this way, you're paying the cost of an extra network call per API call. If you put the web service on the same host (or local network) as the database, you can minimize the cost of the network call from the web service to the database, which mitigates the cost of the extra call to the API.
If the API has business logic in it, there are two viable approaches.
You can write the API as a library that can be used from multiple languages. C is a good choice for this because most languages can link against C libraries, but the languages you expect to use it from can have a large impact, too. For example, if you know it's always going to be used by a language hosted on the JVM, then any JVM language is probably a reasonably good choice.
Another choice is a hybrid of the two: a REST API for database access, plus a business-layer library written in multiple languages. The idea is that you have business logic on the application end, but it's simple enough that you can write a "client library" in multiple languages that knows how to call out to the REST API and then apply the business logic to the results it gets back. Assuming the business logic isn't too complex (i.e., limited to ways to merge and view the database data), this isn't a bad solution.
The benefit is that it should be relatively easy to supply one "default" library that can be used by many languages, plus other language-specific versions of the library where you have time available to implement them. For cases where figuring out which calls need to be made to the database, and how to combine the results, can be complicated, I find this to be a reasonably good solution.
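To show how small the per-language cost of the REST option above is, here is a minimal C client for such a web service using libcurl (the endpoint URL is hypothetical):

#include <stdio.h>
#include <curl/curl.h>

/* Print each chunk of the HTTP response body as it arrives. */
static size_t on_body(char *ptr, size_t size, size_t nmemb, void *userdata)
{
    (void)userdata;
    fwrite(ptr, size, nmemb, stdout);
    return size * nmemb;
}

int main(void)
{
    curl_global_init(CURL_GLOBAL_DEFAULT);
    CURL *curl = curl_easy_init();

    /* Endpoint is a placeholder for whatever the REST service exposes. */
    curl_easy_setopt(curl, CURLOPT_URL, "https://api.example.com/v1/orders/42");
    curl_easy_setopt(curl, CURLOPT_WRITEFUNCTION, on_body);

    CURLcode rc = curl_easy_perform(curl);
    if (rc != CURLE_OK)
        fprintf(stderr, "request failed: %s\n", curl_easy_strerror(rc));

    curl_easy_cleanup(curl);
    curl_global_cleanup();
    return rc == CURLE_OK ? 0 : 1;
}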
I would go with web services. It doesn't matter what language you use; as long as you have a framework for interacting with web services, you are good. Depending on your needs, you could expose a simple REST API or go all the way with SOAP/WSDL and the like.

Silverlight - protecting Content inside a network (DRM?)

I would like to set up some WMV Video Streaming, using Windows 2003's Streaming Media Server and Silverlight.
Now, unfortunately Silverlight only supports HTTP, which means that people can just download the videos. While that in itself is not a problem, I wonder what options there are to prevent them being playable outside of the network.
Of course, DRM comes to mind. Is there an easy way to get it and set it up? I do not want some complicated user scheme; it essentially boils down to "If you can reach the server (which is only on the internal network), you get a license, otherwise not".
Any experience with WMV DRM or Content Protection in that area?
What would I need on top of Windows 2003 Server and Silverlight 2?
DRM is a negative sum game. You lose money and time in implementing it that you could have spent on something useful to your users, and your content becomes less valuable to your users. It is also impossible to implement effectively. I'm not going to address any specific DRM scheme, but the core of the argument is that in order to show content to the user, the user's computer must be able to decrypt it. Therefore, the decryption code, and the decryption keys, must be present on the user's computer. Encryption can only protect data from interception and tampering between two secure endpoints. If one of the endpoints is compromised (and you are assuming this in your distrust of the user), then cryptographic techniques are useless.
Michael: you could do a few things. You could use IIS 7 and create a web playlist, which can be protected by SSL certificates to secure the stream. Additionally, Silverlight does support a no-touch (from the end user's perspective) DRM scheme we call PlayReady. It does involve having a server to issue the licenses, so that may conflict with your desire for a no/low-cost solution (but DRM solutions rarely are cheap). Those are two options, though.
In this session the baseball guy talked about making the URLs usable only once. I assume it's not a 100% solution, but it could prevent users from copy-pasting URLs.
An alternative to in-house DRM is hosted DRM.
We (EZDRM.com) offer a great low-cost solution and still provide you all the features of DRM.
