Swift (Linux): Extract CMS/PKCS#7 Certs and Validate Container Signature?

I am writing a set of services in Swift 4 that will run on Linux. One of the things I need to do is receive a payload that is digitally signed using the Cryptographic Message Syntax (CMS) format, extract the certificate used to sign it, and then validate the signature. I know that Swift on Linux doesn't contain a Security or CommonCrypto framework for this sort of thing, so I've linked in OpenSSL to try and help with this. I'm about 2 years removed from my C/C++ programming days, so I readily admit I'm in over my head on this portion of the code.
I have 2 simple classes to act as wrappers for OpenSSL BIO and PKCS7 data structures. They look like this:
import Foundation
import OpenSSL
public final class BIOWrapper {
public var bio = BIO_new(BIO_s_mem())
public init(data: Data) {
data.withUnsafeBytes { pointer -> Void in
BIO_write(self.bio, pointer, Int32(data.count))
}
}
public init() {}
deinit {
BIO_free(self.bio)
}
}
public final class PKCS7Wrapper {
public var pkcs7: UnsafeMutablePointer<PKCS7>
public init(pkcs7: UnsafeMutablePointer<PKCS7>) {
self.pkcs7 = pkcs7
}
deinit {
PKCS7_free(self.pkcs7)
}
}
I am able to successfully extract the PKCS#7 container data and validate that the data type code value is NID_pkcs7_signed using this code:
let reqData = Data(bytes: reqBytes)
guard reqData.count > 0 else {
print("Empty request body")
return nil
}
let bioWrapper = BIOWrapper(data: reqData)
guard let container = d2i_PKCS7_bio(bioWrapper.bio, nil) else {
print("No container")
return nil
}
let pkcs7Wrapper = PKCS7Wrapper(pkcs7: container)
let dataTypeCode = OBJ_obj2nid((pkcs7Wrapper.pkcs7.pointee.d.sign).pointee.contents.pointee.type)
print("dataTypeCode : \(dataTypeCode)")
if dataTypeCode == NID_pkcs7_data {
print("GOT DATA!")
} else {
print("Didn't get data")
return nil
}
let pkcs7SignedTypeCode = OBJ_obj2nid(pkcs7Wrapper.pkcs7.pointee.type)
if pkcs7SignedTypeCode == NID_pkcs7_signed {
print("Container is signed")
}
However, I've now reached a point where I'm stuck. How can I obtain the X.509 certificate data from the PKCS#7 payload? I can see that the pkcs7Wrapper.pkcs7.pointee.d.sign.pointee.cert data structure should contain the certificate chain data. Its data type is UnsafeMutablePointer<stack_st_x509> and I think I can figure out the code to use OpenSSL's PKCS7_verify method once I get the X.509 certificate data in memory. I just don't know how to do THAT part.
I found this resource that talks about validating receipts on OSX/iOS that touches on a lot of the same issues. They obtain the X.509 certificate from the file system and pass the data into the PKCS7_verify method. I just need to know how to get the certificate data from the PKCS#7 container to pass in.
Can anyone help me with this? I recognize that calling C from Swift is not ideal, but in the absence of a good security/cryptography framework for Swift I'm not aware of any other options.

The core part of the answer is in the code you linked:
let store = X509_STORE_new()
X509_STORE_add_cert(store, appleRootX509)
OpenSSL_add_all_digests()
let result = PKCS7_verify(receiptPKCS7, nil, store, nil, nil, 0)
if result != 1 {
log.atLevelDebug(id: 0, source: "Main", message: "Receipt signature verification failed")
exit(errorCode)
}
What you seem to be missing is the fact that you don't have to extract the X509 certificate from the PKCS7 data yourself. The PKCS7_verify function will do it as part of verification:
An attempt is made to locate all the signer's certificates, first looking in the certs parameter (if it is not NULL) and then looking in any certificates contained in the p7 structure itself. If any signer's certificates cannot be located the operation fails.
Therefore the only certificate you need to load yourself is the root certificate which you have observed they load from the file system in the linked code.
If you still really need a Swift solution to extract the certificate out of the PKCS7 data for some reason, you will have to build an ASN.1 parser for PKCS7. Not sure if this is readily available for Swift, this simple code is what a quick search yielded, and this is a nice description of the PKCS7 data.
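If all you need is to inspect or dump those embedded certificates, the openssl command line can also unpack them without any Swift code. This is only a sketch; the file names are hypothetical, and the first two commands merely fabricate a sample certs-only bundle to operate on:

```shell
# Fabricate a sample PKCS#7 bundle from a throwaway self-signed cert
# (a stand-in for the signed payload the service would receive):
openssl req -x509 -newkey rsa:2048 -nodes -keyout key.pem -out cert.pem \
        -subj "/CN=example-signer" -days 1
openssl crl2pkcs7 -nocrl -certfile cert.pem -outform DER -out payload.p7b

# Dump every certificate embedded in the SignedData structure as PEM;
# the output can be read back with PEM_read_bio_X509 or handed to
# PKCS7_verify as a trust anchor:
openssl pkcs7 -inform DER -in payload.p7b -print_certs -out embedded-certs.pem
grep "subject" embedded-certs.pem
```

The same extraction could of course be done in-process with the stack API (`sk_X509_num`/`sk_X509_value` over `p7->d.sign->cert`), but the CLI is handy for checking what is actually inside the container.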

Related

How use openssl to verify a certificate in PEM with a local trust store?

I have a certificate in PEM format. Let's say it is a certificate that I copied from google.com.
So, the chain is:
Google Trust Services-GlobalSign Root CA-R2
->Google Internet Authority G3
-->*.google.com
Suppose that I have the certificate for *.google.com and I want a C program to verify this certificate against my local trust store in Linux, let's say /etc/ssl/certs.
I need to do it offline, without connecting to the server. What should I do?
The overall OpenSSL documentation around this topic is rather limited and has broken links all over the place, so my approach might not be the only or the best one. As far as I can see, verifying a certificate (chain) is done with the following steps, unrolled in reverse order because I think that gives a better understanding. For the resulting code, see the end of this answer. All code has error checking omitted for the sake of brevity. Also, the loading of Certificate Revocation Lists (CRLs) is not explained; I think that is beyond the scope of your question.
The actual verification function
The functionality to verify a certificate (chain) is provided by the OpenSSL function X509_verify_cert(). A return value of 1 indicates successful verification, 0 indicates no success. As you can see in the documentation, the function only requires one parameter of the type X509_STORE_CTX, which is a structure holding the "context" (a rather vague and overused term in OpenSSL, IMO) of the collection of X509 certificates involved.
Setting up the certificate store context
The certificate store context contains information about trusted certificates, untrusted intermediate certificates and the certificate to be verified. It is constructed and initialized as follows:
store_ctx = X509_STORE_CTX_new();
X509_STORE_CTX_init(store_ctx, store, cert, intermediates)
The store parameter will be used to contain information about the trusted certificates, the cert parameter contains the certificate to be verified, and the intermediates parameter is a stack of untrusted intermediate certificates.
The store parameter
The X509_STORE type is able to contain a set of X509 certificates, and for the purpose of verifying a certificate it needs to be provided with information about trusted certificates. Since you indicated that you have trusted certificates in /etc/ssl/certs, this can be done as follows:
store = X509_STORE_new();
lookup = X509_STORE_add_lookup(store, X509_LOOKUP_hash_dir());
X509_LOOKUP_add_dir(lookup, "/etc/ssl/certs", X509_FILETYPE_PEM);
This assumes that your local trust store is set up properly.
The cert parameter
This parameter contains the actual certificate to be verified. It can be loaded from a file in several ways, one approach is as follows:
bio_in = BIO_new_file(certFileName, "r");
result = PEM_read_bio_X509(bio_in, NULL, NULL, NULL);
BIO_free(bio_in);
The intermediates parameter
OpenSSL provides a stack API to handle collections of objects. The intermediates parameter is a stack of X509 objects that contains the intermediate certificates between your certificate to be tested and your trusted certificate. In pseudo code, it can be filled as follows:
intermediates = sk_X509_new_null();
for (filename in certFilenames) do {
icert = readCert(filename);
sk_X509_push(intermediates, icert);
}
This concludes the explanation; it should give you all you need to verify the chain.
About the certificate at the end of the downloaded chain
The certificate at the end of the downloaded chain is typically contained in your local trust store. Some experiments show that you can actually feed it into the verify function as if it is an untrusted intermediate or you can omit it. Both seemed to end in a properly verified chain.
A code example
Finally :-)
#include <openssl/x509.h>
#include <openssl/x509_vfy.h>
#include <openssl/pem.h>
const char *trustedCertsPath = "/etc/ssl/certs";
int main(
int argc,
char **argv)
{
X509 *cert = NULL;
X509 *icert = NULL;
STACK_OF(X509) *intermediates = NULL;
X509_STORE *store = NULL;
X509_LOOKUP *lookup = NULL;
X509_STORE_CTX *store_ctx = NULL;
BIO *bio_in = NULL;
int currentArg = 1;
int result = 0;
store = X509_STORE_new();
lookup = X509_STORE_add_lookup(store, X509_LOOKUP_hash_dir());
X509_LOOKUP_add_dir(lookup, trustedCertsPath, X509_FILETYPE_PEM);
/* Certificate to be checked */
bio_in = BIO_new_file(argv[currentArg++], "r");
cert = PEM_read_bio_X509(bio_in, NULL, NULL, NULL);
BIO_free(bio_in);
/* Stack of untrusted intermediate certificates */
intermediates = sk_X509_new_null();
while (currentArg < argc) {
bio_in = BIO_new_file(argv[currentArg++], "r");
icert = PEM_read_bio_X509(bio_in, NULL, NULL, NULL);
BIO_free(bio_in);
sk_X509_push(intermediates, icert);
}
store_ctx = X509_STORE_CTX_new();
X509_STORE_CTX_init(store_ctx, store, cert, intermediates);
result = X509_verify_cert(store_ctx);
printf("Result from X509_verify_cert is %d\n", result);
sk_X509_pop_free(intermediates, X509_free);
X509_STORE_CTX_cleanup(store_ctx);
X509_STORE_CTX_free(store_ctx);
X509_STORE_free(store);
}
You can build and run it as follows (where the .pem arguments are the names of the files containing your certificate and intermediates in PEM format):
$ gcc main.c $(pkg-config openssl --libs) -o verify -Wall
$ ./verify \*.google.com.pem Google\ Internet\ Authority\ G3.pem
Result from X509_verify_cert is 1
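The same check the C program performs can be reproduced with the openssl CLI, which is handy for sanity-testing your trust store setup. A sketch, using a throwaway root CA and a leaf certificate signed by it as stand-ins for the Google chain:

```shell
# Fabricate a demo chain: a root CA and a leaf it signs.
openssl req -x509 -newkey rsa:2048 -nodes -keyout ca.key -out ca.pem \
        -subj "/CN=Demo Root CA" -days 1
openssl req -newkey rsa:2048 -nodes -keyout leaf.key -out leaf.csr \
        -subj "/CN=demo-leaf"
openssl x509 -req -in leaf.csr -CA ca.pem -CAkey ca.key -CAcreateserial \
        -out leaf.pem -days 1

# The CLI analogue of the C program: trusted roots come from -CAfile or
# -CApath (what the code loads via X509_LOOKUP_hash_dir), and untrusted
# intermediates would be passed with -untrusted.
openssl verify -CAfile ca.pem leaf.pem
# expected: "leaf.pem: OK"
```

Against the real trust store and chain from the question, the equivalent would be `openssl verify -CApath /etc/ssl/certs -untrusted intermediate.pem leaf.pem`.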

meaning of parameters in parse method OWLAPI (building an AST)

I was looking for a good parser for OWL ontologies - initially in Python since I have very limited experience with Java. It seems that OWLAPI is the best choice as far as I can tell, and well, it is Java.
So, I am trying to parse an .owl file and build the AST from it. I downloaded owlapi and I'm having problems with it since it doesn't seem to have much in terms of documentation.
My very basic question is what the first two parameters of, say, OWLXMLParser() stand for:
- document source: Is this the .owl file read as a stream (in getDocument below)?
- root ontology: what goes here? initially I thought that this is where the .owl file goes, seems not to be the case.
Does the parse method construct the AST or am I barking up the wrong tree?
I'm pasting some of my attempts below - there are more of them, but I'm trying to be less verbose :)
[The error I'm getting is this - if anyone cares - although the question is more fundamental:
java.lang.NullPointerException: stream cannot be null
at org.semanticweb.owlapi.util.OWLAPIPreconditions.checkNotNull(OWLAPIPreconditions.java:102)
at org.semanticweb.owlapi.io.StreamDocumentSourceBase.(StreamDocumentSourceBase.java:107)
at org.semanticweb.owlapi.io.StreamDocumentSource.(StreamDocumentSource.java:35)
at testontology.testparsers.OntologyParser.getDocument(App.java:72)
at testontology.testparsers.OntologyParser.test(App.java:77)
at testontology.testparsers.App.main(App.java:58)]
Thanks a lot for your help.
public class App
{
public static void main( String[] args )
{
OntologyParser o = new OntologyParser();
try {
OWLDocumentFormat p = o.test();
} catch (Exception e) {
e.printStackTrace();
}
}
}
class OntologyParser {
private OWLOntology rootOntology;
private OWLOntologyManager manager;
private OWLOntologyDocumentSource getDocument() {
System.out.println("access resource stream");
return new StreamDocumentSource(getClass().getResourceAsStream(
"/home/mmarines/Desktop/WORK/mooly/smart-cities/data/test.owl"));
}
public OWLDocumentFormat test() throws Exception {
OWLOntologyDocumentSource documentSource = getDocument();
OWLOntologyManager manager = OWLManager.createOWLOntologyManager();
OWLOntology rootOntology = manager.loadOntologyFromOntologyDocument (new FileDocumentSource(new File("/home/mmarines/Desktop/WORK/mooly/smart-cities/data/test.owl")));
OWLDocumentFormat doc = parseOnto(documentSource, rootOntology);
return doc;
}
private OWLDocumentFormat parseOnto(
@Nonnull OWLOntologyDocumentSource initialDocumentSource,
@Nonnull OWLOntology initialOntology) throws IOException {
OWLParser initialParser = new OWLXMLParser();
OWLOntologyLoaderConfiguration config = new OntologyConfigurator().buildLoaderConfiguration();
//// option 1:
//final OWLOntologyManager managerr = new OWLOntologyManagerImpl(new OWLDataFactoryImpl(), new ReentrantReadWriteLock(true));
//final IRI iri = IRI.create("testasdf");
//final IRI version = IRI.create("0.0.1");
//OWLOntologyDocumentSource source = new FileDocumentSource(new File("/home/mmarines/Desktop/WORK/mooly/smart-cities/data/test.owl"));
//final OWLOntology onto = new OWLOntologyImpl(managerr, new OWLOntologyID(iri,version));
//return initialParser.parse(initialDocumentSource, onto, config);
////
//option 2:
return initialParser.parse(initialDocumentSource, initialOntology, config);
}
The owlapi parsers are designed for use by the OWLOntologyManager implementations, which are managed (unless you're writing a new owlapi implementation) by the OWLManager singleton. There are plenty of examples of how to use that class in the wiki pages.
All parsers included in the owlapi distribution are meant to create OWLAxiom instances in an OWLOntology, not to create an AST of an OWL file - the syntactic shape of the files depends on the specific format, on the preferences of the writer, and so on, while the purpose of the API is to provide ontology-manipulation functionality to the caller. The details of the output format can be tweaked, but exposing them to the caller is not part of the main design.

javamail throws java.io.UnsupportedEncodingException: unknown-8bit

There are some emails that I'm trying to read using the JavaMail lib. When an email contains the MIME header (Content-Type: text/plain; charset="unknown-8bit"), I get this error: java.io.UnsupportedEncodingException: unknown-8bit
Any ideas why is this happening?
Because "unknown-8bit" is not a known charset name. This is explained in the JavaMail FAQ, along with alternatives for handling this problem. I've copied the answer here but note that this may become out of date. Please be sure to search the JavaMail FAQ for any other JavaMail problems you might have.
Q: Why do I get the UnsupportedEncodingException when I invoke getContent() on a bodypart that contains text data?
A: Textual bodyparts (i.e., bodyparts whose type is "text/plain", "text/html", or "text/xml") return Unicode String objects when getContent() is used. Typically, such bodyparts internally hold their textual data in some non Unicode charset. JavaMail (through the corresponding DataContentHandler) attempts to convert that data into a Unicode string. The underlying JDK's charset converters are used to do this. If the JDK does not support a particular charset, then the UnsupportedEncodingException is thrown. In this case, you can use the getInputStream() method to retrieve the content as a stream of bytes. For example:
String s;
if (part.isMimeType("text/plain")) {
try {
s = part.getContent();
} catch (UnsupportedEncodingException uex) {
InputStream is = part.getInputStream();
/*
* Read the input stream into a byte array.
* Choose a charset in some heuristic manner, use
* that charset in the java.lang.String constructor
* to convert the byte array into a String.
*/
s = convert_to_string(is);
} catch (Exception ex) {
// Handle other exceptions appropriately
}
}
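The convert_to_string helper in the FAQ snippet is left abstract. A minimal sketch in plain JDK code (assuming ISO-8859-1 as the fallback, a charset in which every byte value is a valid character, so decoding cannot fail) might look like:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.nio.charset.StandardCharsets;

public class FallbackDecoder {
    // Read the whole stream into a byte array, then decode it with
    // ISO-8859-1: every byte maps to a character there, so this never
    // throws for unknown input, though non-Latin-1 text will be mangled.
    static String convertToString(InputStream is) throws IOException {
        ByteArrayOutputStream buf = new ByteArrayOutputStream();
        byte[] chunk = new byte[4096];
        int n;
        while ((n = is.read(chunk)) != -1) {
            buf.write(chunk, 0, n);
        }
        return new String(buf.toByteArray(), StandardCharsets.ISO_8859_1);
    }

    public static void main(String[] args) throws IOException {
        byte[] raw = {72, 101, 108, 108, 111, (byte) 0xE9}; // "Hello" plus 0xE9 (Latin-1 e-acute)
        System.out.println(convertToString(new ByteArrayInputStream(raw)));
    }
}
```

A smarter heuristic could first try UTF-8 with a CharsetDecoder and only fall back to ISO-8859-1 on a decoding error, but the one-charset fallback above is the simplest safe choice.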
There are some commonly used charsets that the JDK does not yet support. You can find support for some of these additional charsets in the JCharset package at http://www.freeutils.net/source/jcharset/.
You can also add an alias for an existing charset already supported by the JDK so that it will be known by an additional name. You can create a charset provider for the "bad" charset name that simply redirects to an existing charset provider; see the following code. Create an appropriate CharsetProvider subclass and include it along with the META-INF/services file and the JDK will find it. Obviously you could get significantly more clever and redirect all unknown charsets to "us-ascii", for instance.
==> UnknownCharsetProvider.java <==
import java.nio.charset.*;
import java.nio.charset.spi.*;
import java.util.*;
public class UnknownCharsetProvider extends CharsetProvider {
private static final String badCharset = "x-unknown";
private static final String goodCharset = "iso-8859-1";
public Charset charsetForName(String charset) {
if (charset.equalsIgnoreCase(badCharset))
return Charset.forName(goodCharset);
return null;
}
public Iterator<Charset> charsets() {
return Collections.emptyIterator();
}
}
==> META-INF/services/java.nio.charset.spi.CharsetProvider <==
UnknownCharsetProvider

Encrypting db password in application.conf for Play 2.0/anorm

I don't want to put unencrypted passwords in the application config file.
This question: Encrypting db password in application.conf has a great solution for the problem but it works only for Play 1.
Does anybody know a solution that works for Play 2.0? I am using anorm in Scala version of Play 2.0.2.
All efforts are pointless. We can store hashed passwords in a database because humans can retain passwords in their brains, and their brains are not readable; hashing is one-way, so the stored value never needs to be reversed.
The thing you are talking about is only possible with symmetric encryption: the program has the key at runtime and uses this key to decrypt the db password. But what's the point of storing the db password encrypted with a key while still having that key publicly available? (This is true for both Java sources and compiled classes.) A chain is only as strong as its weakest link.
When a machine has to connect to a db, it needs a password: we store this password in plain text because the program must use it as is, and no human input is required. All we can do to improve security is restrict access to this plain-text file, perhaps protecting it with a password stored only in the admin's mind (BTW, more likely the admin will keep all of their passwords in a database, maybe with a master password). Note that things don't change if you use the mentioned Play plugin.
The only other thing that comes to my mind is a Play app which only connects to the db when the admin inputs the db password (but really this is only a thinking exercise)
I know it's a bit late, but there are no newer discussions about this problem, so I want to share a current solution (Play 2.5.x). As suggested in the documentation, it is now possible to override the GuiceApplicationLoader and configure the GuiceApplicationBuilder to pre-process the initial configuration.
In a new class modules/ApplicationLoaderConfig.scala:
import javax.crypto.Cipher
import javax.crypto.spec.SecretKeySpec
import javax.xml.bind.DatatypeConverter
import play.api.inject.guice._
import play.api.{ApplicationLoader, Configuration}
class ApplicationLoaderConfig extends GuiceApplicationLoader() {
override def builder(context: ApplicationLoader.Context): GuiceApplicationBuilder = {
// Decrypt secrets
val decryptedConfig = context.initialConfiguration ++
Configuration("config.to.decrypt.1" -> decryptDES(context.initialConfiguration.getString("config.to.decrypt.1").get)) ++
Configuration("config.to.decrypt.2" -> decryptDES(context.initialConfiguration.getString("config.to.decrypt.2").get))
initialBuilder
.in(context.environment)
.loadConfig(decryptedConfig)
.overrides(overrides(context): _*)
}
private def decryptDES(secret: String): String = {
val key = "12345678"
val skeySpec = new SecretKeySpec(key.getBytes("UTF-8"), "DES")
val cipher = Cipher.getInstance("DES/ECB/PKCS5Padding")
cipher.init(Cipher.DECRYPT_MODE, skeySpec)
new String(cipher.doFinal(DatatypeConverter.parseBase64Binary(secret)))
}
}
Also add to application.conf:
play.application.loader = "modules.ApplicationLoaderConfig"
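For completeness, a ciphertext that the decryptDES method above should be able to decode can be produced with the openssl CLI. This is a sketch under the same assumptions as the Scala code: the hard-coded 8-byte key "12345678" (passed to -K as hex), single DES in ECB mode with PKCS#5 padding, Base64-encoded. As the accepted answer points out, this is obfuscation rather than real protection, since the key ships with the app.

```shell
# Encrypt a database password for application.conf. The hex key
# 3132333435363738 is just the ASCII bytes of "12345678", matching
# the SecretKeySpec in decryptDES. -a Base64-encodes the result.
printf '%s' 'my-db-password' | openssl enc -des-ecb -K 3132333435363738 -a

# Sanity check: the same command with -d reverses it.
printf '%s' 'my-db-password' | openssl enc -des-ecb -K 3132333435363738 -a \
    | openssl enc -des-ecb -d -K 3132333435363738 -a
```

The Base64 string printed by the first command is what you would paste as the value of config.to.decrypt.1 in application.conf.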
Thanks to discussion with Raffaele and following my own investigation of the code, it seems that Play 2.0 does not allow you to encrypt DB passwords.
If I missed something please let me know.
EDIT: One can work around the problem by using custom database driver in the following manner:
import java.sql.Driver
import java.util.Properties

// Just redirect everything to the delegate
class DelegatingDriver(delegate: Driver) extends Driver
{
def connect(url: String, info: Properties) = delegate.connect(url, info)
def acceptsURL(url: String) = delegate.acceptsURL(url)
def getPropertyInfo(url: String, info: Properties) = delegate.getPropertyInfo(url, info)
def getMajorVersion = delegate.getMajorVersion
def getMinorVersion = delegate.getMinorVersion
def jdbcCompliant() = delegate.jdbcCompliant()
def getParentLogger = delegate.getParentLogger // required by JDBC 4.1+ (Java 7)
}
// Replace password in properties with the decrypted one
class MyDecryptingDriver extends DelegatingDriver(Class.forName("<my.original.Driver>").newInstance().asInstanceOf[Driver])
{
override def connect(url: String, info: Properties)= {
// copy Properties
val overriddenProperties= clone(info)
// override password property with the decrypted value
Option(info.getProperty("password")).foreach(value => overriddenProperties.setProperty("password", decryptPassword(value)))
super.connect(url, overriddenProperties)
}
def clone(orig: Properties)= {
val result= new Properties()
val names = orig.propertyNames()
while (names.hasMoreElements) {
val pName = names.nextElement().asInstanceOf[String]
result.setProperty(pName, orig.getProperty(pName))
}
result
}
def decryptPassword(encrypted: String)= ...
}
then you replace the db.<name>.driver entry in application.conf with my.com.MyDecryptingDriver. Not perfect, but it works for me...

Using BlobRequest.CopyFrom fails with 404 Not Found error

Hope you can help.
I'm trying to copy a blob using the Protocol namespace along with a shared access signature, but the WebResponse always throws a 404 Not Found error. I have successfully used the Get/Post/Delete/List methods (where the 404 would be thrown if the permissions were insufficient), but I cannot find the answer here.
Here's some simple code that I am using:
Uri uriFrom = new Uri("file://mymachine/myfile.txt");
Uri uriTo = new Uri("file://mymachine/myfile1.txt");
//get shared access signature - set all permissions for now
uriTo = GetSharedAccessSignature(uriTo, SharedAccessPermissions.Write |
SharedAccessPermissions.Read | SharedAccessPermissions.List);
//NOTE: This returns my uriTo object in the following format:
//http://mystoragespace.blob.core.windows.net/mycontainer/steve1.txt?se=2011-07-04T12:17:18Z&sr=b&sp=rwdl&sig=sxhGBkbDJpe9qn5d9AB7/d2LK1aun/2s5Bq8LAy8mis=
//get the account name
string accountName = uriTo.Host.Replace(".blob.core.windows.net", string.Empty);
//build the canonical string
StringBuilder canonicalName = new StringBuilder();
canonicalName.AppendFormat(System.Globalization.CultureInfo.InvariantCulture,
"/{0}/mycontainer{1}", accountName, uriFrom.AbsolutePath);
//NOTE: my canonical string is now "/mystoragespace/mycontainer/myfile.txt"
//get the request
var request = BlobRequest.CopyFrom(uriTo, 300, canonicalName.ToString(),
null, ConditionHeaderKind.None, null, null);
request.Proxy.Credentials = CredentialCache.DefaultNetworkCredentials;
//perform the copy operation
using (HttpWebResponse response = request.GetResponse() as HttpWebResponse)
{
//do nothing. the file has been copied
}
So, my uriTo seems to have the appropriate permissions (I've tried various combinations) and the canonical string seems to have the correct source string. I'm not using snapshot functionality. The proxy isn't a problem as I've successfully used other methods.
Hope someone can help...
Many regards,
Steve
From Creating a Shared Access Signature:
The following table details which operations are allowed on a resource for a given set of permissions.
...
Create or update the content, block list, properties, and metadata of the specified blob. Note that copying a blob is not supported.
