I need to verify a signature on multiple platforms, say Windows and Linux.
I am open to any public key format as long as I can write platform-specific C code that verifies this signature using the provided public key (the same public key on all platforms).
It looks to me like the PKCS#1 ASN.1 DER format is the most standard one, so I assume I should use it for the public key (this article provides some introduction to the possible formats).
My problem is importing this public key in Windows.
It looks like this is supported in C# (e.g. see here), but I couldn't find any Windows C/C++ function that can read a public key in PKCS#1 ASN.1 DER or PEM format and convert it to its BLOB structure; probably I didn't search well enough.
A similar Stack Overflow question seems to be talking about certificates, while I just need to deal with a bare public key.
Is there any CNG or CryptoAPI function that could read this (or another) format and convert it to either a DSA BLOB or an RSA BLOB structure? (Given the choice, I'd prefer to use CNG functions rather than the deprecated CryptoAPI ones.)
Related
The man page for PEM_write_PrivateKey states that it writes the private key using the traditional private key format.
How is this related to PKCS#8 and PKCS#1?
The documentation that you are referring to does not seem to be accurate (anymore). Nowadays, PEM_write_PrivateKey() does the same thing as PEM_write_PKCS8PrivateKey() for the OpenSSL implementations of RSA, DSA and EC keys.
As is often the case with OpenSSL, the source code is a more reliable way to get your information. Looking at PEM_write_bio_PrivateKey(), which is supposed to produce the "traditional" format according to that documentation, we see:
int PEM_write_bio_PrivateKey(BIO *bp, EVP_PKEY *x, const EVP_CIPHER *enc,
                             unsigned char *kstr, int klen,
                             pem_password_cb *cb, void *u)
{
    if (x->ameth == NULL || x->ameth->priv_encode != NULL)
        return PEM_write_bio_PKCS8PrivateKey(bp, x, enc,
                                             (char *)kstr, klen, cb, u);
    return PEM_write_bio_PrivateKey_traditional(bp, x, enc, kstr, klen, cb, u);
}
Indeed, there is a mention of a traditional private key write function, but it is only used when the priv_encode method is not implemented for that particular key type; and it actually is implemented for the standard key types. For 1.0.2g, the version you are using according to your comment below, you can see the associated functions here: rsa_priv_encode(), dsa_priv_encode() and eckey_priv_encode(). PKCS#8 is the standard format when writing private keys.
PKCS#8 is capable of capturing multiple kinds of keys. Its format includes the option to store private keys in encrypted form. This is different from the older formats, where encryption of the key happened at the PEM level, using a weaker encryption scheme. See this answer to the SO question Password callback for reading public key with OpenSSL API for a more elaborate explanation of that in the case of RSA, comparing it to PKCS#1.
The "traditional" key format in your question refers to non-PKCS#8 key formats, which are standard in the case of RSA and EC keys and OpenSSL-specific for DSA, but are not uniform. For RSA keys, that happens to be a format often referred to as the PKCS#1 format. See also this answer to the SO question PKCS#1 and PKCS#8 format for RSA private key for more information.
If you want to write in the "traditional" formats, you will have to explicitly invoke the associated functions, like for example PEM_write_RSAPrivateKey(). In this case, the documentation does seem accurate (except for the grammatical error :-) ):
The RSAPrivateKey functions process an RSA private key using an RSA
structure. The write routines uses traditional format.
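To see the difference concretely, here is a small sketch (works with the OpenSSL versions discussed above; error handling omitted) that writes the same RSA key through both paths. The generic call produces a "-----BEGIN PRIVATE KEY-----" (PKCS#8) block, while the RSA-specific call produces a "-----BEGIN RSA PRIVATE KEY-----" (traditional/PKCS#1) block:

    #include <stdio.h>
    #include <openssl/bio.h>
    #include <openssl/evp.h>
    #include <openssl/pem.h>
    #include <openssl/rsa.h>

    /* Write the same RSA key twice: once via the generic (PKCS#8) path,
       once via the RSA-specific "traditional" (PKCS#1) path. */
    static void dump_both_formats(EVP_PKEY *pkey)
    {
        BIO *out = BIO_new_fp(stdout, BIO_NOCLOSE);

        /* Generic call: dispatches to PKCS#8, "-----BEGIN PRIVATE KEY-----" */
        PEM_write_bio_PrivateKey(out, pkey, NULL, NULL, 0, NULL, NULL);

        /* Explicit traditional call: "-----BEGIN RSA PRIVATE KEY-----" */
        RSA *rsa = EVP_PKEY_get1_RSA(pkey);
        PEM_write_bio_RSAPrivateKey(out, rsa, NULL, NULL, 0, NULL, NULL);
        RSA_free(rsa);

        BIO_free(out);
    }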
I need to implement ECDH with X25519 using OpenSSL.
Using:
key = EC_KEY_new_by_curve_name(NID_X25519)
fails.
Using this:
EVP_PKEY *pkey = NULL;
EVP_PKEY_CTX *pctx = EVP_PKEY_CTX_new_id(NID_X25519, NULL);
EVP_PKEY_keygen_init(pctx);
EVP_PKEY_keygen(pctx, &pkey);
seems to work, but I have no idea how to export the public key in uncompressed binary format, or how to import the other side's public key.
Any help?
Importing the other side's public key from raw binary format can be done with the EVP_PKEY_new_raw_public_key() function. Man page here:
https://www.openssl.org/docs/man1.1.1/man3/EVP_PKEY_new_raw_public_key.html
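For example (a minimal sketch; peer_pub is assumed to hold the other side's 32-byte raw X25519 public key, and error handling is omitted):

    #include <openssl/evp.h>

    /* Wrap the peer's raw 32 bytes in an EVP_PKEY; the result can then be
       used as the peer key in EVP_PKEY_derive() for the ECDH computation. */
    EVP_PKEY *peer = EVP_PKEY_new_raw_public_key(EVP_PKEY_X25519, NULL,
                                                 peer_pub, 32);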
Exporting the public key in raw binary format is a little more tricky since there is no function to do it. You can do it in SubjectPublicKeyInfo format using i2d_PUBKEY() described here:
https://www.openssl.org/docs/man1.1.1/man3/i2d_PUBKEY.html
Fortunately, the SubjectPublicKeyInfo format has the raw public key as the last 32 bytes of its output. So you can use i2d_PUBKEY() and just take the last 32 bytes.
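A sketch of that approach (error handling kept minimal; note that the man page linked above also documents EVP_PKEY_get_raw_public_key(), which returns the raw 32 bytes directly if your OpenSSL build has it):

    #include <string.h>
    #include <openssl/crypto.h>
    #include <openssl/evp.h>
    #include <openssl/x509.h>

    /* Extract the 32-byte raw X25519 public key from pkey into out[32].
       Returns 1 on success, 0 on failure. */
    static int get_raw_pub(EVP_PKEY *pkey, unsigned char out[32])
    {
        unsigned char *spki = NULL;
        int len = i2d_PUBKEY(pkey, &spki);   /* DER SubjectPublicKeyInfo */
        if (len < 32) {
            OPENSSL_free(spki);
            return 0;
        }
        memcpy(out, spki + len - 32, 32);    /* raw key = last 32 bytes */
        OPENSSL_free(spki);
        return 1;
    }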
In a separate post, Is it necessary to add a @ in front of an SqlParameter name?, there is a discussion about prefixing the ParameterName with the "@" sign.
If you are abstracting all of your ADO.NET access behind interfaces such as IDbCommand and using IDbCommand.CreateParameter() to return instances of IDbDataParameter, is it still correct to prefix the ParameterName with "@"?
My gut feeling is no, since the "@" is required by SQL Server and the point of using interfaces is to remove the implementation details.
I'd also suggest this is perhaps why the undocumented feature of automatically checking for the prefix character exists: if you are only using ADO.NET via interfaces, you are removed from knowing exactly what kind of database you are using.
Simply as a demonstration that you need to take this little detail into account even when abstracting: if you look at Microsoft's Data Access Application Block, which for years has provided ADO.NET abstraction on top of System.Data.Common, you'll see that it addresses this very issue by including a virtual method in the abstract base class Database that is then overridden by the provider-specific derived classes.
So the base class Database.cs has this method:
/// <summary>Builds a value parameter name for the current database.</summary>
/// <param name="name">The name of the parameter.</param>
/// <returns>A correctly formatted parameter name.</returns>
public virtual string BuildParameterName(string name) { return name; }
(if the provider uses positional parameters or has no need of a prefix, there is nothing more to override)
and then the SqlClient-specific provider implementation SqlDatabase.cs overrides it like this:
/// <summary>Gets the parameter token used to delimit parameters for the SQL Server database.</summary>
protected char ParameterToken { get { return '@'; } }

public override string BuildParameterName(string name)
{
    if (name == null) throw new ArgumentNullException("name");
    if (name[0] != ParameterToken)
        return name.Insert(0, new string(ParameterToken, 1));
    return name;
}
Notice that this implementation allows the calling code to use SQL parameter names with or without the '@' prefix, thus freeing developers from having to know/remember what the API actually does to the name under the covers.
I don't use the DAAB directly, but their overall approach of abstracting behind the System.Data and System.Data.Common interfaces and classes is a great guideline for small data access APIs.
Our C# reader project needs to persist near-POCO objects to file. But we are at an early stage and changes occur quite often. Our software is already in use (with persisted data) by a few customers.
I prefer XML over anything else, for many reasons.
I have checked many, many XML serialization libraries.
Many libraries store the specific type and version; I don't need that.
Many libraries do not give us the possibility to do the serialization ourselves, i.e. through an interface for custom load/save of the data (I see many advantages in that **).
Some libraries force us to have an empty constructor.
Some libraries only handle public properties.
Some libraries have many limitations on the types they support (no Dictionary, …).
** (advantages of an interface to load/save data)
Easier to manage many versions.
Makes it possible to do hard-coded conversions if required (class X -> class Y, …).
Easier to avoid retaining old code.
I strongly think that for our needs we would be better served by the old way, a bit like deserialization in C++: something that would enable us to just add fields and field names manually instead of using attributes.
Something like:
void XmlDeserialize(XmlReader xmlReader)
{
    xmlReader.Load((n) => Version(n));   // or just: _version = xmlReader.LoadInt("Version");
    xmlReader.Load((n) => Name(n));
    xmlReader.Load((n) => EmployeeId(n));
    if (version == 2)
        …
    if (version == 3)
        …
}
The closest thing I have found to fit my needs is DataContractSerializer with its support for IExtensibleDataObject, but it is a pain in the ass to use.
I ask myself whether I'm getting this wrong everywhere. It's impossible that I'm the only one with this need (or this vision). Why is nobody writing a library for that, and did I miss something somewhere?
Where is my thinking wrong? What do you recommend?
Do you have to use the xmlReader.Load approach for this? It is WAY easier to create business objects that represent your XML data and then deserialize into them, like below (sorry, I only found my VB.NET version of this):
Public Shared Function ReadFromString(ByVal theString As String, ByVal encoding As System.Text.Encoding, ByVal prohibitDTD As Boolean) As T
    Dim theReturn As T = Nothing
    Dim s As System.Xml.Serialization.XmlSerializer
    s = New System.Xml.Serialization.XmlSerializer(GetType(T))
    Dim theBytes As Byte() = encoding.GetBytes(theString)
    Using ms As New IO.MemoryStream(theBytes)
        Using sTr As New StreamReader(ms, encoding)
            Dim sttng As New XmlReaderSettings
            'sttng.ProhibitDtd = prohibitDTD
            If Not prohibitDTD Then
                sttng.DtdProcessing = DtdProcessing.Ignore
                sttng.XmlResolver = Nothing
            Else
                sttng.DtdProcessing = DtdProcessing.Prohibit
            End If
            Using r As XmlReader = XmlReader.Create(sTr, sttng)
                theReturn = CType(s.Deserialize(r), T)
            End Using
        End Using
    End Using
    Return theReturn
End Function
You can even get rid of the XmlReaderSettings and the encoding if you like. But this way you could keep different business objects for each version you have. Additionally, if you're only adding (and not changing/deleting) objects, you can still use the most recent business object for all versions and just ignore the missing fields.
I finally decided to use XmlSerialization like this usage, but I hate being forced to create a default constructor and not being able to serialize all members (private or public).
I also decided to use ProtoContract where very high speed is necessary.
But my preferred one is DataContractSerializer: it offers an XML format (easier to debug), needs no default constructor, and can serialize any member.
I've implemented a store_mapping extension, but it currently uses ObjectAsStringMapping. As a result I can read array values from the database, but any insert or update causes an underlying PostgreSQL driver error: "INTEGER[]" is not "VARCHAR".
Is there any way to implement PostgreSQL arrays in JDO? It looks quite flexible with all those extension points. Any hints on which extension points I have to implement are appreciated, thanks in advance!
Edit:
I'm using a PostgreSQL int8 as a bit field as a "replacement" for arrays, after figuring out that I'll be okay with 63 possible values.
Sample class would be:
@PersistenceCapable(detachable="true", table="campaigns")
public class Campaign implements Serializable {

    @PrimaryKey
    @Persistent(valueStrategy = IdGeneratorStrategy.IDENTITY)
    public Long id;

    public List<Integer> regions;
}
And I think I have to implement some mapping from List to java.sql.Array, but I still haven't figured out how to do that. I could write an extension and override the default behaviour, but which extension point should it be?
Looks like you need to build a custom field strategy to handle the mapping.
The key then is to transform the in-memory representation into the PostgreSQL array literal representation: a comma-separated list of values, where any value containing special characters is wrapped in double quotes (quoting can be used on all values), and embedded double quotes are escaped by doubling them. The string is then bracketed between { and }. So ARRAY[1,2,3]::int[] becomes '{1,2,3}' or '{"1","2","3"}'.