Microsoft Crypto API Disable Use of RSAES-OAEP Key Transport Algorithm - c

I'm using CryptEncryptMessage to generate a PKCS#7 enveloped message. I'm using szOID_NIST_AES256_CBC as the encryption algorithm.
The generated message appears to be valid, but it uses RSAES-OAEP as the key transport algorithm, which has limited support in the wild (Thunderbird and the OpenSSL SMIME module, among many others, don't support it).
I'd like CAPI to fall back to the older rsaEncryption for key transport.
Is there any way to do that? I'd be willing to drop down to the low-level messaging functions instead of CryptEncryptMessage, but I can't find a way to do it even with those.
Code:
CRYPT_ENCRYPT_MESSAGE_PARA EncryptMessageParams;
EncryptMessageParams.cbSize = sizeof(CMSG_ENVELOPED_ENCODE_INFO);
EncryptMessageParams.dwMsgEncodingType = PKCS_7_ASN_ENCODING;
EncryptMessageParams.ContentEncryptionAlgorithm.pszObjId = szOID_NIST_AES256_CBC;
EncryptMessageParams.ContentEncryptionAlgorithm.Parameters.cbData = 0;
EncryptMessageParams.ContentEncryptionAlgorithm.Parameters.pbData = 0;
EncryptMessageParams.hCryptProv = NULL;
EncryptMessageParams.pvEncryptionAuxInfo = NULL;
EncryptMessageParams.dwFlags = 0;
EncryptMessageParams.dwInnerContentType = 0;
BYTE pbEncryptedBlob[640000];
DWORD pcbEncryptedBlob = 640000;
BOOL retval = CryptEncryptMessage(&EncryptMessageParams, cRecipientCert, pRecipCertContextArray, pbMsgText, dwMsgTextSize, pbEncryptedBlob, &pcbEncryptedBlob);
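For reference, the low-level direction I've been experimenting with is roughly the sketch below: encoding the enveloped message with CryptMsgOpenToEncode and a CMS recipient whose KeyEncryptionAlgorithm is set to szOID_RSA_RSA (plain rsaEncryption). Whether CAPI actually honors that OID here is exactly what I can't confirm. Error handling is omitted, CMSG_ENVELOPED_ENCODE_INFO_HAS_CMS_FIELDS has to be defined before including wincrypt.h so the rgCmsRecipients field is visible, and pRecipCert stands for one recipient's PCCERT_CONTEXT:
// Per-recipient key transport info: try to force plain rsaEncryption
CMSG_KEY_TRANS_RECIPIENT_ENCODE_INFO KeyTrans = {0};
KeyTrans.cbSize = sizeof(KeyTrans);
KeyTrans.KeyEncryptionAlgorithm.pszObjId = szOID_RSA_RSA;   // 1.2.840.113549.1.1.1
KeyTrans.RecipientPublicKey = pRecipCert->pCertInfo->SubjectPublicKeyInfo.PublicKey;
KeyTrans.RecipientId.dwIdChoice = CERT_ID_ISSUER_SERIAL_NUMBER;
KeyTrans.RecipientId.IssuerSerialNumber.Issuer = pRecipCert->pCertInfo->Issuer;
KeyTrans.RecipientId.IssuerSerialNumber.SerialNumber = pRecipCert->pCertInfo->SerialNumber;

CMSG_RECIPIENT_ENCODE_INFO Recipient = {0};
Recipient.dwRecipientChoice = CMSG_KEY_TRANS_RECIPIENT;
Recipient.pKeyTrans = &KeyTrans;

CMSG_ENVELOPED_ENCODE_INFO EnvelopedInfo = {0};
EnvelopedInfo.cbSize = sizeof(EnvelopedInfo);
EnvelopedInfo.ContentEncryptionAlgorithm.pszObjId = szOID_NIST_AES256_CBC;
EnvelopedInfo.cRecipients = 1;
EnvelopedInfo.rgCmsRecipients = &Recipient;   // rgpRecipients stays NULL

HCRYPTMSG hMsg = CryptMsgOpenToEncode(PKCS_7_ASN_ENCODING | X509_ASN_ENCODING, 0,
                                      CMSG_ENVELOPED, &EnvelopedInfo, NULL, NULL);
CryptMsgUpdate(hMsg, pbMsgText, dwMsgTextSize, TRUE);
CryptMsgGetParam(hMsg, CMSG_CONTENT_PARAM, 0, pbEncryptedBlob, &pcbEncryptedBlob);
CryptMsgClose(hMsg);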

The key transport algorithm is a bit tricky to handle, and it may not serve its purpose (I see you noted that you'd like CAPI to use rsaEncryption; trust me, I would too). It looks like you've already identified the bulk of your problem: the generated message appears to be valid, but your approach ties you to CryptEncryptMessage, which won't work well (or at all) in the long run.
Step 1 - Examine the Code
CRYPT_ENCRYPT_MESSAGE_PARA EncryptMessageParams;
EncryptMessageParams.cbSize = sizeof(CMSG_ENVELOPED_ENCODE_INFO);
EncryptMessageParams.dwMsgEncodingType = PKCS_7_ASN_ENCODING;
EncryptMessageParams.ContentEncryptionAlgorithm.pszObjId = szOID_NIST_AES256_CBC;
EncryptMessageParams.ContentEncryptionAlgorithm.Parameters.cbData = 0;
EncryptMessageParams.ContentEncryptionAlgorithm.Parameters.pbData = 0;
EncryptMessageParams.hCryptProv = NULL;
EncryptMessageParams.pvEncryptionAuxInfo = NULL;
EncryptMessageParams.dwFlags = 0;
EncryptMessageParams.dwInnerContentType = 0;
BYTE pbEncryptedBlob[640000];
DWORD pcbEncryptedBlob = 640000;
BOOL retval = CryptEncryptMessage(&EncryptMessageParams, cRecipientCert, pRecipCertContextArray, pbMsgText, dwMsgTextSize, pbEncryptedBlob, &pcbEncryptedBlob);
Pretty basic, isn't it? Although efficient, it doesn't really get the job done. If you look at this:
EncryptMessageParams.dwFlags = 0;
EncryptMessageParams.dwInnerContentType = 0;
you will see that it is pre-defined but used only in the definition of retval. I could see this as a micro-optimization, but it isn't really useful if we're going to rewrite the code. I've outlined the basic steps below to integrate this without a total redo of the code (so you can keep on using the same parameters):
Step 2 - Editing the Parameters
As @owlstead mentioned in his comments, the Crypto API is not very user-friendly. However, you've done a great job with limited resources. What you'll want to add is cryptographic provider enumeration, to help narrow down the keys. Make sure you have either the Microsoft Base Cryptographic Provider v1.0 or the Microsoft Enhanced Cryptographic Provider v1.0 to use these efficiently. Otherwise, you'll need to add in the function like so:
DWORD cbName;
DWORD dwType;
DWORD dwIndex;
CHAR *pszName = NULL;
(regular crypt calls here)
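A sketch of how those variables are typically used, following the standard CryptEnumProviders enumeration pattern from the SDK samples (ANSI variant to match the CHAR declarations above; error handling trimmed):
dwIndex = 0;
while (CryptEnumProvidersA(dwIndex, NULL, 0, &dwType, NULL, &cbName))
{
    // First call gets the name length, second call fills the buffer.
    pszName = (CHAR *)LocalAlloc(LMEM_ZEROINIT, cbName);
    if (pszName == NULL)
        break;
    if (CryptEnumProvidersA(dwIndex++, NULL, 0, &dwType, pszName, &cbName))
        printf("Provider %s of type %u\n", pszName, (unsigned)dwType);
    LocalFree(pszName);
}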
This is mainly used to prevent the NTE_BAD_FLAGS error, although technically you could avoid this with a more low-level declaration. If you wanted, you could also create a whole new hash (although this is only necessary if the above implementation won't scale to the time/speed you need):
DWORD dwBufferLen = strlen((char *)pbBuffer) + 1; // include the terminating NUL
HCRYPTHASH hHash;
HCRYPTKEY hKey;
HCRYPTKEY hPubKey;
BYTE *pbKeyBlob;
BYTE *pbSignature;
DWORD dwSigLen;
DWORD dwBlobLen;
(use hash as normal w/ crypt calls and the pbKeyBlobs/Signatures)
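For completeness, the usual shape of those calls is sketched below (this assumes hProv was already acquired with CryptAcquireContext and that a signature key pair exists in the container; error handling omitted):
if (CryptCreateHash(hProv, CALG_SHA1, 0, 0, &hHash) &&
    CryptHashData(hHash, pbBuffer, dwBufferLen, 0))
{
    // Size the signature buffer first, then sign.
    CryptSignHash(hHash, AT_SIGNATURE, NULL, 0, NULL, &dwSigLen);
    pbSignature = (BYTE *)malloc(dwSigLen);
    CryptSignHash(hHash, AT_SIGNATURE, NULL, 0, pbSignature, &dwSigLen);

    // Export the public half of the signature key pair for the verifier.
    CryptGetUserKey(hProv, AT_SIGNATURE, &hPubKey);
    CryptExportKey(hPubKey, 0, PUBLICKEYBLOB, 0, NULL, &dwBlobLen);
    pbKeyBlob = (BYTE *)malloc(dwBlobLen);
    CryptExportKey(hPubKey, 0, PUBLICKEYBLOB, 0, pbKeyBlob, &dwBlobLen);

    CryptDestroyHash(hHash);
}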
Make sure to validate this snippet before moving on. You can do so easily, like so:
if(CryptAcquireContext(&hProv, NULL, NULL, PROV_RSA_FULL, 0)) {
printf("CSP context acquired.\n");
}
If you're documenting or releasing this, you might want to add a void MyHandleError(char *s) to report errors, so that anyone who edits the code and breaks something can catch it quickly.
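The helper used throughout Microsoft's CryptoAPI samples looks roughly like this (reproduced from memory, so treat it as a sketch):
void MyHandleError(char *s)
{
    fprintf(stderr, "An error occurred in running the program.\n");
    fprintf(stderr, "%s\n", s);
    fprintf(stderr, "Error number %x.\n", (unsigned)GetLastError());
    fprintf(stderr, "Program terminating.\n");
    exit(1);
}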
By the way, the first time you run it you'll have to create a new key set, because there's no default one. A nice one-liner that can be popped into an if is below:
CryptAcquireContext(&hCryptProv, NULL, NULL, PROV_RSA_FULL, CRYPT_NEWKEYSET)
Remember that syncing server resources will not be as efficient as the rework I suggested in the first step, which is what I'll explain below:
Step 3 - Recode and Relaunch
As a programmer, recoding might seem like a waste of time, but it can definitely help you out in the long run. Remember that you'll still have to code the custom parameters yourself when encoding/syncing; I'm not going to spoon-feed you all the code. The basic outline should be quite sufficient.
I'm assuming that you're trying to work with the current user's key container within a particular CSP; otherwise, I don't really see the use of this. If not, you can make some basic edits to suit your needs.
Remember, we're going to bypass CryptEncryptMessage by working with CryptAcquireContext directly, releasing the handle with CryptReleaseContext when we're done. Microsoft's signature for CryptAcquireContext is below:
BOOL WINAPI CryptAcquireContext(
_Out_ HCRYPTPROV *phProv,
_In_ LPCTSTR pszContainer,
_In_ LPCTSTR pszProvider,
_In_ DWORD dwProvType,
_In_ DWORD dwFlags
);
Note that Microsoft scolds you if you rely on a user interface:
If the CSP must display the UI to operate, the call fails and the NTE_SILENT_CONTEXT error code is set as the last error. In addition, if calls are made to CryptGenKey with the CRYPT_USER_PROTECTED flag with a context that has been acquired with the CRYPT_SILENT flag, the calls fail and the CSP sets NTE_SILENT_CONTEXT.
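In other words, unattended/server code normally acquires the context with CRYPT_SILENT so the CSP can never pop up UI. A minimal sketch (the flag combination here is an assumption; adjust it to whether you actually need a private key container):
if (!CryptAcquireContext(&hCryptProv, NULL, NULL, PROV_RSA_FULL,
                         CRYPT_SILENT | CRYPT_MACHINE_KEYSET))
{
    printf("CryptAcquireContext failed silently: 0x%08x\n", (unsigned)GetLastError());
}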
This is mainly server code, and ERROR_BUSY will definitely show up for new users when there are multiple connections, especially high-latency ones. Anything above 300 ms will just cause an NTE_BAD_KEYSET_PARAM or similar due to the timeout, without a proper error ever being received. (Transmission problems, anyone with me?)
Unless you're concerned about multiple DLLs (which this doesn't support, due to NTE_PROVIDER_DLL_FAIL errors), the basic setup to grab crypto services client-side would be as below (copied directly from Microsoft's examples):
if (GetLastError() == NTE_BAD_KEYSET)
{
if(CryptAcquireContext(
&hCryptProv,
UserName,
NULL,
PROV_RSA_FULL,
CRYPT_NEWKEYSET))
{
printf("A new key container has been created.\n");
}
else
{
printf("Could not create a new key container.\n");
exit(1);
}
}
else
{
printf("A cryptographic service handle could not be "
"acquired.\n");
exit(1);
}
However simple this may seem, you definitely don't want to get stuck passing this straight on to the key exchange algorithm (or whatever else you have handling this). Unless you're using key agreement (Diffie-Hellman/KEA) for your session keys, the exchange key pair can be used to encrypt session keys so that they can be safely stored and exchanged with other users.
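In CryptoAPI terms, "using the exchange key pair to protect a session key" usually boils down to something like the sketch below (it assumes hCryptProv comes from an AES-capable provider such as PROV_RSA_AES; error handling omitted):
HCRYPTKEY hExchangeKey = 0, hSessionKey = 0;
BYTE *pbWrappedKey = NULL;
DWORD cbWrappedKey = 0;

CryptGetUserKey(hCryptProv, AT_KEYEXCHANGE, &hExchangeKey);              // RSA exchange pair
CryptGenKey(hCryptProv, CALG_AES_256, CRYPT_EXPORTABLE, &hSessionKey);   // fresh session key

// Export the session key encrypted under the exchange public key (SIMPLEBLOB).
CryptExportKey(hSessionKey, hExchangeKey, SIMPLEBLOB, 0, NULL, &cbWrappedKey);
pbWrappedKey = (BYTE *)malloc(cbWrappedKey);
CryptExportKey(hSessionKey, hExchangeKey, SIMPLEBLOB, 0, pbWrappedKey, &cbWrappedKey);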
John Howard has written a nice Hyper-V Remote Management Configuration Utility (HVRemote), which is a large compilation of the techniques discussed here. In addition to the basic crypto calls and key pairs, it can be used to permit ANONYMOUS LOGON remote DCOM access (cscript hvremote.wsf, to be specific). You can see many of the functions and techniques in his latest scripts (you'll have to narrow the query) on his blog:
http://blogs.technet.com/b/jhoward/
If you need any more help with the basics, just leave a comment or request a private chat.
Conclusion
Although it's pretty simple once you understand the basic server-side methods for hashing and how the client grabs the "crypts", you'll be questioning why you even tried encrypting during transmission. However, without the crypting client-side, encryption would definitely be the only secure way to transmit what was already hashed.
Although you might argue that the packets could be decrypted and re-hashed off the salts, consider that both incoming and outgoing traffic would have to be processed and stored with exactly the timing and ordering needed to re-hash client-side.

Related

How to detect unreleased lock in multi-task C project using static analysis tools?

Is there any way, using static analysis tools (I'm using CodeSonar now), to detect unreleased-lock problems (something like unreleased semaphores) in the following program? (See the comment block marked by arrows.)
The project is a multi-task system using round-robin scheduling, where new_request() is an interrupt task that arrives randomly and send_buffer() is a periodic task.
In the real code, get_buffer() and send_buffer() are various kinds of wrappers that contain many call layers before the actual lock/unlock happens, so I can't simply specify get_buffer() as the lock function in the static analysis tool's settings.
int bufferSize = 0; // say max size is 5
// random task
void new_request()
{
int bufferNo = get_buffer(); // wrapper
if (bufferNo == -1)
{
return; // buffer is full
}
if (check_something() == OK)
{
add_to_sendlist(bufferNo); // for asynchronous process of send_buffer()
}
else // bad request
{
// ↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓
// There should be a clear_buffer() call placed here,
// but it was forgotten. Eventually the buffer will be
// full and won't be cleared once the 5th bad request comes.
// ↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑
do_nothing();
// clear_buffer(bufferNo);
}
}
int get_buffer()
{
if(bufferSize < 5)
{
bufferSize++;
return bufferSize;
}
else
{
wait_until_empty(); // wait until someone is sent by send_buffer()
return -1;
}
}
// clear the specified entry in the buffer
void clear_buffer(int bufferNo)
{
delete(bufferNo);
bufferSize--;
}
// period task
void send_buffer()
{
int sent = send_1st_stuff_in_list();
clear_buffer(sent);
}
yoyozi - Fair disclosure: I'm an engineer at GrammaTech who works on CodeSonar.
First some general things. The relevant parts of the manual for this are on the page codesonar/doc/html/C_Module/LibraryModels/ConcurrencyModelsLocks.html, especially the bottom of the page on Resolving Lock Operation Identification Problems.
Based on your comments, I think you have already read this, since you address setting the names in the configuration settings.
So then the question is how many different wrappers do you have? If it is only a few, then the settings in the configuration file are the way to go. If there are many, that gets tedious. And if there are very many it becomes practically impossible.
So knowing some estimate for how many wrapper sets you have would help.
Even with the wrappers accounted for, it may be that the deadlock and race detectors aren't quite what you need for your problem.
If I understand your issue correctly, you have a queue with limited space, and by accident malformed items don't get cleaned out of the queue, and so the queue gets full and that stalls all processing. While you may have multiple threads involved in this implementation, the issue itself would still be a problem in a basically serial setting.
The best way to work with an issue like this is to try and make a simpler example that displays the same core problem. If you can do this in a way that can be shared with GrammaTech, we can work with you on ways to adjust settings or maybe provide hints to the analysis so it can find this issue.
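For illustration, a reduced example of that core problem (all names hypothetical) can be as small as the following; this is the kind of self-contained reproducer that is easy to share:
#include <stdio.h>

static int used = 0;                      /* mirrors bufferSize, max 5 */

static int  acquire(void)     { return (used < 5) ? ++used : -1; }
static void release(int id)   { (void)id; used--; }
static int  looks_ok(int id)  { return id % 2; }   /* stand-in for check_something() */

static void handle_request(int id)
{
    int buf = acquire();
    if (buf == -1)
        return;                            /* pool exhausted */
    if (looks_ok(id))
        release(buf);                      /* pretend send_buffer() sent and cleared it */
    else
    {
        /* BUG: release(buf) is missing on the bad-request path,
         * just like the forgotten clear_buffer() in the real code. */
    }
}

int main(void)
{
    for (int i = 0; i < 20; i++)
        handle_request(i);
    printf("buffers still held: %d\n", used);   /* climbs to 5 and stays there */
    return 0;
}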
If you would like to talk about this in more detail, and with protection against public disclosure of your code, please contact us at support_at_grammatech_dot_com, where the at and dot should be replaced as needed to make a well-formed email address.

How to perform a catalog update correctly in FDT 1.x?

I have heard rumours that performing a catalog update correctly in FDT 1.x is quite complex. There seems to be more to it than the obvious steps, which in pseudo code are:
foreach (progid in Registry having component category "FDT DTM")
{
dtm = CoCreateInstance(progid);
StartDTMAccordingStateMachine(dtm);
info = dtm.GetInformation("FDT");
catalog.Add(info);
ShutdownDTMAccordingStateMachine(dtm);
Release(dtm);
}
I could not find any hints in the FDT specification that would require a more complex catalog update procedure, so are the rumours true? What makes a correct catalog update procedure so complex?
Basically your idea for the catalog update is correct. Unfortunately the rumours are also true: doing a catalog update properly involves several more considerations, namely:
Frame application interface considerations
During the catalog update, the DTM is not part of a project yet. Therefore the frame application could be implemented without project specific interfaces such as IFdtTopology or IFdtBulkData. However, many DTMs will query for those interfaces immediately and throw an exception if the frame application does not support those interfaces.
Also, during the catalog update, the frame application could expect that the DTM works without user interface, because this is a batch operation which should not require user interaction. This means the frame application could be implemented without the IFdtActiveX and IFdtDialog interfaces. Unfortunately there are also DTMs that use those interfaces during catalog update time.
.NET considerations
Doing a catalog update on a system with many DTMs installed could require a lot of memory. Therefore some frame applications do the catalog update in an external process. While this is a good idea, you need to consider the FDT .NET specifications and best practice documents.
The base line here is: the external process must be a .NET 2.0 process, independent of the actual implementation technology of your frame application. If you have a C++ implementation, you'll need a very small .NET 2.0 object being loaded before any DTM is started.
Memory considerations
Since FDT 1.x is a conglomerate of COM and .NET, there will be pinned objects. This makes it likely that your application suffers from small object heap fragmentation. In addition FDT passes XMLs as strings which makes it more likely that your application suffers from large object heap fragmentation. The overall combination is very dangerous.
One solution might be to start a limited number of DTMs in the same process and then restart the process, e.g. like
updateProcess = StartProcess();
dtmCount = 0;
foreach (progid in Registry having component category "FDT DTM")
{
dtmCount++;
if (dtmCount % 10 == 0)
{
// restart process to avoid out of memory situation
updateProcess.SignalShutdown();
updateProcess.WaitForExit();
updateProcess = StartProcess();
}
updateProcess.StartDTM(progid);
info = updateProcess.GetDtmInformation();
catalog.Add(info);
updateProcess.ShutdownDTM();
}
In the update process you'll need to create the COM object and follow the state machine etc.
FDT 1.2.1 scanning information
In FDT 1.2.1, additional information was introduced to better recognize devices during a hardware scan. Although there is no fully FDT 1.2.1 compliant DTM at the time of writing, many FDT 1.2.0 DTMs implement the additional interface IDtmInformation2 to support device detection.
For you as the frame application developer, this means that you have to extend the GetDtmInformation() method in the update process:
T GetDtmInformation()
{
var result = new T(); // a type defined by you
result.info = dtm.GetInformation();
foreach (deviceType in result.info)
{
foreach (protocol in deviceType)
{
deviceInfo = dtm.GetDeviceIdentificationInformation(deviceType, protocol);
result.deviceinfo.Add(deviceInfo);
}
}
}
Schema path updates
FDT 1.2.0 had the problem that the user needed to install XDR schema definitions manually, which was very inconvenient. FDT 1.2.1 solves this problem by letting the DTM bring its XDR schemas with it. The definition is in the XML from GetInformation(), at the XML elements <FDT>, <DtmInfo>, <DtmSchemaPaths>. The DTM publishes a directory name there. In theory, this is an easy task: to install the XDR schemas, we need to update GetDtmInformation() a little bit:
T GetDtmInformation()
{
var result = new T(); // a type defined by you
result.info = dtm.GetInformation();
schemaPaths = result.info.SelectNodes("/FDT/DtmInfo/DtmSchemaPaths/DtmSchemaPath");
foreach (dtmSchemaPath in schemaPaths)
{
CopyFiles(from dtmSchemaPath to frameSchemaPath);
}
// *) read on, more code needed here
foreach (deviceType in result.info)
{
foreach (protocol in deviceType)
{
deviceInfo = dtm.GetDeviceIdentificationInformation(deviceType, protocol);
result.deviceinfo.Add(deviceInfo);
}
}
}
Unfortunately there is a logical bug in the sequence now. Since the DTM was already started, it has already asked the frame application for the schema path (using IFdtContainer::GetXmlSchemaPath()) and it has already set up the schema cache to validate XMLs. The DTM cannot be notified about updates in the schema path.
Therefore you need to restart the DTM in order to be sure that it gets the latest version of XDR schemas. In code, this means you have to update the whole code to:
T GetDtmInformation()
{
var result = new T(); // a type defined by you
result.info = dtm.GetInformation();
schemaPaths = result.info.SelectNodes("/FDT/DtmInfo/DtmSchemaPaths/DtmSchemaPath");
schemasUpdated = false;
foreach (dtmSchemaPath in schemaPaths)
{
schemasUpdated |= CopyFiles(from dtmSchemaPath to frameSchemaPath);
}
if (schemasUpdated)
{
// restart the DTM to make sure it uses latest versions of the schemas
dtm = CoCreateInstance(progid);
StartDTMAccordingStateMachine(dtm);
info = dtm.GetInformation("FDT");
}
foreach (deviceType in result.info)
{
foreach (protocol in deviceType)
{
deviceInfo = dtm.GetDeviceIdentificationInformation(deviceType, protocol);
result.deviceinfo.Add(deviceInfo);
}
}
}
XDR schema version information issue
In the chapter before, I used a simple CopyFiles() operation to update the XDR schema files. This method is not as simple as it seems, because it needs to perform a version number check.
The version is given in the XDR schema like this:
<AttributeType name="schemaVersion" dt:type="number" default="1.0"/>
The attribute @default defines the version number of the schema. @schemaVersion itself is not used anywhere else.
Version numbers that are used at the time of writing:
1.0 // e.g. FDTCIPCommunicationSchema CIP version 1.1-02
1.1 // e.g. FDTCIPChannelParameterSchema CIP version 1.1-02
1.00 // e.g. DTMIOLinkDeviceSchema IO Link version 1.0-1
1.21 // e.g. FDTIOLinkChannelParameterSchema IO Link version 1.0-1
1.22 // e.g. FDTHART_ExtendedCommunicationSchema
Version 1.21 strongly suggests that it correlates to FDT version 1.2.1, which brings up the question of how to interpret the version number. There are three possible ways of interpreting it:
a) as a simple float number as defined in the datatype of XDR (dt:type="number")
b) as a version number in format major.minor
c) as a version number in format major.minorbuild where minor and build are simply concatenated
Ok, I'll leave that puzzle up to the reader. I have suggested a document clarifying this version number issue.
Anyway, this is our CopyFiles() method:
bool CopyFiles(sourceDir, destinationDir)
{
filesCopied = false;
foreach(filename in sourceDir)
{
existingVersion = ExtractVersion(destinationDir + filename);
newVersion = ExtractVersion(sourceDir + filename);
if (newVersion > existingVersion)
{
File.Copy(sourceDir + filename, destinationDir + filename);
filesCopied = true;
}
}
return filesCopied;
}
XDR schema update impact on other DTMs
In the last chapter we return a flag from CopyFiles() in order to determine whether or not the DTM needs to be restarted in GetDtmInformation(). However, this update may not only affect the current DTM, it may also affect other DTMs of the same protocol which have been added to the catalog before.
While you can simply restart the whole catalog update from scratch, this would imply a huge performance impact. The better way seems to do it selectively.
To apply a selective approach, you need to maintain a list of protocols that were updated (in T GetDtmInformation()):
foreach (dtmSchemaPath in schemaPaths)
{
schemasUpdated = CopyFiles(from dtmSchemaPath to frameSchemaPath);
if (schemasUpdated)
{
listOfChangedProtocols.Add(ExtractProtocolId(destinationDir));
}
}
And of course, don't forget to re-update the catalog for affected DTMs:
affectedDtms = catalog.GetDtmsForProtocols(listOfChangedProtocols);
// TODO: perform catalog update again
// NOTE: remember that this might apply recursively
Getting protocol groups right
Next, you need to know the concept of protocol groups. A protocol group shares XDR schema files across different protocols, where each protocol is identified by a protocol ID. A good example is the CIP protocol family, which consists of the single protocols DeviceNet, CompoNet and Ethernet/IP.
These protocols share a common set of XDR schema files, so you'll find the same file three times on your hard disk. This duplication also has some impact on the catalog update since you need to update all copies even if the DTM comes for a single protocol only.
The reason is in the way a schema cache is constructed: when adding XDR schemas to the schema cache, the first file will win. Other files with the same name will not be added any more. Therefore it is important to ensure that the first file added to the cache is the one with the highest version number. This can only be achieved by updating all copies to the latest version.
This results in an update of the CopyFiles() method:
List<protocolID> CopyFiles(sourceDir, destinationDir)
{
protocolsChanged = new List<protocolID>();
foreach(filename in sourceDir)
{
foreach (subdirectory in destinationDir)
{
files = GetFiles(subdirectory, pattern = filename);
if (files.Count == 1)
{
UpdateXDRConsideringVersionNumber(sourceDir, subdirectory, filename);
protocolsChanged.Add(ExtractProtocolId(subdirectory));
}
}
}
return protocolsChanged;
}
void UpdateXDRConsideringVersionNumber(sourceDir, destinationDir, filename)
{
existingVersion = ExtractVersion(destinationDir + filename);
newVersion = ExtractVersion(sourceDir + filename);
if (newVersion > existingVersion)
{
File.Copy(sourceDir + filename, destinationDir + filename);
}
}
Device DTMs and schema paths
For whatever reason, it is defined that only communication DTMs and gateway DTMs need to bring XDR schemas with them. The rationale behind that probably was that you cannot use a device DTM without a communication or gateway DTM.
Unfortunately, when querying the Windows Registry for DTMs, you cannot predict the order in which you get DTMs. This may lead to the case that you get a device DTM first. Starting this DTM and getting information from it may result in errors or invalid XML if there is no XDR schema for the protocol of the DTM yet.
So you need to continue the catalog update, hopefully find a communication DTM or gateway DTM of the same protocol which brings the XDR schemas. Then you start the device DTM again and it will deliver useful information.
This does not need an update to any code. It should already work if you followed all the steps described before. The only thing to consider here is good error handling (but I'll not do that in pseudo code here).
Conclusion
Hopefully I have covered all the topics that are important to know in conjunction with the FDT 1.x catalog update. As you can see, this is not just a rumour.

Connections with Entity Framework and Transient Fault Handling Block?

We're migrating SQL to Azure. Our DAL is Entity Framework 4.x based. We're wanting to use the Transient Fault Handling Block to add retry logic for SQL Azure.
Overall, we're looking for the best 80/20 trade-off (or maybe more of a 95/5, but you get the point): we're not looking to spend weeks refactoring/rewriting code (there's a LOT of it). I'm fine re-implementing our DAL's framework, but not reworking all of the code written and generated against it any more than we have to, since this effort exists only to address a minority case. For us, mitigation of this edge case beats elimination.
Looking at the possible options explained here at MSDN, it seems Case #3 there is the "quickest" to implement, but only at first glance. Upon pondering this solution a bit, it struck me that we might have problems with connection management, since it circumvents Entity Framework's built-in processes for managing connections (i.e. always closing them). It seems to me that the "solution" is to make sure 100% of the Contexts we instantiate use using blocks, but with our architecture, this would be difficult.
So my question: Going with Case #3 from that link, are hanging connections a problem or is there some magic somewhere that's going on that I don't know about?
I've done some experimenting and it turns out that this brings us back to the old "managing connections" situation we're used to from the past, only this time the connections are abstracted away from us a bit and we must now "manage Contexts" similarly.
Let's say we have the following OnContextCreated implementation:
private void OnContextCreated()
{
const int maxRetries = 4;
const int initialDelayInMilliseconds = 100;
const int maxDelayInMilliseconds = 5000;
const int deltaBackoffInMilliseconds = initialDelayInMilliseconds;
var policy = new RetryPolicy<SqlAzureTransientErrorDetectionStrategy>(maxRetries,
TimeSpan.FromMilliseconds(initialDelayInMilliseconds),
TimeSpan.FromMilliseconds(maxDelayInMilliseconds),
TimeSpan.FromMilliseconds(deltaBackoffInMilliseconds));
policy.ExecuteAction(() =>
{
try
{
Connection.Open();
var storeConnection = (SqlConnection) ((EntityConnection) Connection).StoreConnection;
new SqlCommand("declare #i int", storeConnection).ExecuteNonQuery();
//Connection.Close();
// throw new ApplicationException("Test only");
}
catch (Exception e)
{
Connection.Close();
Trace.TraceWarning("Attempted to open connection but failed: " + e.Message);
throw;
}
}
);
}
In this scenario, we forcibly open the Connection (which was the goal here). Because of this, the Context keeps it open across many calls. Because of that, we must tell the Context when to close the connection. Our primary mechanism for doing that is calling the Dispose method on the Context. So if we just allow garbage collection to clean up our contexts, then we allow connections to remain hanging open.
I tested this by toggling the comments on the Connection.Close() in the try block and running a bunch of unit tests against our database. Without calling Close, we jumped up to ~275-300 active connections (from SQL Server's perspective). By calling Close, that number hovered at ~12. I then repeated this with a small number of unit tests, both with and without a using block for the Context, and saw the same result (different numbers; I forget what they were).
I was using the following query to count my connections:
SELECT s.session_id, s.login_name, e.connection_id,
s.last_request_end_time, s.cpu_time,
e.connect_time
FROM sys.dm_exec_sessions AS s
INNER JOIN sys.dm_exec_connections AS e
ON s.session_id = e.session_id
WHERE login_name='myuser'
ORDER BY s.login_name
Conclusion: If you call Connection.Open() with this work-around to enable the Transient Fault Handling Block, then you MUST use using blocks for all contexts you work with; otherwise you will have problems (which, with SQL Azure, will cause your database to be "throttled" and ultimately taken offline for hours!).
The problem with this approach is that it only takes care of connection retries and not command retries.
If you use Entity Framework 6 (currently in alpha) then there is some new in-built support for transient retries with Azure SQL Database (with a little bit of configuration): http://entityframework.codeplex.com/wikipage?title=Connection%20Resiliency%20Spec
I've created a library which allows you to configure Entity Framework to retry using the Fault Handling block without needing to change every database call - generally you will only need to change your config file and possibly one or two lines of code.
This allows you to use it for Entity Framework or Linq To Sql.
https://github.com/robdmoore/ReliableDbProvider

Failure to create elevation COM object on Windows Seven

I am developing a COM surrogate object in C; it will be used by my applications to bring up the UAC elevation dialog for certain actions that require administrative rights.
The plan is to make it export a function that takes a pointer to a function with a variable number of arguments and executes it in a different context. This way, an application can use this object to perform some actions with admin rights; all it needs to do is pass the object a pointer to the function that has to be executed with those rights.
This works partially: calling CoCreateInstance goes fine, the function pointer is passed, and my function is executed.
However, when I create an instance of this object using the COM elevation moniker and Microsoft's sample code for CoCreateInstanceAsAdmin, problems occur.
Here is the code:
HRESULT CoCreateInstanceAsAdmin(HWND hwnd, REFCLSID rclsid, REFIID riid, __out void ** ppv)
{
// Manual implementation of CreateInstanceAsAdmin
CComPtr<IBindCtx> BindCtx;
HRESULT hr = CreateBindCtx(0,&BindCtx);
BIND_OPTS3 bo;
memset(&bo, 0, sizeof(bo));
bo.cbStruct = sizeof(bo);
bo.grfMode = STGM_READWRITE;
bo.hwnd = hwnd;
bo.dwClassContext = CLSCTX_LOCAL_SERVER;
hr = BindCtx->SetBindOptions(&bo);
if (SUCCEEDED(hr))
{
// Use the passed in CLSID to help create the COM elevation moniker string
CComPtr<IMoniker> Moniker;
WCHAR wszCLSID[50];
WCHAR wszMonikerName[300];
StringFromGUID2(rclsid,wszCLSID,sizeof(wszCLSID) / sizeof(wszCLSID[0]));
//Elevation:Administrator!new
hr = StringCchPrintfW(wszMonikerName, sizeof(wszMonikerName)/sizeof(wszMonikerName[0]), L"Elevation:Administrator!new:%s", wszCLSID);
if (SUCCEEDED(hr))
{
// Create the COM elevation moniker
ULONG ulEaten = 0;
ULONG ulLen = (ULONG)wcslen(wszMonikerName);
LPBC pBindCtx = BindCtx.p;
hr = MkParseDisplayName(pBindCtx,wszMonikerName,&ulEaten,&Moniker);
if (SUCCEEDED(hr) && ulEaten == ulLen)
{
// Use passed in reference to IID to bind to the object
IDispatch * pv = NULL;
hr = Moniker->BindToObject(pBindCtx,NULL,riid,ppv);
}
}
}
return hr;
}
Calling CoCreateInstanceAsAdmin fails with "Class not registered".
The object is registered by creating the following registry keys (here's the body of the REG file)
[HKEY_CLASSES_ROOT\COMsurrogate]
@="COMsurrogate Class"
[HKEY_CLASSES_ROOT\COMsurrogate\CurVer]
@="COMsurrogate.1"
[HKEY_CLASSES_ROOT\COMsurrogate\CLSID]
@="{686B6F70-06AE-4dfd-8C26-4564684D9F9F}"
[HKEY_CLASSES_ROOT\CLSID\{686B6F70-06AE-4dfd-8C26-4564684D9F9F}]
@="COMsurrogate Class"
"LocalizedString"="@C:\\Windows\\system32\\COMsurrogate.dll,-101"
"DllSurrogate"=""
[HKEY_CLASSES_ROOT\CLSID\{686B6F70-06AE-4dfd-8C26-4564684D9F9F}\ProgID]
@="COMsurrogate.1"
[HKEY_CLASSES_ROOT\CLSID\{686B6F70-06AE-4dfd-8C26-4564684D9F9F}\VersionIndependentProgID]
@="COMsurrogate"
[HKEY_CLASSES_ROOT\CLSID\{686B6F70-06AE-4dfd-8C26-4564684D9F9F}\InprocServer32]
@="C:\\Windows\\system32\\COMsurrogate.dll"
"ThreadingModel"="Apartment"
[HKEY_CLASSES_ROOT\CLSID\{686B6F70-06AE-4dfd-8C26-4564684D9F9F}\NotInsertable]
[HKEY_CLASSES_ROOT\CLSID\{686B6F70-06AE-4dfd-8C26-4564684D9F9F}\Programmable]
I suppose that some registry entries are missing - that's the conclusion I reach when reading the error message. However, this list of registry keys was compiled after exploring the documentation on MSDN and other sites - so I am pretty certain that nothing was missed.
Among the things I've tried to solve this is implementing it via ATL (so that registration is automated). That works, but the problem is that I can't pass a function pointer to the MIDL-generated function prototype.
I tried to pass it using the VARIANT type:
v.vt = VT_PTR;
void (*myptr)(void);
myptr = &DoTheStuff;
v.byref = myptr;
hr = theElevated->CoTaskExecuter(0, v);
As a result I get "Invalid argument type".
Could someone shed some light on the subject? Perhaps what I am trying to achieve is not possible by design?
I believe the issues you are having are by design, and that the intent of Windows' security improvements is to help avoid potential security risks.
Microsoft doesn't really want you to elevate your privileges if it can stop you from doing so. Executing arbitrary functions as a privileged user shouldn't be easy in any way if Windows is even a decently secured system. You could try impersonating a different user using tokens and getting better access that way, but even then it would be a stretch. If I remember right, user impersonation won't even guarantee that you'll get full access. The best solution in this case is just to use the super-user account and properly request the correct privileges.

Sharing a DNSServiceRef using kDNSServiceFlagsShareConnection stalls my program

I'm building a client using the dns-sd API from Bonjour. I notice that there is a flag called kDNSServiceFlagsShareConnection that is used to share the connection of one DNSServiceRef.
Apple's site says:
For efficiency, clients that perform many concurrent operations may want to use a single Unix Domain Socket connection with the background daemon, instead of having a separate connection for each independent operation. To use this mode, clients first call DNSServiceCreateConnection(&MainRef) to initialize the main DNSServiceRef. For each subsequent operation that is to share that same connection, the client copies the MainRef, and then passes the address of that copy, setting the ShareConnection flag to tell the library that this DNSServiceRef is not a typical uninitialized DNSServiceRef; it's a copy of an existing DNSServiceRef whose connection information should be reused.
There is even an example that shows how to use the flag. The problem I'm having is that when I run the program, it just sits there waiting for something whenever I call a function with the flag. Here is the code:
DNSServiceErrorType error;
DNSServiceRef MainRef, BrowseRef;
error = DNSServiceCreateConnection(&MainRef);
BrowseRef = MainRef;
//I'm omitting when I check for errors
error = DNSServiceBrowse(&MainRef, kDNSServiceFlagsShareConnection, 0, "_http._tcp", "local", browse_reply, NULL);
// After this call the program stays waiting for I don't know what
//I'm omitting when I check for errors
error = DNSServiceBrowse(&BrowseRef, kDNSServiceFlagsShareConnection, 0, "_http._tcp", "local", browse_reply, NULL);
//I'm omitting when i check for errors
DNSServiceRefDeallocate(BrowseRef); // Terminate the browse operation
DNSServiceRefDeallocate(MainRef); // Terminate the shared connection
Any ideas? Thoughts? Suggestions?
Since there are conflicting answers, I dug up the source - annotations by me.
// If sharing...
if (flags & kDNSServiceFlagsShareConnection)
{
// There must be something to share (can't use this on the first call)
if (!*ref)
{
return kDNSServiceErr_BadParam;
}
// Ref must look valid (specifically, ref->fd)
if (!DNSServiceRefValid(*ref) ||
// Most operations cannot be shared.
((*ref)->op != connection_request &&
(*ref)->op != connection_delegate_request) ||
// When sharing, pass the ref from the original call.
(*ref)->primary)
{
return kDNSServiceErr_BadReference;
}
The primary field is explained elsewhere:
// When using kDNSServiceFlagsShareConnection, there is one primary _DNSServiceOp_t, and zero or more subordinates
// For the primary, the 'next' field points to the first subordinate, and its 'next' field points to the next, and so on.
// For the primary, the 'primary' field is NULL; for subordinates the 'primary' field points back to the associated primary
The problem with the question's code is that DNSServiceBrowse maps to ref->op == browse_request, which causes a kDNSServiceErr_BadReference.
It looks like kDNSServiceFlagsShareConnection is half-implemented, because I've also seen cases in which it works - this source was found by tracing back when it didn't work.
Service references for browsing and resolving may unfortunately not be shared. See the comments in the Bonjour documentation for the kDNSServiceFlagsShareConnection flag. Since you only browse twice, I would just let them have separate service refs instead.
So both DNSServiceBrowse() and DNSServiceResolve() require an unallocated service ref as the first parameter.
I can't explain why your program chokes though. The first DNSServiceBrowse() call in your example should return immediately with an error code.
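For what it's worth, a minimal sketch of that separate-refs approach (untested; it assumes the same browse_reply callback as in the question and omits error handling) might look like this:
#include <dns_sd.h>
#include <sys/select.h>

static void browse_without_sharing(void)
{
    DNSServiceRef httpRef = NULL, wsRef = NULL;

    // Each operation gets its own, initially-NULL DNSServiceRef (no share flag).
    DNSServiceBrowse(&httpRef, 0, 0, "_http._tcp", "local", browse_reply, NULL);
    DNSServiceBrowse(&wsRef, 0, 0, "_workstation._tcp", "local", browse_reply, NULL);

    for (;;)
    {
        int fd1 = DNSServiceRefSockFD(httpRef);
        int fd2 = DNSServiceRefSockFD(wsRef);
        int maxfd = fd1 > fd2 ? fd1 : fd2;
        fd_set readfds;

        FD_ZERO(&readfds);
        FD_SET(fd1, &readfds);
        FD_SET(fd2, &readfds);
        if (select(maxfd + 1, &readfds, NULL, NULL, NULL) <= 0)
            break;
        if (FD_ISSET(fd1, &readfds)) DNSServiceProcessResult(httpRef);  // runs browse_reply
        if (FD_ISSET(fd2, &readfds)) DNSServiceProcessResult(wsRef);
    }

    DNSServiceRefDeallocate(httpRef);
    DNSServiceRefDeallocate(wsRef);
}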
Although this is an old question, it should help people looking around for answers now. The answer by vidtige is incorrect: the ref may be shared for any operation, provided you pass the kDNSServiceFlagsShareConnection flag along with the arguments. Sample below:
m_dnsrefsearch = m_dnsservice;
DNSServiceErrorType mdnserr = DNSServiceBrowse(&m_dnsrefsearch,kDNSServiceFlagsShareConnection,0,
"_workstation._tcp",NULL,
DNSServiceBrowseReplyCallback,NULL);
Reference - http://osxr.org/android/source/external/mdnsresponder/mDNSShared/dns_sd.h#0267
