Start a second core with PSCI on QEMU - arm

Good day,
I am currently writing my first boot loader for Linux on a Cortex-A72 processor, using QEMU.
Since I am doing some expensive computations (CRC32, SHA-256, a signature check, ...), the code takes quite some time to execute on one core, so I have decided to use more than one core to speed up the computation and get some parallelism.
From the research I have conducted, I found that I have to start the second core using the PSCI protocol, and a look at the device tree revealed the PSCI configuration. Here is the relevant information from the .dtb:
psci {
    migrate = <0xc4000005>;
    cpu_on = <0xc4000003>;
    cpu_off = <0x84000002>;
    cpu_suspend = <0xc4000001>;
    method = "hvc";
    compatible = "arm,psci-1.0", "arm,psci-0.2", "arm,psci";
};
cpu@0 {
    phandle = <0x00008004>;
    reg = <0x00000000>;
    enable-method = "psci";
    compatible = "arm,cortex-a72";
    device_type = "cpu";
};
cpu@1 {
    phandle = <0x00008003>;
    reg = <0x00000001>;
    enable-method = "psci";
    compatible = "arm,cortex-a72";
    device_type = "cpu";
};
So it seems that all the information needed to start the second core is there.
However, I am not sure how to use the hvc method to do that, since the examples I have found only explain the smc method.
Furthermore, how should I implement concurrency and parallelism?
The basic setup I would like to achieve is to have my CRC32 algorithm running on one core and my SHA-256 on the other. Would the two cores be able to access the common UART? I assume there is some way to know that a core has finished executing its program, so race conditions should not be hard to catch.
It would be of great help if anyone could provide some guidance on implementing this feature.
Thanks in advance!

The official documentation of how to use PSCI calls is in the Arm Power State Coordination Interface Platform Design Document. But the short answer is that the only difference between the HVC method and the SMC method is that for one you use the "HVC" instruction and for the other the "SMC" instruction -- everything else is identical.
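Just to make the conduit concrete, here is a minimal sketch of a CPU_ON call through the hvc conduit, written as GCC inline assembly for AArch64. The function ID 0xc4000003 comes from the device tree above; the register layout (x0 = function ID, x1 = target MPIDR, x2 = entry point, x3 = context ID) follows the SMC Calling Convention, and secondary_entry is a hypothetical symbol in your boot loader:

#include <stdint.h>

static inline int32_t psci_cpu_on(uint64_t target_mpidr,
                                  uint64_t entry_point,
                                  uint64_t context_id)
{
    register uint64_t x0 asm("x0") = 0xc4000003;    /* CPU_ON, from the dtb     */
    register uint64_t x1 asm("x1") = target_mpidr;  /* e.g. 1 for cpu@1         */
    register uint64_t x2 asm("x2") = entry_point;   /* where the core starts    */
    register uint64_t x3 asm("x3") = context_id;    /* handed to the core in x0 */

    /* The only difference from the smc method: "hvc #0" instead of "smc #0". */
    asm volatile("hvc #0"
                 : "+r"(x0)
                 : "r"(x1), "r"(x2), "r"(x3)
                 : "memory");

    return (int32_t)x0;   /* 0 = PSCI_SUCCESS, negative values are errors */
}

/* Usage: psci_cpu_on(1, (uint64_t)&secondary_entry, 0); */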


BlueNRG Bluetooth: read central device name

I'm using the STM BlueNRG-MS chip on my peripheral device, and after connection I'd like to read the name of the connected central device (an Android phone).
I thought I could do this directly in my user_notify routine, which is registered as the HCI callback:
/* Initialize the Host-Controller Interface */
hci_init(user_notify, NULL);
So on the EVT_LE_CONN_COMPLETE event, I take the handle provided for the central device and use aci_gatt_read_using_charac_uuid() to read what I thought was the characteristic holding the device name (UUID 0x2A00).
case EVT_LE_META_EVENT:
{
    evt_le_meta_event *evt = (void *)event_pckt->data;
    switch (evt->subevent) {
    case EVT_LE_CONN_COMPLETE:
    {
        evt_le_connection_complete *cc = (void *)evt->data;
        GAP_ConnectionComplete_CB(cc->peer_bdaddr, cc->handle);
        uint16_t uuid = 0x2a00;
        resp = aci_gatt_read_using_charac_uuid(cc->handle, 0, 1, UUID_TYPE_16, (uint8_t*)&uuid);
        LOG("GATT read status: %d", resp);
        enqueEvent(EVENT_BLE_CONNECTED);
    }
    break;
    }
}
Long story short, it doesn't work. The first thing I'm not sure about is what the start_handle and end_handle parameters of aci_gatt_read_using_charac_uuid() are; the call returns ERR_INVALID_HCI_CMD_PARAMS.
Can someone shed some light here?
update
What also puzzles me is that the function aci_gatt_read_using_charac_uuid() is nowhere referenced in the BlueNRG-MS Programming Guidelines.
update2
I changed the function call to aci_gatt_read_using_charac_uuid(cc->handle, 0x0001, 0xffff, UUID_TYPE_16, (uint8_t*)&uuid); but I still get ERR_INVALID_HCI_CMD_PARAMS. Which parameter could even be invalid? The UUID exists; I can read the device name if I use the BlueNRG GUI with a Bluetooth dongle.
update3
Has anyone ever used this function or somehow managed to read a characteristic from a central device? I'd highly appreciate any help or hint.
Here you go: The BlueNRG-MS Bluetooth® LE stack application command interface (ACI) - User manual,
page 75 - 4.6.25 Aci_Gatt_Read_Charac_Using_UUID(),
and make sure you have called Aci_Gatt_Init().
The user manual was last revised in July 2019; the document you link to is from 2018. I don't know if this is why.
The start_handle and end_handle are the range of handles in your service, as pictured here.
Here is a discussion of the closest thing I could find to match your question.
As it turned out, there are two bugs in the BlueNRG API.
In the bluenrg_aci_const.h file, the OCF code OCF_GATT_READ_USING_CHARAC_UUID shall be 0x119 instead of 0x109.
And in the implementation of the aci_gatt_read_using_charac_uuid() function, the following setting of the expected event is missing:
rq.event = EVT_CMD_STATUS;
Patching both fixed the issue.
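Spelled out as code, the two patches would look roughly like the sketch below. Only the 0x119 value and the rq.event assignment come from this answer; the other hci_request fields shown are assumptions about the BlueNRG-MS host library layout:

/* bluenrg_aci_const.h: the opcode shipped as 0x109, the correct value is 0x119 */
#define OCF_GATT_READ_USING_CHARAC_UUID 0x119

/* Inside aci_gatt_read_using_charac_uuid(): request the command-status event
 * before the command is submitted (this assignment was missing). */
rq.ocf   = OCF_GATT_READ_USING_CHARAC_UUID;
rq.event = EVT_CMD_STATUS;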

How can I abort a Gatling simulation if the test system is not in the right state?

The target system I am load testing has a mode that indicates whether it is suitable for running a load test against.
I want to check that mode once only at the beginning of my simulation (i.e. I don't want to do the check over and over for each user in the sim).
This is what I've come up with, but System.exit() seems pretty harsh.
I define an execution chain that checks if the mode is the value I want:
def getInfoCheckNotRealMode: ChainBuilder = exec(
  http("mode check").get("/modeUrl").
    check(jsonPath("$.mode").saveAs("mode"))
).exec { sess =>
  val mode = sess("mode").as[String]
  println(s"sengingMode $mode")
  if (mode == "REAL") {
    log.error("cannot allow simulation to run against system in REAL mode")
    System.exit(1)
  }
  sess
}
Then I run the "check" scenario in parallel with the real scenario, like this:
val sim = setUp(
  newUserScene.inject(loadProfile).
    protocols(mySvcHttp),
  scenario("Check Sending mode").exec(getInfoCheckNotRealMode).
    inject(atOnceUsers(1)).
    protocols(mySvcHttp)
)
Problems that I see with this:
Seems a bit over-complicated for simply checking that the system-under-test is suitable for testing against.
It's going to actually run the scenarios in parallel so if the check takes a while it's still going to generate load against a system that's in the wrong mode.
Need to consider and test what happens if the mode check is not behaving correctly
Is there a better way?
Is there some kind of "before simulation begins" phase where I can put this check?

Move a graph trained on the GPU to be tested on the CPU

So I have this CNN which I train on the GPU. During training, I regularly save checkpoints.
Later on, I want to have a small script that reads the .meta file and the checkpoint and does some tests on a CPU. I use the following code:
sess = tf.Session(config=tf.ConfigProto(log_device_placement=True))
with sess.as_default():
    with tf.device('/cpu:0'):
        saver = tf.train.import_meta_graph('{}.meta'.format(model))
        saver.restore(sess, model)
I keep getting an error which tells me that the saver is trying to place the operations on the GPU.
How can I change that?
Move all the ops to the CPU using the _set_device API. https://github.com/tensorflow/tensorflow/blob/r1.14/tensorflow/python/framework/ops.py#L2255
with tf.Session() as sess:
    g = tf.get_default_graph()
    ops = g.get_operations()
    for op in ops:
        op._set_device('/device:CPU:*')
Hacky workaround: open your graph definition file (the one ending in .pbtxt) and remove all lines starting with device:.
For a programmatic approach, you can see how the TensorFlow exporter does this with clear_devices, although that uses the regular Saver, not the meta-graph exporter.

Implementation of time in Zynq

I'm trying to write a simple STANDALONE application for Zynq. I want to use 'time.h' to manipulate date/time. I know that there is no hardware implementation in a standalone BSP, but I want to wire it up on my own.
During compilation, when I call 'time(NULL)' I get an error that there is no implementation of '_gettimeofday()'. I found its declaration and implemented it according to the function definition, so that the errors disappear and everything looks OK, but when I run my project on hardware, I see only zeroes from time().
Can anybody help?
Regards,
G2
OK, I've done some research and found this link. This is almost what I've been searching for, but instead of '_times()' I needed '_gettimeofday()', and this is my implementation:
int _gettimeofday(struct timeval *__p, void *__tz)
{
    __p->tv_sec = (systemUsCounter / 1000000);
    __p->tv_usec = systemUsCounter % 1000000;  /* microseconds within the current second */
    return 0;
}
I left the '__tz' pointer unchanged.
So this is basically how to utilize 'time.h' in a standalone application on Zynq.
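To make that concrete, here is a minimal usage sketch under the assumption that systemUsCounter is a 64-bit counter advanced by a periodic timer interrupt; the timer_isr name and the 1000 us period are made up for the example, only _gettimeofday() and systemUsCounter come from the snippet above:

#include <stdint.h>
#include <stdio.h>
#include <time.h>

volatile uint64_t systemUsCounter = 0;     /* microseconds since boot */

/* Hypothetical timer ISR, registered elsewhere to fire every 1000 us. */
void timer_isr(void *ref)
{
    (void)ref;
    systemUsCounter += 1000;
}

int main(void)
{
    /* ... configure the timer and register timer_isr here ... */
    time_t now = time(NULL);               /* time() ends up calling _gettimeofday() */
    printf("seconds since boot: %lu\r\n", (unsigned long)now);
    return 0;
}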

Microsoft Crypto API Disable Use of RSAES-OAEP Key Transport Algorithm

I'm using CryptEncryptMessage to generate a PKCS#7 enveloped message. I'm using szOID_NIST_AES256_CBC as the encryption algorithm.
The generated message appears to be valid but uses RSAES-OAEP as the key transport algorithm, which has limited support in the wild (Thunderbird, the OpenSSL S/MIME module, and many others don't support it).
I'd like CAPI to fall back to the older rsaEncryption key transport.
Is there any way to do that? I could switch to the low-level messaging functions instead of CryptEncryptMessage if there is a way, but I can't find one even with the low-level functions.
Code:
CRYPT_ENCRYPT_MESSAGE_PARA EncryptMessageParams;
EncryptMessageParams.cbSize = sizeof(CMSG_ENVELOPED_ENCODE_INFO);
EncryptMessageParams.dwMsgEncodingType = PKCS_7_ASN_ENCODING;
EncryptMessageParams.ContentEncryptionAlgorithm.pszObjId = szOID_NIST_AES256_CBC;
EncryptMessageParams.ContentEncryptionAlgorithm.Parameters.cbData = 0;
EncryptMessageParams.ContentEncryptionAlgorithm.Parameters.pbData = 0;
EncryptMessageParams.hCryptProv = NULL;
EncryptMessageParams.pvEncryptionAuxInfo = NULL;
EncryptMessageParams.dwFlags = 0;
EncryptMessageParams.dwInnerContentType = 0;
BYTE pbEncryptedBlob[640000];
DWORD pcbEncryptedBlob = 640000;
BOOL retval = CryptEncryptMessage(&EncryptMessageParams, cRecipientCert, pRecipCertContextArray, pbMsgText, dwMsgTextSize, pbEncryptedBlob, &pcbEncryptedBlob);
The key transport algorithm is a bit tricky to handle, and it may not serve its purpose (I see you noted that you'd like CAPI to use rsaEncryption; trust me, I would too). It looks like you've already identified the bulk of your problem: the generated message appears to be valid, but your method makes it necessary to use CryptEncryptMessage, which won't work well (or at all) in the long run.
Step 1 - Examine the Code
CRYPT_ENCRYPT_MESSAGE_PARA EncryptMessageParams;
EncryptMessageParams.cbSize = sizeof(CMSG_ENVELOPED_ENCODE_INFO);
EncryptMessageParams.dwMsgEncodingType = PKCS_7_ASN_ENCODING;
EncryptMessageParams.ContentEncryptionAlgorithm.pszObjId = szOID_NIST_AES256_CBC;
EncryptMessageParams.ContentEncryptionAlgorithm.Parameters.cbData = 0;
EncryptMessageParams.ContentEncryptionAlgorithm.Parameters.pbData = 0;
EncryptMessageParams.hCryptProv = NULL;
EncryptMessageParams.pvEncryptionAuxInfo = NULL;
EncryptMessageParams.dwFlags = 0;
EncryptMessageParams.dwInnerContentType = 0;
BYTE pbEncryptedBlob[640000];
DWORD pcbEncryptedBlob = 640000;
BOOL retval = CryptEncryptMessage(&EncryptMessageParams, cRecipientCert, pRecipCertContextArray, pbMsgText, dwMsgTextSize, pbEncryptedBlob, &pcbEncryptedBlob);
Pretty basic, isn't it? Although efficient, it's not really getting the job done. If you look at this:
EncryptMessageParams.dwFlags = 0;
EncryptMessageParams.dwInnerContentType = 0;
you will see that it is pre-defined but used only in the definition of retval. I could see this as a micro-optimization, and it is not really useful if we're going to rewrite the code. I've outlined the basic steps below to integrate this without a total redo of the code (so you can keep using the same parameters):
Step 2 - Editing the Parameters
As @owlstead mentioned in his comments, the Crypto API is not very user-friendly. However, you've done a great job with limited resources. What you'll want to add is a cryptographic provider enumeration to help narrow down the keys. Make sure you have either Microsoft Base Cryptographic Provider version 1.0 or Microsoft Enhanced Cryptographic Provider version 1.0 to use these efficiently. Otherwise, you'll need to add the function like so:
DWORD cbName;
DWORD dwType;
DWORD dwIndex;
CHAR *pszName = NULL;
(regular crypt calls here)
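As an illustration of what those declarations are typically used for (this is my sketch, not part of the original answer), a provider enumeration loop built on CryptEnumProviders could look like this:

/* Requires <windows.h>, <wincrypt.h>, <stdio.h>, <stdlib.h>. */
DWORD cbName, dwType, dwIndex = 0;
CHAR *pszName = NULL;

/* Walk the installed CSPs so you can pick the Base or Enhanced provider. */
while (CryptEnumProviders(dwIndex, NULL, 0, &dwType, NULL, &cbName))
{
    pszName = (CHAR *)malloc(cbName);
    if (pszName && CryptEnumProviders(dwIndex, NULL, 0, &dwType, pszName, &cbName))
        printf("Provider %lu (type %lu): %s\n", dwIndex, dwType, pszName);
    free(pszName);
    pszName = NULL;
    dwIndex++;
}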
This is mainly used to prevent the NTE_BAD_FLAGS error, although technically you could avoid it with a lower-level declaration. If you wanted, you could also create a whole new hash (although this is only necessary if the above approach doesn't scale well enough in terms of time/speed):
DWORD dwBufferLen = strlen((char *)pbBuffer)+1*(0+5);
HCRYPTHASH hHash;
HCRYPTKEY hKey;
HCRYPTKEY hPubKey;
BYTE *pbKeyBlob;
BYTE *pbSignature;
DWORD dwSigLen;
DWORD dwBlobLen;
(use hash as normal w/ crypt calls and the pbKeyBlobs/Signatures)
Make sure to validate this snippet before moving on. You can do so easily, like so:
if (CryptAcquireContext(&hProv, NULL, NULL, PROV_RSA_FULL, 0)) {
    printf("CSP context acquired.\n");
}
If you're documenting or releasing, you might want to add a void MyHandleError(char *s) to catch the error, so that someone who edits the code and breaks it can spot the problem quickly.
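A minimal version of that helper, patterned after the one in Microsoft's CryptoAPI samples (the exact wording is mine), could be:

/* Print where things went wrong, report GetLastError(), and bail out. */
void MyHandleError(char *s)
{
    fprintf(stderr, "An error occurred in the program.\n");
    fprintf(stderr, "%s\n", s);
    fprintf(stderr, "Error number %x.\n", (unsigned int)GetLastError());
    exit(1);
}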
By the way, the first time you run it you'll have to create a new key set, because there is no default one. A nice one-liner that can be popped into an if is below:
CryptAcquireContext(&hCryptProv, NULL, NULL, PROV_RSA_FULL, CRYPT_NEWKEYSET)
Remember that syncing server resources will not be as efficient as doing the rework I suggested in the first step. This is what I will explain below:
Step 3 - Recode and Relaunch
As a programmer, recoding might seem like a waste of time, but it can definitely help you out in the long run. Remember that you'll still have to code in the custom parameters when encoding/syncing; I'm not going to hand you all the code, but the basic outline should be sufficient.
I'm assuming that you're trying to get a handle to the current user's key container within a particular CSP; otherwise, I don't really see the use of this. If not, you can make some basic edits to suit your needs.
Remember, we're going to bypass CryptEncryptMessage by using CryptReleaseContext, which directly releases the handle acquired by the CryptAcquireContext function. Microsoft's standard prototype for CryptAcquireContext is below:
BOOL WINAPI CryptAcquireContext(
    _Out_ HCRYPTPROV *phProv,
    _In_  LPCTSTR    pszContainer,
    _In_  LPCTSTR    pszProvider,
    _In_  DWORD      dwProvType,
    _In_  DWORD      dwFlags
);
Note that Microsoft scolds you if you're using a user interface:
If the CSP must display the UI to operate, the call fails and the NTE_SILENT_CONTEXT error code is set as the last error. In addition, if calls are made to CryptGenKey with the CRYPT_USER_PROTECTED flag with a context that has been acquired with the CRYPT_SILENT flag, the calls fail and the CSP sets NTE_SILENT_CONTEXT.
This is mainly server code, and ERROR_BUSY will definitely be shown to new users when there are multiple connections, especially high-latency ones. Anything above 300 ms will just cause NTE_BAD_KEYSET_PARAM or similar to be raised because of the timeout, without even a proper error being received. (Transmission problems, anyone with me?)
Unless you're concerned about multiple DLLs (which this doesn't support, due to NTE_PROVIDER_DLL_FAIL errors), the basic setup to grab crypto services client-side would be as below (copied directly from Microsoft's examples):
if (GetLastError() == NTE_BAD_KEYSET)
{
    if (CryptAcquireContext(
            &hCryptProv,
            UserName,
            NULL,
            PROV_RSA_FULL,
            CRYPT_NEWKEYSET))
    {
        printf("A new key container has been created.\n");
    }
    else
    {
        printf("Could not create a new key container.\n");
        exit(1);
    }
}
else
{
    printf("A cryptographic service handle could not be "
           "acquired.\n");
    exit(1);
}
However simple this may seem, you definitely don't want to get stuck passing this on to the key exchange algorithm (or whatever else you have handling it). Unless you're using symmetric session keys (Diffie-Hellman/KEA), the exchange key pair can be used to encrypt session keys so that they can be safely stored and exchanged with other users.
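For example, wrapping a session key under the user's exchange key pair looks roughly like the sketch below; error handling is omitted, and the RC4 algorithm choice is only an example for a PROV_RSA_FULL provider:

HCRYPTKEY hSessionKey = 0, hExchangeKey = 0;
BYTE *pbSessionBlob = NULL;
DWORD cbSessionBlob = 0;

/* Generate an exportable session key (RC4 only as an example). */
CryptGenKey(hCryptProv, CALG_RC4, CRYPT_EXPORTABLE, &hSessionKey);

/* Get the user's key-exchange key pair from the container. */
CryptGetUserKey(hCryptProv, AT_KEYEXCHANGE, &hExchangeKey);

/* First call gets the blob size, second exports the session key
 * encrypted under the exchange public key (a SIMPLEBLOB). */
CryptExportKey(hSessionKey, hExchangeKey, SIMPLEBLOB, 0, NULL, &cbSessionBlob);
pbSessionBlob = (BYTE *)malloc(cbSessionBlob);
CryptExportKey(hSessionKey, hExchangeKey, SIMPLEBLOB, 0, pbSessionBlob, &cbSessionBlob);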
Someone named John Howard has written a nice Hyper-V Remote Management Configuration Utility (HVRemote), which is a large compilation of the techniques discussed here. In addition to using the basic crypto calls and key pairs, it can be used to permit ANONYMOUS LOGON remote DCOM access (cscript hvremote.wsf, to be specific). You can see many of the functions and techniques in his latest work (you'll have to narrow the query) on his blog:
http://blogs.technet.com/b/jhoward/
If you need any more help with the basics, just leave a comment or request a private chat.
Conclusion
Although it's pretty simple once you understand the basic server-side hashing methods and how the client obtains the encrypted material, you'll be questioning why you even tried encrypting during transmission. However, without the client-side crypto, encryption would definitely be the only secure way to transmit what was already hashed.
Although you might argue that the packets could be decrypted and hashed off the salts, consider that both incoming and outgoing traffic would have to be processed and stored in exactly the timing and order necessary to re-hash client-side.
