Quick question: How can I verify the functionality of a custom openSSL engine I'm writing from the command line?
Right now I am following along with this great tutorial, and am successfully able to exercise the engine (which returns a digest value of all 2's) using my test program (source code here, test program located at test/wssha256engine_test.c).
brett@wintermute:~/openssl_ws/wssha256engine$ make test
make[1]: Entering directory '/home/brett/openssl_ws/wssha256engine/test'
make[1]: '../bin/test' is up to date.
make[1]: Leaving directory '/home/brett/openssl_ws/wssha256engine/test'
brett@wintermute:~/openssl_ws/wssha256engine$ bin/test
Engine Loaded...
Initializing wssha256 engine...
*TEST: Successfuly initialized engine [A test engine for the ws sha256 hardware encryption module, on the Xilinx ZYNQ7000]
init result = 1
Digest NID=672 requested
SHA256 algorithm context initialized
*TEST: Digest INIT 1
SHA256 update
*TEST: Digest Update 1
SHA256 final: sizeof(EVP_MD)= 120
SHA256 cleanup
*TEST: Digest Final 1 Digest size:32
22222222222222222222222222222222
However, because reasons, I would like to also use the openssl command line interface to exercise the engine I just wrote, and have it compute the digest of a random string like in this other tutorial, just using sha256 and not md5.
But when I try to do this, the engine does not load: I get an error telling me that the NID of my digest doesn't exist, and the string is hashed with the standard algorithm instead:
brett@wintermute:~/openssl_ws/wssha256engine$ echo "Hello" | openssl dgst -engine /home/brett/Thesis/openssl_ws/wssha256engine/bin/libwssha256engine.so -sha256
ERROR: Digest is empty! (NID = 0)
engine "wssha256" set.
(stdin)= 66a045b452102c59d840ec097d59d9467e13a3f34f6494e539ffd32c1bb35f18
Why can't I use my engine from the command line, even though a C program can load and use it? And how can I use my engine to compute a digest from the command line?
Note: I am able to load the engine from the command line, just not use it.
After some digging, I was able to figure out why the CLI invocation of my engine did not work, and fix the issue.
The problem was in my digest selector function (the original broken code is commented out below). I was returning a failure status from the digest selector function when it was called without a digest, rather than returning the number of digest NIDs that the engine supports.
static int wssha256engine_digest_selector(ENGINE *e, const EVP_MD **digest, const int **nids, int nid)
{
  if (!digest)
  {
    *nids = wssha256_digest_ids;
    // THIS ORIGINAL CODE CAUSES THE ENGINE TO NEVER INITIALIZE FROM A CLI INVOCATION
    //printf("ERROR: Digest is empty! (NID = %d)\n",nid);
    //return FAIL;
    // THIS FIXES IT: return the number of supported NIDs
    // (the NID array is zero-terminated, so don't count the terminator)
    int retnids = (sizeof(wssha256_digest_ids) / sizeof(wssha256_digest_ids[0])) - 1;
    printf("wssha256engine: digest nids requested...returning [%d]\n",retnids);
    return retnids;
  }
  /* ... handling for the non-NULL digest case follows; see the sketch below ... */
}
Reading the documentation more closely, we find that OpenSSL calls the digest selector function in the following (separate) ways:
- with digest being NULL. In this case, *nids is expected to be assigned a zero-terminated array of NIDs, and the call returns the number of available NIDs. OpenSSL uses this to determine which digests are supported by this engine.
- with digest being non-NULL. In this case, *digest is expected to be assigned the pointer to the EVP_MD structure corresponding to the NID given by nid. The call returns 1 if the requested NID is supported by this engine, otherwise 0.
So even though my engine supported sha256, the CLI invocation didn't work: openssl first asks the engine which digests it supports (call 1 above), and my selector returned a failure status for exactly that call, so the digest was never registered. In my test program I initialized the digest by hand, which is why everything worked there. Returning the number of supported NIDs for call 1 fixed the problem.
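For completeness, here is a minimal sketch of a selector handling both call styles. The names wssha256_digest_ids and wssha256_md are stand-ins for the engine's NID array and its EVP_MD instance; this illustrates the protocol, it is not the exact code from the repository.

#include <openssl/engine.h>
#include <openssl/evp.h>

static const int wssha256_digest_ids[] = { NID_sha256, 0 }; /* zero-terminated */
static const EVP_MD *wssha256_md; /* hypothetical: built elsewhere in the engine */

static int digest_selector(ENGINE *e, const EVP_MD **digest,
                           const int **nids, int nid)
{
  if (!digest) {
    /* Call style 1: report the supported NIDs and how many there are */
    *nids = wssha256_digest_ids;
    return (sizeof(wssha256_digest_ids) / sizeof(wssha256_digest_ids[0])) - 1;
  }
  /* Call style 2: hand back the EVP_MD for the requested NID */
  if (nid == NID_sha256) {
    *digest = wssha256_md;
    return 1;
  }
  *digest = NULL;
  return 0;
}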
gitolite info didn't work; adding keys turned them into no-access keys and did NOT create corresponding entries in the authorized_keys file.
To fix this, run gitolite setup on the gitolite server.
Question: what could have landed me in that mess?
And what does gitolite setup do when invoked for the n-th time? It's no longer setting things up; according to the docs it fixes hooks, but I wonder what the use cases are, and which was mine.
More details on gitolite info
The gitolite info command is invoked like so:
> ssh git-user@ser-git
PTY allocation request failed on channel 0
hello git-admin, this is ...@... running gitolite3 3.6.7-2 (Debian) on git 2.17.1
R W some-repository
R W gitolite-admin
R W testing
Connection to ser-git closed.
Bad output is: FATAL: unknown git/gitolite command: 'info'
More details: keys without access.
gitolite sshkeys-lint was showing keys with (no access); now (meaning after gitolite setup) those keys have the access I set for them.
ssh-keygen -lf /home/repo/.ssh/authorized_keys | wc -l (with or without the piped part): the number of keys and their names indicated that the newest one had not been added.
Similar question that did not work for me: keydir entries not propagating to authorized_keys
The docs pretty much had the answer once I dug deeper, which is fairly nice of @sitaramc.
Without options, 'gitolite setup' is a general "fix up everything" command
(for example, if you brought in repos from outside, or someone messed
around with the hooks, or you made an rc file change that affects access
rules, etc.)
Symptoms: keys stopped propagating, and the error FATAL: unknown git/gitolite command: 'info' on ssh git-user@ser-git. The fix was to run gitolite setup. So, on to the first question, the title one:
what does gitolite setup fix?
gitolite setup is implemented here
My Perl is rather weak, but there's a setup function at line 56. It calls args (which parses options; here there was nothing to parse). Then, unless h_only (the hooks-only argument for setup) was given, and it wasn't here, it sets up the rc file and the admin repo, runs gitolite compile and the POST_COMPILE trigger, and finally fixes the hooks.
sub setup {
my ( $admin, $pubkey, $h_only, $message ) = args();
unless ($h_only) {
setup_glrc();
setup_gladmin( $admin, $pubkey, $message );
_system("gitolite compile");
_system("gitolite trigger POST_COMPILE");
}
hook_repos(); # all of them, just to be sure
}
The package Gitolite::Conf::Store has hook_repos() at line 228: we change to the repo base dir (as per the config file), and for each phy_repo we do hook_1(phy_repo). What is a phy_repo? A physical one.
Same package, different function and line: hook_1($repo), at line 354.
Method hook_1($repo)
It's quite literally about fixing all the hooks.
Recreates dirs for common and admin hooks.
Rewrites update_hook (common) and post_update_hook (admin).
Sets 755 permissions for both common and admin hooks.
Then using ln_sf it symlinks the folders for common/admin hooks.
ln_sf is in the common module, at line 162.
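So in practice, the recovery amounts to one command on the server plus a check from a client. A minimal sketch, using the host and user names from this setup (run the first command as the gitolite hosting user on the server):

gitolite setup
# then, from a client machine, verify that 'info' works again:
ssh git-user@ser-git info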
I have created a custom PAM module to login to Linux using my custom authentication method.
After my module is done authenticating it receives the actual username and password (plaintext) of the Linux user account from a db.
Now I am trying to set the username and password using:
pam_set_item(pamh, PAM_USER, user) and pam_set_item(pamh, PAM_AUTHTOK, passwd), after doing pam_start("m_pamconf", user, &conv, &pamh).
The "m_pamconf" is my pam configuration file which contains:
auth pam_unix.so nullok_secure use_first_pass
The username is set successfully this way, but the password doesn't seem to work, as I am still getting a password prompt (which of course should not happen if it takes the password that I have supplied).
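For reference, the sequence of calls described above as a minimal sketch (error handling omitted; my_conv is a hypothetical name for the conversation callback, and user/passwd come from the db lookup):

#include <security/pam_appl.h>

pam_handle_t *pamh = NULL;
struct pam_conv conv = { my_conv, NULL };

pam_start("m_pamconf", user, &conv, &pamh);
pam_set_item(pamh, PAM_USER, user);      /* this one takes effect */
pam_set_item(pamh, PAM_AUTHTOK, passwd); /* this one seemingly does not */
pam_authenticate(pamh, 0);               /* still prompts for a password */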
Any help is appreciated. Thanks.
I'm getting the password prompt when I use the try_first_pass flag; if I instead use use_first_pass, the module simply fails and gives the following error in the debug logs:
auth could not identify password for [username]
This is my /etc/pam.d/sudo:
#%PAM-1.0
# my pam test
auth requisite pam_test.so
session required pam_env.so readenv=1 user_readenv=0
session required pam_env.so readenv=1 envfile=/etc/default/locale user_readenv=0
@include common-auth
@include common-account
@include common-session-noninteractive
The first line uses the module that I have created, from which I am trying to authenticate the user by doing pam_start("mypamd", user, &conv, &pamh) and then pam_authenticate(pamh, 0), using the auth method of pam_unix.so as specified above in "m_pamconf".
You may need to set the module to sufficient. Having it set to requisite means that it is required to succeed, and it will fail the stack immediately if it returns a failure; it does not mean that it will make the stack succeed. According to the docs (http://www.linux-pam.org/Linux-PAM-html/sag-configuration-file.html):
For the simple (historical) syntax valid control values are:
required
failure of such a PAM will ultimately lead to the PAM-API returning failure but only after the remaining stacked modules (for
this service and type) have been invoked.
requisite
like required, however, in the case that such a module returns a failure, control is directly returned to the application or to the
superior PAM stack. The return value is that associated with the first
required or requisite module to fail. Note, this flag can be used to
protect against the possibility of a user getting the opportunity to
enter a password over an unsafe medium. It is conceivable that such
behavior might inform an attacker of valid accounts on a system. This
possibility should be weighed against the not insignificant concerns
of exposing a sensitive password in a hostile environment.
sufficient
if such a module succeeds and no prior required module has failed the PAM framework returns success to the application or to the
superior PAM stack immediately without calling any further modules in
the stack. A failure of a sufficient module is ignored and processing
of the PAM module stack continues unaffected.
optional
the success or failure of this module is only important if it is the only module in the stack associated with this service+type.
include
include all lines of given type from the configuration file specified as an argument to this control.
substack
include all lines of given type from the configuration file specified as an argument to this control. This differs from include in
that evaluation of the done and die actions in a substack does not
cause skipping the rest of the complete module stack, but only of the
substack. Jumps in a substack also can not make evaluation jump out of
it, and the whole substack is counted as one module when the jump is
done in a parent stack. The reset action will reset the state of a
module stack to the state it was in as of beginning of the substack
evaluation.
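Concretely, a minimal sketch of what the auth stack could look like with the custom module marked sufficient (module names are taken from the question; the exact pam_unix options are an assumption):

auth sufficient pam_test.so
auth required pam_unix.so nullok_secure use_first_pass

With sufficient, a success from pam_test.so ends the auth phase immediately; if it fails, the failure is ignored and pam_unix.so gets its turn.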
I'm running into a very peculiar and undocumented issue with a GAE Managed VM and Task Queues. I understand that the Managed VM service is in beta, so this question may not be relevant forever, but it's definitely causing me lots of headache now.
The main symptom of the issue is that, in certain (not completely known to me) circumstances, I'm seeing the following error/traceback:
File "/home/vmagent/my_app/some_file.py", line 265, in some_ndb_tasklet
res = yield some_task.add_async('some-task-queue-name')
File "/home/vmagent/python_vm_runtime/google/appengine/ext/ndb/tasklets.py", line 472, in _on_rpc_completion
result = rpc.get_result()
File "/home/vmagent/python_vm_runtime/google/appengine/api/apiproxy_stub_map.py", line 613, in get_result
return self.__get_result_hook(self)
File "/home/vmagent/python_vm_runtime/google/appengine/api/taskqueue/taskqueue.py", line 1948, in ResultHook
rpc.check_success()
File "/home/vmagent/python_vm_runtime/google/appengine/api/apiproxy_stub_map.py", line 579, in check_success
self.__rpc.CheckSuccess()
File "/home/vmagent/python_vm_runtime/google/appengine/ext/vmruntime/vmstub.py", line 312, in _WaitImpl
raise self._ErrorException(*_DEFAULT_EXCEPTION)
RPCFailedError: The remote RPC to the application server failed for call taskqueue.BulkAdd().
I've gone through my local App Engine SDK to trace this through, and I can get up to the last line of the trace, but google/appengine/ext/vmruntime/ doesn't exist on my machine at all, so I have no idea what's happening in vmstub.py. From looking at the local code, some_task.add_async('the-queue') is spinning up an RPC and waiting for it to finish, but this error is not what the except apiproxy_errors.ApplicationError, e: at line 1949 of taskqueue.py is expecting...
The code that's generating the error looks something like this:
@ndb.tasklet
def kickoff_tasks(batch_of_payloads):
    for task_payload in batch_of_payloads:
        # task_payload is a dict
        task = taskqueue.Task(
            url='/the/handler/url',
            params=task_payload)
        res = yield task.add_async('some-valid-task-queue-name')
Other things worth noting:
this code itself is running in a task handler kicked off by another task.
I first saw this error before implementing this sort of batching, and assumed the issue was because I had added too many tasks from within a task handler.
In some cases, I can run this successfully with a batch size of 100, but in others, it fails consistently (depending on the data in the payloads) at 100, and sometimes succeeds at batch sizes of 50.
The task payloads themselves include batches of items, and are tuned to be just small enough to fit in a task. App Engine advertises a maximum task size of 100KB, so I'm keeping the payloads to under 90,000 bytes right now. Lowering the size even more doesn't seem to help any.
I've also tried implementing an exponential backoff to retry the kickoff_tasks method when this error appears, but it seems that once the error is raised, I can't add any other tasks at all from within the same handler (i.e. I can't kick off a "continue from where you left off" task; I just have to let this one fail and restart itself).
So, my question is, what is actually causing this error? How can I avoid it, or fix this so that I'm handling it correctly?
This is a known issue that is being worked on. There are actually two issues - the RPC failure itself and the lack of handling of the RPCFailedError exception by the SDK.
There is some public discussion of the issue here.
If you're using App Engine Flexible and the python-compat-multicore image, a new bug popped up related to App Engine using a newer version of the requests library that broke the communication between App Engine Flexible and the datastore. You can fix this error by monkey patching the library in your appengine_config.py file.
Add the following code to appengine_config.py:
try:
    import google.appengine.ext.vmruntime.vmstub as vmstub
except ImportError:
    pass
else:
    if isinstance(vmstub.DEFAULT_TIMEOUT, (int, long)):
        # Newer requests libraries do not accept integers as header values.
        # Be sure to convert the header value before sending.
        # See Support Case ID 11235929.
        vmstub.DEFAULT_TIMEOUT = bytes(vmstub.DEFAULT_TIMEOUT)
Note that if you do not have an appengine_config.py file, you can just create it in your base project directory (wherever you put your app.yaml file). This file gets run during App Engine startup.
I am trying to understand the basics of RPC using rpcgen. I followed a basic tutorial and wrote the following myrpc.x file:
program MESSAGEPROG {
    version EVALMESSAGEVERS {
        int EVALMESSAGE(string) = 1;
    } = 1;
} = 0x20000002;
I compile it by running
rpcgen -a -C myrpc.x
In the resulting server.c file, I added a printf statement as below:
printf("Message is: %s,\n", *argp);
Then I run make -f Makefile.myrpc and start the server by running myrpc_server. Now when I run the client myrpc_client, I get the following message printed by the server:
Message is: H���5�
Now my question is: where does the argument "H���5�" come from? It is not the argument I am passing when running the client. Also, can someone explain how I would start writing more complex programs with rpcgen?
The garbage value comes from line 15 of client.c, where an uninitialized variable is used as the argument for your RPC call. My version of rpc shows an error:
call failed: RPC: Can't encode arguments
15 char * evalmessage_1_arg;
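A minimal sketch of the fix in the generated client.c (the names evalmessage_1 and evalmessage_1_arg are those rpcgen -C derives from this .x file; the string literal is just an example value): initialize the argument before making the call.

char *evalmessage_1_arg = "Hello, RPC"; /* was uninitialized before */
int  *result;

result = evalmessage_1(&evalmessage_1_arg, clnt);
if (result == NULL)
    clnt_perror(clnt, "call failed");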
"How do I start running complex programs with rpc?" It' just on you. We cannot say when you need to use rpc. You probably have some reason for what you chose this implementation.
Some use case for rpc is thin client on slow computer, which needs some expensive computation. Client sends data to powerful server, that do the hard work and returns result.
We are using libwebsockets 1.3 in our SSL-enabled WebSocket client program written in C. We are compiling on CentOS 6.5 with OpenSSL 1.0.1 installed, producing a .so library which is later used in Asterisk. The compilation goes fine, but I'm getting this runtime error:
problem creating ssl context 336236705: error:140A90A1:lib(20):func(169):reason(161)
Going through the libwebsockets code, I spotted the part that generates the error message (lib/ssl.c, line 90):
/* basic openssl init */
SSL_library_init();
OpenSSL_add_all_algorithms();
SSL_load_error_strings();

openssl_websocket_private_data_index =
    SSL_get_ex_new_index(0, "libwebsockets", NULL, NULL, NULL);

/*
 * Firefox insists on SSLv23 not SSLv3
 * Konq disables SSLv2 by default now, SSLv23 works
 */
method = (SSL_METHOD *)SSLv23_server_method();
if (!method) {
    error = ERR_get_error();
    lwsl_err("problem creating ssl method %lu: %s\n",
             error, ERR_error_string(error,
             (char *)context->service_buffer));
    return 1;
}
context->ssl_ctx = SSL_CTX_new(method); /* create context */
if (!context->ssl_ctx) {
    error = ERR_get_error();
    lwsl_err("problem creating ssl context %lu: %s\n",
             error, ERR_error_string(error,
             (char *)context->service_buffer));
    return 1;
}
This looks absolutely fine according to examples I've seen on the web. I've been scratching my head, searching, and trying everything for the past couple of days, including reinstalling different versions of OpenSSL, changing the code above, and replacing SSLv23_server_method with other methods, but I can't get it to work. Does anybody know where the problem might be?
Additional information:
Using ERR_print_errors_fp() I get:
3077879544:error:140A90A1:lib(20):func(169):reason(161):ssl_lib.c:1802:
The part of our code that calls libwebsocket_create_context looks like this:
int opts = 0;
const char *interface = NULL;
int listen_port;
memset(&wsInfo, 0, sizeof wsInfo);
listen_port = CONTEXT_PORT_NO_LISTEN;
wsInfo.port = listen_port;
wsInfo.iface = interface;
wsInfo.protocols = protocols;
wsInfo.extensions = libwebsocket_get_internal_extensions();
wsInfo.gid = -1;
wsInfo.uid = -1;
wsInfo.options = opts;
wsContext = libwebsocket_create_context(&wsInfo);
The program is compiled into an .so library and the library is used in our modified version of asterisk (which itself uses openssl as far as I know).
problem creating ssl context 336236705: error:140A90A1:lib(20):func(169):reason(161)
This may have helped:
$ openssl errstr 0x140A90A1
error:140A90A1:SSL routines:SSL_CTX_new:library has no ciphers
"library has no ciphers" is a sure sign the library was not initialized. See OpenSSL's wiki page on intializing the library at Library Initialization.
Since Asterisk is doing really clever things, you should check what else it's doing. In particular, you should ensure it's not using weak/wounded/broken protocols and cipher suites. An example of how to improve a security posture can be found at SSL/TLS Client. The sample ensures TLS 1.0 and above, and uses "strong" cipher suites.
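As a rough illustration of that posture, a sketch against the OpenSSL 1.0.x API (the cipher string is an assumption, not taken from the Asterisk or libwebsockets sources):

SSL_CTX *ctx = SSL_CTX_new(SSLv23_method());
if (ctx) {
    /* refuse the broken SSLv2/SSLv3 protocols, keeping TLS 1.0+ */
    SSL_CTX_set_options(ctx, SSL_OP_NO_SSLv2 | SSL_OP_NO_SSLv3);
    /* allow only reasonably strong cipher suites */
    SSL_CTX_set_cipher_list(ctx, "HIGH:!aNULL:!MD5:!RC4");
}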
I got this error too, using a library that used Boost.Asio.
The lib was compiled against openssl-1.0, while my binary was compiled against openssl-1.1.
Switching my binary to also use openssl-1.0 solved the issue for me.
The problem is that Asterisk overrides all OpenSSL initialization functions, including SSL_library_init() and OpenSSL_add_all_algorithms(), in main/libasteriskssl.c, replacing them with dummy functions that do nothing. Instead, it defines an ast_ssl_init() which does all the initialization and is called once in main() in main/asterisk.c; my code happened to run before that call.
Too long for a comment, but:
First things first, let's eliminate your code. In the libwebsockets distribution, in test/test-server.c there is a test server that works with SSL. Does that work? If so, I'm guessing it's something you are doing in your code (in which case we are going to need some of your code). If not, I'm guessing it's your distribution.
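For instance, something like this (binary name and flag as shipped with the 1.3-era test apps; an assumption, adjust to your build):

./libwebsockets-test-server --ssl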
Next, let's make that error message a bit more informative. Can you introduce ERR_print_errors_fp() to print SSL errors to stderr or similar, and tell us what it says?