I am working on a game written in C using SDL. Given that it already uses SDL, SDL_image, and SDL_ttf, I decided to add SDL_mixer and SDL_net to my engine. Getting SDL_mixer set up and working was very easy, but I am having a lot of trouble with SDL_net.
To test I created a very simple application with the following rules:
Run without arguments: act as a TCP server on port 9999
Run with an argument: try to connect to the server at the given IP address on port 9999
Here are some of the key lines of the program (I'm not going to post my whole event-driven SDL engine because it's too long):
char *host = NULL;
if (argc > 1) host = argv[1];
and...
IPaddress ip;
TCPsocket server = NULL;
TCPsocket conn = NULL;

if (host) { /* client mode */
    if (SDLNet_ResolveHost(&ip, host, port) < 0)
        return NULL; // this is actually inside an engine method
    if (!(conn = SDLNet_TCP_Open(&ip)))
        return NULL;
} else { /* server mode */
    if (SDLNet_ResolveHost(&ip, NULL, port) < 0)
        return NULL;
    if (!(server = SDLNet_TCP_Open(&ip)))
        return NULL;
}
and... inside the event loop
if (server) {
    if (!conn)
        conn = SDLNet_TCP_Accept(server);
}
if (conn) {
    void *buf = malloc(size); // server, conn, size are actually members of a weird struct
    while (SDLNet_TCP_Recv(conn, buf, size) > 0)
        onReceive(buf); // my engine uses a callback system to handle things
    free(buf);
}
The program seems to start up just fine. However, when I run it in client mode from my laptop, trying to connect to my home computer (which is on a different IP), the call to SDLNet_TCP_Open blocks for a while (5-10 seconds) and then returns NULL. Can anybody see what I did wrong? Should I post more of the code? Let me know.
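Side note on the receive loop above: SDLNet_TCP_Recv blocks, so calling it inside the event loop can stall the whole engine. Below is a minimal sketch of a non-blocking poll using SDL_net socket sets; conn, buf, size, and onReceive are carried over from the snippets above, and the cleanup on disconnect is an assumption about how the engine would react:

/* Sketch: poll the connection instead of blocking the event loop.
 * conn, buf, size and onReceive() are the names used above. */
SDLNet_SocketSet set = SDLNet_AllocSocketSet(1);
SDLNet_TCP_AddSocket(set, conn);

/* inside the event loop: */
if (SDLNet_CheckSockets(set, 0) > 0 && SDLNet_SocketReady(conn)) {
    int received = SDLNet_TCP_Recv(conn, buf, size);
    if (received > 0) {
        onReceive(buf);
    } else { /* 0 or negative: peer closed the connection or an error occurred */
        SDLNet_TCP_DelSocket(set, conn);
        SDLNet_TCP_Close(conn);
        conn = NULL;
    }
}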
I'm using an mbed-enabled board for development, and I need to run an Ethernet application on it.
I tried to create a connection using the following code:
network = new EthernetInterface();
network->connect();
// Show the network address
const char *ip = network->get_ip_address();
printf("IP address is: %s\n", ip ? ip : "No IP");
Normally it should work, but it fails in the LWIPInterface class's bringup API at osSemaphoreAcquire, returning a timeout error:
if (!netif_is_link_up(&netif)) {
    if (blocking) {
        if (osSemaphoreAcquire(linked, 15000) != osOK) {
            if (ppp) {
                (void) ppp_lwip_disconnect(hw);
            }
            return NSAPI_ERROR_NO_CONNECTION;
        }
    }
}
Any reason why I might be getting a timeout from osSemaphoreAcquire?
I tried increasing the timeout too, but in vain.
If someone could help me with it, that would be appreciated.
Thanks in advance.
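For reference, that semaphore is released when the driver reports link-up, so a timeout there usually points at the PHY never seeing a link (cable, board wiring, or driver) rather than at DHCP. A minimal sketch of checking the return code explicitly, assuming the mbed OS 5-style EthernetInterface API (NSAPI_ERROR_OK and NSAPI_ERROR_NO_CONNECTION come from nsapi_types.h):

#include "mbed.h"
#include "EthernetInterface.h"

int main()
{
    EthernetInterface net;
    // connect() blocks until link-up plus DHCP, or fails with a timeout;
    // NSAPI_ERROR_NO_CONNECTION here matches the bringup path quoted above
    nsapi_error_t err = net.connect();
    if (err != NSAPI_ERROR_OK) {
        printf("connect failed: %d\n", err);
        return 1;
    }
    const char *ip = net.get_ip_address();
    printf("IP address is: %s\n", ip ? ip : "No IP");
    return 0;
}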
On the Eclipse Paho MQTT website, the developers provide a client example (http://www.eclipse.org/paho/files/mqttdoc/MQTTClient/html/pubsync.html) that does the following:
Create a client object with the specified parameters
Connect the client with the specified connection options
Publish an MQTT message
Disconnect the client
Destroy the client object
This works well if all you want is to publish one single message.
In my code, I have a function that contains pretty much the same code as in the aforementioned example; however, the function is called repeatedly from main(), as I need to publish a large number of messages one after another. The problem is that if I use the code exactly as in the example, every time my function is called a new connection is created and shortly afterwards destroyed. This happens over and over for as long as the function keeps being called, causing huge overhead.
Is there a way to check whether a client object has already been created, and if so, don't do it again but use the existing one?
In my understanding, the MQTTClient_isConnected() function is supposed to do that: https://www.eclipse.org/paho/files/mqttdoc/MQTTClient/html/_m_q_t_t_client_8h.html#ad9e40bdb7149ee3e5d075db7f51a735f
But if I try it like this, I get a Segmentation fault:
if (!MQTTClient_isConnected(client)) {
    MQTTClient_create(&client, mqtt.addr, CLIENT_ID, MQTTCLIENT_PERSISTENCE_NONE, NULL);
    conn_opts.keepAliveInterval = 20;
    conn_opts.cleansession = 1;
    conn_opts.username = TOKEN;
    if (MQTTClient_connect(client, &conn_opts) != MQTTCLIENT_SUCCESS) {
        printf("\n==> Connection to MQTT Broker failed.\n");
        MQTTClient_destroy(&client);
        exit(EXIT_FAILURE);
    }
}
[EDIT]
Here is a simple demo that better illustrates what I'm trying to accomplish:
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <MQTTClient.h>

MQTTClient client;

void publish_MQTT() {
    MQTTClient_connectOptions conn_opts = MQTTClient_connectOptions_initializer;
    MQTTClient_message pubmsg = MQTTClient_message_initializer;
    MQTTClient_deliveryToken token;
    char *payload = (char *)calloc(1024, sizeof(char));
    strcpy(payload, "hello");
    printf("DEBUG_BEFORE >> MQTTClient_isConnected(client) = %d\n", MQTTClient_isConnected(client)); // DEBUG OUTPUT
    if (!MQTTClient_isConnected(client)) {
        MQTTClient_create(&client, addr, CLIENT_ID, MQTTCLIENT_PERSISTENCE_NONE, NULL);
        conn_opts.keepAliveInterval = 20;
        conn_opts.cleansession = 1;
        conn_opts.username = TOKEN;
        if (MQTTClient_connect(client, &conn_opts) != MQTTCLIENT_SUCCESS) {
            fprintf(stderr, RED "\n==> Connection to MQTT Broker failed.\n" RESET_CL);
            MQTTClient_destroy(&client);
            free(payload);
            exit(EXIT_FAILURE);
        }
    }
    printf("DEBUG_AFTER >> MQTTClient_isConnected(client) = %d\n", MQTTClient_isConnected(client)); // DEBUG OUTPUT
    pubmsg.payload = payload;
    pubmsg.payloadlen = strlen(payload);
    pubmsg.qos = QOS;
    pubmsg.retained = 0;
    MQTTClient_publishMessage(client, TOPIC, &pubmsg, &token);
    MQTTClient_waitForCompletion(client, token, TIMEOUT);
    //MQTTClient_disconnect(client, 10000);
    //MQTTClient_destroy(&client);
    free(payload);
}

int main(void) {
    int i;
    for (i = 0; i < 1000; i++) {
        publish_MQTT();
    }
    return 0;
}
Please ignore the fact that the addr parameter is never defined here (in my real code it is), or that it is pretty pointless to hard-code a message in the publish_MQTT() function (in my real code, data is passed from main() to that function).
I figured it out: apparently, there is absolutely nothing wrong with the example code in the original posting.
It turns out I was appending the port of the MQTT server to the addr parameter again and again (in a section of the code not shown here, as I didn't suspect the source of the error to be there), every time the publish_MQTT() function was called. This made the addr string grow until it eventually exceeded its allocated length, thus causing the segfault.
This way everything works just as intended:
printf("\nADDR = %s\n\n", addr); // DEBUG OUTPUT
if (!MQTTClient_isConnected(client)) {
strcat(strcat(addr, ":"), pt); // This line needed to be placed here, not before that if block
MQTTClient_create(&client, addr, CLIENT_ID, MQTTCLIENT_PERSISTENCE_NONE, NULL);
conn_opts.keepAliveInterval = 20;
conn_opts.cleansession = 1;
conn_opts.username = TOKEN;
if (MQTTClient_connect(client, &conn_opts) != MQTTCLIENT_SUCCESS) {
printf("\n==> Connection to MQTT Broker failed.\n");
MQTTClient_destroy(&client);
free(payload);
exit(EXIT_FAILURE);
}
}
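As a side note, building the address once with a bounded call avoids this whole class of bug. A minimal sketch, where host and pt are hypothetical stand-ins for the host string and port string used in my real code, and the buffer size is an assumption:

/* Sketch: build "host:port" exactly once into a bounded buffer. */
char addr[256];
int n = snprintf(addr, sizeof addr, "%s:%s", host, pt);
if (n < 0 || (size_t)n >= sizeof addr) {
    fprintf(stderr, "broker address too long\n");
    exit(EXIT_FAILURE);
}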
Probably you are setting the "clean session" flag, which means:
"If the ClientId represents a Client already connected to the Server then the Server MUST disconnect the existing Client [MQTT-3.1.4-2]." (from the MQTT standard). So your client (the existing one) is disconnected.
The code from the example seems reasonable. It looks like there is a problem with passing a function argument, for example if the function expects an address and you are passing the object itself.
More from the standard:
"3.2.2.2 Session Present
Position: bit 0 of the Connect Acknowledge Flags.
If the Server accepts a connection with CleanSession set to 1, the Server MUST set Session Present to 0 in the CONNACK packet in addition to setting a zero return code in the CONNACK packet [MQTT-3.2.2-1].
If the Server accepts a connection with CleanSession set to 0, the value set in Session Present depends on whether the Server already has stored Session state for the supplied client ID. If the Server has stored Session state, it MUST set Session Present to 1 in the CONNACK packet [MQTT-3.2.2-2]. If the Server does not have stored Session state, it MUST set Session Present to 0 in the CONNACK packet. This is in addition to setting a zero return code in the CONNACK packet".
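In practical terms: if two instances connect with the same client ID, each new connect kicks the previous one off per [MQTT-3.1.4-2]. A hypothetical way to rule that out is to make the client ID unique per process; the pid suffix below is purely an illustration, not part of the question's code:

#include <stdio.h>
#include <unistd.h>

/* Hypothetical: a per-process client ID, so the broker never sees two
 * clients with the same ID and disconnects the existing one. */
char client_id[64];
snprintf(client_id, sizeof client_id, "client-%ld", (long)getpid());
/* then: MQTTClient_create(&client, addr, client_id, ...); */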
I am having a strange connection failure with OpenSSL's SSL_connect in my SSL client.
We are trying to establish an SSL connection with a server. We notice that SSL_connect is failing with the error code SSL_ERROR_SYSCALL.
To dig further, we tried printing strerror(errno), which returns "Success" (0).
I am just trying to understand what the exact cause of this issue might be.
I have added the code snippet for the SSL init and connect below, and would appreciate some guidance:
int setupSSL(int server)
{
    int retVal = 0;
    int errorStatus = 0;
    int retryMaxCount = 6;
    static int sslInitContext = 0;

    if (sslInitContext == 0)
    {
        if (InitCTX() != 0)
        {
            return -1;
        }
        else
        {
            sslInitContext = 1;
        }
    }

    retVal = SSL_set_fd(ssl, server); /* attach the socket descriptor */
    if (retVal != 1)
    {
        sprintf(debugBuf, "SYSTEM:SOCKET:Could not set ssl FD: %d %s\n", retVal, strerror(retVal));
        debug_log(debugBuf, TRACE_LOG);
        CloseSocket(server);
        return -1;
    }

    do
    {
        retVal = SSL_connect(ssl); /* perform the connection */
        errorStatus = SSL_get_error(ssl, retVal);
        switch (errorStatus)
        {
        case SSL_ERROR_NONE:
            retVal = 0;
            break;
        case SSL_ERROR_WANT_READ:
        case SSL_ERROR_WANT_WRITE:
            retVal = 1;
            break;
        default:
            sprintf(debugBuf, "SYSTEM:SSL_SOCKET:Could not build SSL session (Other error): %d %s\n", errorStatus, strerror(errno));
            debug_log(debugBuf, TRACE_LOG);
            CloseSocket(server);
            return -1;
        }
        sprintf(debugBuf, "SYSTEM:SSL_SOCKET:SSL CONNECTION Under PROGRESS: %d with remaining retries %d\n", errorStatus, retryMaxCount);
        debug_log(debugBuf, TRACE_LOG);

        if (retVal)
        {
            struct timeval tv;
            fd_set sockReadSet;

            tv.tv_sec = 2;
            tv.tv_usec = 0;
            FD_ZERO(&sockReadSet);
            FD_SET(server, &sockReadSet);
            retVal = select(server + 1, &sockReadSet, NULL, NULL, &tv);
            if (retVal >= 1)
            {
                retVal = 1;
            }
            else
            {
                retVal = -1;
            }
            retryMaxCount--;
            if (retryMaxCount <= 0)
                break;
        }
    } while (!SSL_is_init_finished(ssl) && retVal == 1);

    cert = SSL_get_peer_certificate(ssl);
    if (cert == NULL)
    {
        debug_log("SYSTEM:SSL_SOCKET:Unable to retrieve server certificate\n", TRACE_LOG);
        CloseSocket(server);
        return -1;
    }
    if (SSL_get_verify_result(ssl) != X509_V_OK)
    {
        debug_log("SYSTEM:SSL_SOCKET:Certificate doesn't verify\n", TRACE_LOG);
        CloseSocket(server);
        return -1;
    }
    /*X509_NAME_get_text_by_NID(X509_get_subject_name(cert), NID_commonName, peer_CN, 256);
    if (strcasecmp(peer_CN, cnName)) {
        debug_log("SYSTEM:SSL_SOCKET:Common name doesn't match host name\n", TRACE_LOG);
        return -1;
    }*/
    return 0;
    // LoadCertificates(ctx, CertFile, KeyFile);
}
int InitCTX(void)
{
    int errorStatus = 0;
    static int isSslInit = 1;

    if (isSslInit)
    {
        OpenSSL_add_all_algorithms(); /* Load cryptos, et al. */
        SSL_load_error_strings();     /* Bring in and register error messages */
        if (SSL_library_init() < 0)
        {
            debug_log("SYSTEM:SSL_SOCKET:Could not initialize the OpenSSL library\n", TRACE_LOG);
            return -1;
        }
        method = TLSv1_client_method();
        isSslInit = 0;
    }

    ctx = SSL_CTX_new(method); /* Create new context */
    if (ctx == NULL)
    {
        debug_log("SYSTEM:SSL_SOCKET:Unable to create a new SSL context structure\n", TRACE_LOG);
        //sprintf(debugBuf, "SYSTEM:SSL_SOCKET:Unable to create a new SSL context structure: %d %s\n", errorStatus, strerror(retVal));
        //debug_log(debugBuf, TRACE_LOG);
        return -1;
    }
    SSL_CTX_set_options(ctx, SSL_OP_NO_SSLv2);
    if (SSL_CTX_use_certificate_file(ctx, CertFile, SSL_FILETYPE_PEM) <= 0)
    {
        SSL_CTX_free(ctx);
        ctx = NULL;
        debug_log("SYSTEM:SSL_SOCKET:Error setting the certificate file.\n", TRACE_LOG);
        return -1;
    }
    /* Set the list of trusted CAs based on the file and/or directory provided */
    if (SSL_CTX_load_verify_locations(ctx, CertFile, NULL) < 1)
    {
        SSL_CTX_free(ctx);
        ctx = NULL;
        debug_log("SYSTEM:SSL_SOCKET:Error setting verify location.\n", TRACE_LOG);
        return -1;
    }
    SSL_CTX_set_verify(ctx, SSL_VERIFY_PEER, NULL);
    SSL_CTX_set_timeout(ctx, 300);
    ssl = SSL_new(ctx); /* create new SSL connection state */
    if (ssl == NULL)
    {
        sprintf(debugBuf, "SYSTEM:SOCKET:SSL:Unable to create SSL_new context\n");
        debug_log(debugBuf, DEBUG_LOG);
        if (ctx != NULL)
            SSL_CTX_free(ctx);
        return -1;
    }
    return 0;
}
Also, is it advisable to maintain the SSL context across new connections, or should we destroy and re-initialize the SSL context each time?
Added PCAP info:
https://drive.google.com/file/d/0B60pejPe6yiSUk1MMmI1cERMaFU/view?usp=sharing
client: 198.168.51.10 (198.168.51.10), Server: 192.168.96.7 (192.168.96.7)
We are trying to establish an SSL connection with a server. We notice that SSL_connect is failing with the error code SSL_ERROR_SYSCALL.
This is usually the case if the other side is simply closing the connection. Microsoft SChannel does this on many kinds of handshake problems instead of sending a TLS alert back. This can happen for problems like an invalid protocol version or no common ciphers, etc. It can also happen if you try to do a TLS handshake with a server which does not speak TLS at all on this port. Look at the logs on the server side for problems.
Of course it can also be something different, so you should check errno to get more details about the problem. It might also help to do a packet capture to check what's going on on the wire. Best would be to capture on both the client and server side, to make sure that no middlebox like a firewall is tampering with the connection.
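A minimal sketch of that errno/error-queue check after a failed handshake (using the same ssl object as in the question; ERR_get_error and ERR_error_string_n are standard OpenSSL calls):

#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <openssl/err.h>

/* Sketch: with SSL_ERROR_SYSCALL, an empty error queue and errno == 0
 * usually mean the peer simply closed the connection (EOF) mid-handshake. */
int rc = SSL_connect(ssl);
if (rc <= 0) {
    fprintf(stderr, "SSL_get_error=%d errno=%d (%s)\n",
            SSL_get_error(ssl, rc), errno, strerror(errno));
    unsigned long e;
    while ((e = ERR_get_error()) != 0) {
        char buf[256];
        ERR_error_string_n(e, buf, sizeof buf);
        fprintf(stderr, "  openssl: %s\n", buf);
    }
}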
Also, is it advisable to maintain the SSL context across new connections, or should we destroy and re-initialize the SSL context each time?
The context is just a collection of settings, certificates, etc., and is not affected by the SSL connection itself. You can reuse it for other connections later, or for several connections at the same time.
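A minimal sketch of that pattern: one long-lived SSL_CTX (as built once in InitCTX above) and one short-lived SSL object per connection. sockfd is a placeholder for an already-connected TCP socket:

/* Sketch: reuse one context for many connections.
 * g_ctx stands in for the ctx created once in InitCTX(). */
int connect_once(SSL_CTX *g_ctx, int sockfd)
{
    SSL *ssl = SSL_new(g_ctx);  /* per-connection state */
    if (ssl == NULL)
        return -1;
    if (SSL_set_fd(ssl, sockfd) != 1) {
        SSL_free(ssl);
        return -1;
    }
    int rc = SSL_connect(ssl);
    /* ... use the connection, then: */
    SSL_shutdown(ssl);
    SSL_free(ssl);              /* frees this connection only, not the ctx */
    return rc == 1 ? 0 : -1;
}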
EDIT, after the packet capture was attached:
There are multiple TCP connections in the file between client and server, and only inside a single one does the client try to initiate a handshake, i.e. only there can the ClientHello be seen. The server closes that connection. A few things are interesting:
The TCP handshake takes very long. The server replies to the SYN with the SYN+ACK only after 1.6 seconds, and the other replies take 800 ms, which is very long given that both addresses are in a private network (192.168.0.0). This might indicate a slow link or VPN (this is about the latency of a satellite link), some middlebox (firewall) slowing everything down, or a really slow server.
The client sends a TLS 1.0 request. It might be that the server only accepts TLS 1.1+. Some TLS stacks (see above) simply close the connection on such errors instead of sending an "unsupported protocol" alert. But given that the server is slow, it might also be old and support only SSL 3.0 or lower.
The client does not use the SNI extension. More and more servers require it and may simply close the connection if they don't get it.
It is hard to know what is really going on without access to the server. I recommend looking for error messages on the server side and using tools like SSLyze to check the server's requirements, i.e. supported TLS versions, ciphers, etc.
Apart from that, the client offers dangerously weak ciphers, like various EXPORT ciphers. This looks to me like the defaults of a considerably old OpenSSL version.
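The last two points are straightforward to address on the client. A minimal sketch (the hostname and cipher string are placeholders; SSL_set_tlsext_host_name and SSL_CTX_set_cipher_list are standard OpenSSL calls):

/* Sketch: send SNI and drop weak/EXPORT ciphers.
 * "server.example.com" and the cipher string are placeholder values. */
SSL_CTX_set_cipher_list(ctx, "HIGH:!aNULL:!eNULL:!EXPORT");
SSL *ssl = SSL_new(ctx);
SSL_set_tlsext_host_name(ssl, "server.example.com"); /* SNI */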
I am looking through the C websocket library libwebsockets' client-side example.
But I don't understand what the purpose of the example is.
Here is the example. It makes two connections (wsi_dumb and wsi_mirror in the code), which I think are the same, and I don't know what the second connection's purpose is.
Using the first connection (wsi_dumb in the code), it seems to wait for a request from the server with libwebsocket_service(), and then... what happens with the second connection (wsi_mirror in the code)?
And below is the relevant part of the code.
wsi_dumb = libwebsocket_client_connect(context, address, port, use_ssl,
        "/", argv[optind], argv[optind],
        protocols[PROTOCOL_DUMB_INCREMENT].name, ietf_version);

/*
 * sit there servicing the websocket context to handle incoming
 * packets, and drawing random circles on the mirror protocol websocket
 */
n = 0;
while (n >= 0 && !was_closed) {
    n = libwebsocket_service(context, 1000);

    if (wsi_mirror == NULL) {
        /* create a client websocket using mirror protocol */
        wsi_mirror = libwebsocket_client_connect(context, address, port,
                use_ssl, "/", argv[optind], argv[optind],
                protocols[PROTOCOL_LWS_MIRROR].name, ietf_version);
        mirror_lifetime = 10 + (random() & 1023);
        fprintf(stderr, "opened mirror connection with %d lifetime\n", mirror_lifetime);
    } else {
        mirror_lifetime--;
        if (mirror_lifetime == 0) {
            fprintf(stderr, "closing mirror session\n");
            libwebsocket_close_and_free_session(context,
                    wsi_mirror, LWS_CLOSE_STATUS_GOINGAWAY);
            /*
             * wsi_mirror will get set to NULL in
             * callback when close completes
             */
        }
    }
}
I might be mixing it up, but there is an example in libwebsockets where you just open a second browser (window or tab) and then see all the lines and circles you draw in the first browser mirrored and sent to the second browser.
I am using the QMI SDK to start a data session for the Sierra Wireless card MC7354 with a Telus SIM card. So far I can detect the device and the SIM card, e.g. getting the device info and IMSI number; however, I am having trouble starting the data session. I followed the instructions in the QMI SDK documents and wrote the following code:
// set the default profile
ULONG rc3 = SetDefaultProfile(0, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL);
fprintf(stderr, "SetProfile - Return Code: %lu\n", rc3);

// start the session
ULONG technology = 1;
ULONG profile_idx = 1;
struct ssdatasession_params session;
session.action = 1;
session.pTechnology = &technology;
session.pProfileId3GPP = &profile_idx;
session.pProfileId3GPP2 = NULL;
session.ipfamily = 4;
ULONG rc4 = SLQSStartStopDataSession(&session);
fprintf(stderr, "Start Session - Return Code: %lu\n", rc4);
SetDefaultProfile is working fine, as it returns the success code, but the SLQSStartStopDataSession method always gives me return code 1026, which means:
Requested operation would have no effect
Does anyone know where I made a mistake and how I should modify the code? What does this return code mean?
A "No Effect" error in WDS Start Network (the underlying command sent when you use SLQSStartStopDataSession()) actually means that the device is already connected. You likely have configured an automatic connection setup in the modem.