Is it possible to always have a shared object with SSL session ID reuse, but only optionally reuse the connection?
Scenario: we have one long-poll loop which needs connection reuse and SSL session tickets. Additionally, from time to time there are WS calls that send some status updates; they also need the SSL ticket, but we would like to confine the connection to that one call rather than have it linger for the max connection age.
Is this possible? Can I maybe extract the SSL session and put it into another curl object? Or is there any other way?
Or, on those one-off calls, can I set maxage = 0 or keepalive = 0?
BR,
Thank you!
As far as I can tell, the SSL session ID is already reused on a given easy handle. To reuse it across easy handles, you have to call curl_share_setopt() with the CURLSHOPT_SHARE parameter set to CURL_LOCK_DATA_SSL_SESSION.
Relevant documentation:
CURL_LOCK_DATA_SSL_SESSION
SSL session IDs will be shared across the easy handles using this
shared object. This will reduce the time spent in the SSL handshake
when reconnecting to the same server. Note SSL session IDs are reused
within the same easy handle by default. Note this symbol was added in
7.10.3 but was not implemented until 7.23.0.
From:
curl_share_setopt()
As seen in the curl mailing list
Disclaimer: I haven't personally tried this, but it appears to be supported.
Related
We are making a web application in Go with a MySQL database. Our users are allowed to have only one active client at a time, much like Spotify allows you to listen to music on only one device at a time. To do this I made a map with the user IDs as keys and a reference to their active websocket connection as the values. Based on the websocket ID that the client has to send in the header of the request, we can identify whether the request comes from their active session.
My question is whether it's good practice to store data (in this case the map of user IDs and websockets) in a global space, or whether it's better to store it in the database.
We don't expect to exceed 10000 simultaneously active clients. The average is probably going to be around 1000.
If you only run one instance of the websocket server, storing it in memory should be sufficient. If the server for some reason goes down or restarts, all the connections will be lost and all the clients will have to create them again anyway (and hence the list of connections will once again be populated by all the clients who want to use the service).
However, if you plan on scaling it horizontally so that you have multiple websocket services behind a load balancer, then the connections may need to be stored in a database of some sort. Not because the data necessarily needs to be more persistent, but because you need to be able to check the request against the connections of all the services.
It is also possible to have a separate service which handles the incoming request and asks all the websocket services whether any of them holds the connection specified in the request. This could be done with a pub/sub queue: every websocket service subscribes to channels for all its websocket IDs, the service that receives the request publishes the websocket ID, and the websocket services send back replies on a separate channel if they have that connection. You must decide how to handle the case where no one responds (no websocket service has the websocket ID): either the channel does not exist, or you expect the answer within a specific time. Alternatively, you could publish the question on a general topic and expect all the websocket services to reply (yes or no).
As to whether you need to scale, I guess that depends mostly on the underlying server you're running the service on. If I understand it correctly, the websocket service will basically not do anything except keep track of its connections (you should add some ping/pong to discover lost connections). Your limitation should then mainly be how many file descriptors your system can handle at once. If that limit is much larger than your expected maximum number of users, then running only one server and storing everything in memory might be an OK solution!
Finally, if you're in the business of having a websocket open for all users, why not do all the "other" communication over that websocket connection instead of having them send HTTP requests with their websocket ID? Perhaps HTTP fits your use case better, but it could be something to think about :)
I use libcurl share+easy interface and I need to "fix up" some cookie info that is set by a webserver.
In my case I use multiple threads, and I would like to know at what point a received cookie is "shared" with all the other curl handles, and when the right time is to fix the received cookie data:
right when I receive it from the remote server (but at this point I'm not sure whether the corrupt cookie data might be picked up by some other thread that was making a new HTTP request at the same time), or
when making new requests, to ensure that I don't end up using the corrupt cookie in new HTTP requests.
Here's my code flow. I call curl_easy_perform. When response containing Set-Cookie comes in, libcurl at first parses that cookie and stores it in its internal store (which gets shared in case of curl share interface).
Then curl_easy_perform returns, and now I try to check whether the server sent the specific cookie that I need to "fix up". The only way to check that cookie is to use CURLINFO_COOKIELIST.
My question is: between the time curl parses the incoming Set-Cookie header (with invalid cookie data) and the time I inspect the cookies using CURLINFO_COOKIELIST, the invalid cookie might be picked up by another thread. That means that to avoid the issue, I see no option other than inspecting the cookies on each new request, in case another thread has updated the cookies with invalid data.
Even in that case I may still end up using invalid cookie data. In other words, I don't see a proper solution to this problem.
What's the right approach?
Typically when using libcurl in multiple threads, you use one handle in each thread and they don't share anything. Then it doesn't matter when you modify cookies since each handle (and thus thread) operates independently.
If you make the threads share cookie state, as with the share interface, then you have locking mutexes set up that protect the data objects from being accessed by more than one thread at a time anyway, so you can just go ahead and update the cookies using the correct API whenever you like.
If you're using the multi interface, it performs parallel transfers in the same thread, so you can update cookies whenever you like without risking any parallelism problems.
I have built a web application using a JavaScript stack (MongoDB, ExpressJS, AngularJS, NodeJS). The registration works, the authentication works, and the chat, which uses Socket.io, works, but I need a way of distinguishing which client is sending and which client is receiving the message, in order to perform further functions with the user's data.
P.S. Since this is a project that I cannot publish, there are no code snippets in my post; I hope that is alright.
The ultimate design will depend on what you are trying to achieve. Is it a one-to-one chat service, or maybe a one-to-many broadcast? Is the service anonymous? How do you want users to find each other? How secure does it need to be?
As a starting point I would assign a unique identifier (UID) to each connection (client). This will allow the server to direct traffic by creating "conversation" pairings or perhaps a list of listeners (subscribers) and writers (publishers).
A connected user could then enter the UID of a second connected user and your service can post messages back and forth using the uid pairing.
conversation(user123,user0987)
user123 send to user0987
user0987 send to user123
or go bulletin board/chat room style:
create a "board" - just a destination that is a list of all text sent
user123 "joins" board "MiscTalk"
user0987 "joins" board "MiscTalk"
each sends text to the server, server adds that text to the board and each client polls the board for changes.
Every socket can send or receive; your program must track "who" is connected on a socket and direct traffic between them.
I think a fine way to handle the clients is to create a Handler class and a Client object, and keep a clientList in the handler; this way it is easier to distinguish the clients. Some months ago I built a simple open-source one-to-one random chat using socket.io; here are the handler and the client class.
I hope this example can help you.
1.) Create a global server variable and bind a connections property to it; whenever the authentication succeeds, store the socket_id against the id (user_id etc.) that you get after decoding your token.
global.server=http.createServer(app);
server.connections={};
If server.connections.hasOwnProperty(id), use a socket emit to send your message;
else store the socket_id against your id and then send your message.
In this way you just need to know the unique token of the target user to send the message.
2.) You can also use the concept of room
If authentication is true use
socket.room=id ; socket.join(id)
when sending message use client.in(id).emit("YOUR-EVENT-NAME",message)
Note: make your own flow; this is just an overview of what I have implemented in the past. You should consider using Redis for storing the socket_ids.
I am stuck trying to retrieve the key_block generated after the SSL handshake. I implemented simple Client.cpp/Server.cpp programs that work well for exchanging encrypted data.
I would like to retrieve the key_block because I want to re-use it and perform my own encryption in another communication, but without doing another handshake.
I tried :
ssl->s3->tmp.key_block
but it retrieves an empty string (?!) and of course
ssl->s3->tmp.key_block_length
returns 0.
I call these methods just after SSL_accept(ssl) succeeds.
Once I've been able to grab this key_block, I'll need to find the encryption function used by SSL_write(...).
I hope this makes sense; the OpenSSL docs seem encrypted to my eyes.. =)
XY problem. You don't need this. Just open another SSL connection to the same target and it should re-use the same SSL session and therefore the same session master secret. Maybe even the same session key, but what do you care, as long as it's secure? You seem to be just trying to avoid a second full SSL handshake, but you can do that by suitable configuration at the client.
I would like to know, when you call several GET routes in AngularJS via a REST API, whether a new connection is established for each GET, or whether it's just one connection with several "sub-connections" (threads).
Yes, every Ajax call opens a new connection. A browser can't just run multiple threads under one connection for Ajax purposes, because the requests can go to different sources. With HTTP/2, however, your requests have the potential to travel along one pipe. You could also open a websocket and funnel traffic down a single pipe, but that isn't Ajax.