BlueWallet: How to find channel info - lightning-network

I set up an LN wallet with BlueWallet, then used a faucet (https://lightningnetworkstores.com/faucet) to get 6 sats.
In BlueWallet I can see I'm connected to lndhub.io.
My question: where is the channel info?
To open a channel, there must be a transaction on the blockchain!
Also: is there a way to see the invoice for the LN transaction?

On their website, BlueWallet's Lightning experience is described as:
You can use our hosted Lightning wallets or connect to your own node.
Channel
So if you don't run your own Lightning node (LND only, I believe), you are using a custodial wallet: your balance is just a database entry backed by the balance of a large Lightning node controlled by the BlueWallet team. Your deposit did not open a channel; it is just an incoming LN payment to the BlueWallet node, which uses their available inbound liquidity.
Invoice
The faucet you linked uses LNURL-withdraw to send money. This means that your wallet decoded the lnurl string or QR code, getting something like https://lightningnetworkstores.com/api/lnurl2?id=C9yU9yPVHW37783hoaDABd. Your wallet then sent a GET request to that URL, which returned some info and a callback URL. Finally, your wallet generated an invoice (through the central BlueWallet node) and sent it to the server (via the callback URL) to be paid.
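Roughly, that flow looks like the sketch below, from the wallet's point of view. This is only a minimal Go sketch: the field names follow the LNURL-withdraw spec (callback, k1, minWithdrawable/maxWithdrawable in millisatoshis, defaultDescription), and makeInvoice is a hypothetical stand-in for "ask the wallet's backend (here the lndhub node) for a BOLT11 invoice".

package main

import (
	"encoding/json"
	"fmt"
	"net/http"
	"net/url"
)

// Fields returned by the first GET, per the LNURL-withdraw spec (assumed here).
type withdrawParams struct {
	Tag                string `json:"tag"`                // "withdrawRequest"
	Callback           string `json:"callback"`           // where the invoice is sent
	K1                 string `json:"k1"`                 // one-time secret echoed back
	MinWithdrawable    int64  `json:"minWithdrawable"`    // millisatoshis
	MaxWithdrawable    int64  `json:"maxWithdrawable"`    // millisatoshis
	DefaultDescription string `json:"defaultDescription"`
}

// Hypothetical stand-in: with BlueWallet this invoice is generated by the lndhub backend.
func makeInvoice(msat int64, memo string) string { return "lnbc..." }

func main() {
	// Step 1: the decoded lnurl points at a URL like the one above; fetch the withdraw parameters.
	resp, err := http.Get("https://lightningnetworkstores.com/api/lnurl2?id=C9yU9yPVHW37783hoaDABd")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	var p withdrawParams
	if err := json.NewDecoder(resp.Body).Decode(&p); err != nil {
		panic(err)
	}

	// Step 2: generate an invoice and hand it to the service via the callback URL,
	// which then pays it over Lightning (no channel open, no on-chain transaction).
	invoice := makeInvoice(p.MaxWithdrawable, p.DefaultDescription)
	cb := fmt.Sprintf("%s?k1=%s&pr=%s", p.Callback, url.QueryEscape(p.K1), url.QueryEscape(invoice))
	if _, err := http.Get(cb); err != nil {
		panic(err)
	}
}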
However, I have no clue whether or where you can find invoices in BlueWallet.

Related

Is this system an optimal solution to sync an app with a server in real time?

Problem
I have an Android and iOS app that looks like a classic social network, and I need to update the UI in real time. Currently I use a classic setup: each client polls a PHP script over HTTP every second. The PHP script queries the database every second for every client and responds, most of the time, that there is no new update. If there is a new update, the PHP script processes it and sends it back to the client app.
There are 3 problems with this approach: (1) a slow user experience (a 1-second delay each time) plus high battery and data usage, (2) the Apache machines are hit every second by incoming HTTP requests, and (3) the database machine is hit every second by the Apache machines (asking whether there are new stored updates in the main database).
I feel that this system could be substantially improved. For problem (1), I know a TCP connection can be kept open ("piped") to the app, but problem (3) remains, because the thread behind the socket still polls the database each second to find out whether there are new stored updates for its member ID.
Solution?
I thought of a system that avoids any activity (client, Apache, and database) when there are no new updates. There would be N Apache servers on N machines behind a load balancer exposed to the Internet. Behind these Apache servers, connected only to the local network, there would be one "central" database and one "update" database dedicated to the update system. The "update" database would store 2 tables (a rough sketch of both rows follows below):
1 table for the mapping between user tokens (and their member ID) and the thread ID and name of the Apache machine currently holding the thread. One user ID may have several connection tokens, but one connection token is associated with exactly one (thread ID, machine name) pair. Each time a user connects to the app, a TCP connection is created and held by one thread (on one Apache machine), and the [thread ID - machine name] pair is stored in that table.
1 table to store the updates themselves. They contain all the information needed to get up-to-date data (either in raw primitive form, like a string or an int, or in "reference" form, telling the recipient TCP threads to compute some parameters "at sending time", for more complex data structures).
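Purely for illustration (my stack is PHP/Apache and these would be SQL tables in practice), the two rows could look roughly like the Go structs below; all field names are assumptions:

package sketch

// ConnectionRow maps a connection token to the Apache worker currently holding
// that client's TCP connection.
type ConnectionRow struct {
	Token       string // one token per connected device
	MemberID    int64  // a member may have several tokens
	ThreadID    int    // thread/PID holding the connection...
	MachineName string // ...on this Apache machine
}

// UpdateRow is one pending update for a member, either as raw primitive params
// or as "references" that the delivering thread resolves at sending time.
type UpdateRow struct {
	ID        int64
	MemberID  int64
	Kind      string // e.g. "user_message"
	RawParams string // primitives (string/int), serialized
	RefParams string // references to be computed at sending time
	CreatedAt int64
}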
The system would work as follows:
(1) A user wants to send a message to another user. The sender's app client sends an HTTP request to the app's API endpoint; the load balancer forwards the request to one of the Apache machines.
(2) The Apache server asks the main database to insert the "user message" row.
(3) The Apache server queries the "update" database to find out whether the recipient has any currently connected device.
(4) If there is at least one connected device, it inserts an "update" row into the "update" database with all the information needed and wakes up every thread associated with the recipient user ID (maybe using C signals?).
(5) All threads associated with the recipient user ID wake up, look in the "update" database for new updates for their user ID, process the parameters (especially any "reference" params that need to be computed), and send the result to the recipient devices over TCP (a single-process sketch of steps (4)-(5) follows below).
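To make steps (4)-(5) concrete, here is a minimal single-process sketch (Go is used only for illustration; the real system would be PHP/Apache). Each connected device gets a buffered wake-up channel instead of a C signal; fetchUpdates and pushToDevice are hypothetical stand-ins for the "update" table query and the TCP write:

package realtime

import "sync"

type hub struct {
	mu   sync.RWMutex
	wake map[int64][]chan struct{} // member ID -> one wake-up channel per connected device
}

func newHub() *hub { return &hub{wake: make(map[int64][]chan struct{})} }

// register is called when a device connects and its delivery thread/goroutine starts.
func (h *hub) register(memberID int64) chan struct{} {
	ch := make(chan struct{}, 1) // buffered so a wake-up sent while delivering is not lost
	h.mu.Lock()
	h.wake[memberID] = append(h.wake[memberID], ch)
	h.mu.Unlock()
	return ch
}

// notify corresponds to step (4): called right after the "update" row was inserted.
func (h *hub) notify(memberID int64) {
	h.mu.RLock()
	defer h.mu.RUnlock()
	for _, ch := range h.wake[memberID] {
		select {
		case ch <- struct{}{}: // wake the delivery goroutine
		default: // already awake; it will pick the new row up on its current pass
		}
	}
}

// deliver corresponds to step (5): one goroutine per connected device.
func (h *hub) deliver(memberID int64, ch chan struct{}) {
	for range ch {
		for _, u := range fetchUpdates(memberID) { // read pending "update" rows for this member
			pushToDevice(memberID, u) // write over the held TCP connection
		}
	}
}

// Hypothetical stand-ins for the "update" database query and the TCP write.
func fetchUpdates(memberID int64) []string       { return nil }
func pushToDevice(memberID int64, update string) {}

Across machines, the wake-up in notify would have to travel over the network (signals, or some queue/pub-sub channel keyed by member ID) rather than an in-process channel.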
So my final question is: is such a system feasible and reliable, and if so, do you think it can be optimal in terms of database and Apache machine performance?
I'm more of a front-end programmer and I'm not used to implementing complex server architectures, so I wanted some opinions before diving into the code, especially in case I missed something in my approach (is storing PIDs reliable? Is it possible for one machine to wake up a thread on another machine over the local network? ...).
PS: I already tried Firebase Cloud Messaging, but the problem is that it only allows a one-dimensional array to be sent with the update params. When dealing with a complex data structure (like a "user message"), once I receive a signal from FCM in my client app, I still need to make an extra HTTP call to my server to retrieve the new "user message" JSON payload. So, good for my Apache and database machines (they are not bothered when there are no new updates), bad for the client app, which has to send additional HTTP requests. Once again, tell me if I missed something here :)
Thanks for reading

Webapp server data storage: Memory vs database

We are making a web application in Go with a MySQL database. Our users are allowed to have only one active client at a time, much like Spotify allows you to listen to music on only one device at a time. To do this I made a map with the user IDs as keys and a reference to their active websocket connection as the value. Based on the websocket ID that the client has to send in the request header, we can identify whether the request comes from their active session.
My question is whether it is good practice to store data (in this case the map of user IDs and websockets) in a global space, or whether it is better to store it in the database.
We don't expect to exceed 10,000 simultaneously active clients; the average will probably be around 1,000.
If you only run one instance of the websocket server, storing it in memory should be sufficient, because if the server goes down or restarts for some reason, all the connections will be lost and all the clients will have to create them again (and hence the list of connections will once again be populated by all the clients who want to use the service).
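For that single-instance case, the in-memory structure can be as simple as the sketch below (a minimal Go sketch; the type and method names are made up for illustration, and the connection is kept behind io.Closer so it works with whatever websocket library you already use):

package session

import (
	"io"
	"sync"
)

type activeSession struct {
	SessionID string    // the websocket ID the client echoes back in request headers
	Conn      io.Closer // the live websocket connection
}

type Registry struct {
	mu     sync.RWMutex
	active map[string]activeSession // user ID -> that user's single active session
}

func NewRegistry() *Registry {
	return &Registry{active: make(map[string]activeSession)}
}

// Activate makes (sessionID, conn) the user's active session, closing any previous
// connection so only one client is active at a time.
func (r *Registry) Activate(userID, sessionID string, conn io.Closer) {
	r.mu.Lock()
	defer r.mu.Unlock()
	if old, ok := r.active[userID]; ok && old.Conn != nil {
		old.Conn.Close()
	}
	r.active[userID] = activeSession{SessionID: sessionID, Conn: conn}
}

// IsActive checks the websocket ID carried by an incoming request against the stored one.
func (r *Registry) IsActive(userID, sessionID string) bool {
	r.mu.RLock()
	defer r.mu.RUnlock()
	s, ok := r.active[userID]
	return ok && s.SessionID == sessionID
}

// Deactivate removes the entry when that session's connection goes away.
func (r *Registry) Deactivate(userID, sessionID string) {
	r.mu.Lock()
	defer r.mu.Unlock()
	if s, ok := r.active[userID]; ok && s.SessionID == sessionID {
		delete(r.active, userID)
	}
}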
However, if you plan on scaling it horizontally so that you have multiple websocket services behind a load balancer, then the connections may need to be stored in a database of some sort. Not because they necessarily need to be more persistent, but because you need to be able to check an incoming request against the connections of all the services.
It is also possible to have a separate service which handles the incoming request and asks all the websocket services whether any of them holds the connection specified in the request. This can be done with a pub/sub queue: every websocket service subscribes to channels for all of its websocket IDs, the service that receives the request publishes the websocket ID, and the websocket services send back replies on a separate channel if they have that connection. You must decide how to handle the case where no one responds (no websocket service has the websocket ID): either the channel does not exist, or you expect the answer within a specific time. Alternatively, you could publish the question on a general topic and expect all the websocket services to reply (yes or no).
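A rough sketch of the websocket-service side of that lookup, assuming Redis (github.com/redis/go-redis/v9) as the pub/sub bus; ownsConnection is a hypothetical stand-in for a check against the instance's in-memory registry, and the channel names are made up:

package lookup

import (
	"context"

	"github.com/redis/go-redis/v9"
)

// Hypothetical stand-in: does this instance hold the connection with that websocket ID?
func ownsConnection(websocketID string) bool { return false }

// runLookupListener is started once per websocket service instance.
func runLookupListener(ctx context.Context, rdb *redis.Client, instanceID string) {
	sub := rdb.Subscribe(ctx, "ws:lookup") // the shared "general topic"
	defer sub.Close()
	for msg := range sub.Channel() {
		websocketID := msg.Payload
		if ownsConnection(websocketID) {
			// Reply on a channel derived from the websocket ID; the asking service
			// subscribes to it before publishing and applies a timeout in case no
			// instance owns the connection.
			rdb.Publish(ctx, "ws:lookup:reply:"+websocketID, instanceID)
		}
	}
}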
Whether you need to scale it, I guess, depends mostly on the underlying server you're running the service on. If I understand it correctly, the websocket service will basically not do anything except keep track of its connections (you should add some ping/pong to discover when connections are lost). Then your limitation should mainly be how many file descriptors your system can handle at once. If that limit is much larger than your expected maximum number of users, then running only one server and storing everything in memory might be an OK solution!
Finally, if you're in the business of keeping a websocket open for all users, why not do all the "other" communication over that websocket connection instead of having them send HTTP requests with their websocket ID? Perhaps HTTP fits your use case better, but it could be something to think about :)

Distinguish sender from receiver with Socket.io

I have built a web application using the JavaScript stack (MongoDB, ExpressJS, AngularJS, NodeJS). Registration works, authentication works, and the chat, which uses Socket.io, works, but I need a way of distinguishing which client is sending and which client is receiving a message, in order to perform further functions with the users' data.
P.S. Since this is a project that I cannot publish, there are no code snippets in my post; I hope that is alright.
The ultimate design will depend on what you are trying to achieve. Is it a "one-to-one chat" service or maybe a "one-to-many broadcast"? Is the service anonymous? How do you want users to find each other? How secure does it need to be?
As a starting point I would assign a unique identifier (UID) to each connection (client). This will allow the server to direct traffic by creating "conversation" pairings or perhaps a list of listeners (subscribers) and writers (publishers).
A connected user could then enter the UID of a second connected user, and your service can post messages back and forth using the UID pairing.
conversation(user123,user0987)
user123 send to user0987
user0987 send to user123
or go bulletin board/chat room style:
create a "board" - just a destination that is a list of all text sent
user123 "joins" board "MiscTalk"
user0987 "joins board "MiscTalk"
each sends text to the server, server adds that text to the board and each client polls the board for changes.
Every socket can send or receive; your program must track "who" is connected on a socket and direct traffic between them.
I think a fine way to handle the clients is to create a Handler class and a Client object, and keep a clientList in the handler; this makes it easier to distinguish the clients. Some months ago I built a simple open-source one-to-one random chat using Socket.io, and here are the handler and the client class.
I hope this example can help you.
1.) Create a global server variable and bind a connections property to it. Whenever authentication succeeds, store the socket_id against the ID (user_id etc.) that you get after decoding your token.
global.server = http.createServer(app);
server.connections = {};
If server.connections.hasOwnProperty(id), use socket emit to send your message; otherwise store the socket_id against your ID and then send your message.
In this way you just need to know the unique token of the target user to send the message.
2.) You can also use the concept of rooms.
If authentication succeeds, use
socket.room = id; socket.join(id);
When sending a message, use client.in(id).emit("YOUR-EVENT-NAME", message)
Note: make your own flow; this is just an overview of what I have implemented in the past. You should consider using Redis for storing socket_ids.

How do you make Salesforce ping my application?

I have data in Salesforce and run another application that works with the same data. The current workflow is that when data is entered into the custom application, it sends the information to Salesforce via SOAP. I want to establish the reverse link; when a value is changed on the Salesforce side, I want Salesforce to ping my application with the changes. Does Salesforce have a feature to do this? Something equivalent to a trigger maybe?
My current solution is mindless iteration through all Salesforce records. This is slow, hits the API limit often, and keeps data stale too long.
You can do this using the Streaming API.
Introduction:
Use the Streaming API to receive notifications for changes to Salesforce data. Use it to push relevant data in real time, instead of having to refresh the screen to get new information.
Protocols used for the connection:
The Bayeux protocol and CometD both use long polling.
Bayeux is a protocol for transporting asynchronous messages, primarily over HTTP.
CometD is a scalable HTTP-based event routing bus that uses an AJAX push technology pattern known as Comet. It implements the Bayeux protocol. The Salesforce servers use version 2.0 of CometD.
How it Works:
Create a PushTopic based on a SOQL query. This defines the channel. (PushTopic is a standard object).
Clients subscribe to the channel.
A record is created, updated, deleted, or undeleted (an event occurs). The changes to that record are evaluated.
If the record changes match the criteria of the PushTopic query, a notification is generated by the server and received by the subscribed clients.
Please check this link: http://www.salesforce.com/developer/docs/api_streaming/
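For what it's worth, since Bayeux is just JSON over HTTP long polling, the subscriber side can be sketched without a CometD library. The following is a rough Go sketch only; the endpoint path (/cometd/36.0), the Bearer-token header, and the PushTopic name "MyPushTopic" are assumptions to check against the Streaming API docs for your org:

package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"io"
	"net/http"
)

const (
	cometdURL   = "https://yourInstance.salesforce.com/cometd/36.0" // assumed endpoint
	accessToken = "YOUR_SESSION_ID_OR_OAUTH_TOKEN"                  // assumed auth
)

// post sends a batch of Bayeux messages and decodes the JSON array that comes back.
func post(messages interface{}) ([]map[string]interface{}, error) {
	body, _ := json.Marshal(messages)
	req, err := http.NewRequest("POST", cometdURL, bytes.NewReader(body))
	if err != nil {
		return nil, err
	}
	req.Header.Set("Content-Type", "application/json")
	req.Header.Set("Authorization", "Bearer "+accessToken)
	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		return nil, err
	}
	defer resp.Body.Close()
	raw, _ := io.ReadAll(resp.Body)
	var out []map[string]interface{}
	return out, json.Unmarshal(raw, &out)
}

func main() {
	// 1. Handshake: the server hands back a clientId for this subscriber.
	hs, err := post([]map[string]interface{}{{
		"channel":                  "/meta/handshake",
		"version":                  "1.0",
		"supportedConnectionTypes": []string{"long-polling"},
	}})
	if err != nil {
		panic(err)
	}
	if len(hs) == 0 || hs[0]["clientId"] == nil {
		panic("handshake failed")
	}
	clientID := hs[0]["clientId"].(string)

	// 2. Subscribe to the channel defined by the PushTopic's SOQL query.
	if _, err := post([]map[string]interface{}{{
		"channel":      "/meta/subscribe",
		"clientId":     clientID,
		"subscription": "/topic/MyPushTopic",
	}}); err != nil {
		panic(err)
	}

	// 3. Long-poll /meta/connect in a loop; notifications for matching record
	//    changes arrive in these responses.
	for {
		msgs, err := post([]map[string]interface{}{{
			"channel":        "/meta/connect",
			"clientId":       clientID,
			"connectionType": "long-polling",
		}})
		if err != nil {
			panic(err)
		}
		for _, m := range msgs {
			fmt.Printf("%v\n", m)
		}
	}
}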

Google Channel API: send a message to all clients

I created a working Google Channel API setup and now I would like to send a message to all clients.
I have two servlets. The first creates the channel and tells the clients the user ID and token. The second one is called by an HTTP POST and should send the message.
To send a message to a client, I use:
channelService.sendMessage(new ChannelMessage(channelUserId, "This is a server message!"));
This sends the message just to one client. How could I send this to all?
Do I have to store every ID that I use to create a channel and send the message for every ID? How can I pass the IDs to the second servlet?
With the Channel API it is not possible to create one channel and then have many subscribers to it. The server creates a unique channel for each individual JavaScript client, so if several clients share the same client ID, the messages will be received by only one of them.
If you want to send the same message to multiple clients, in short, you will have to keep track of the active clients and send the same message to each of them.
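The bookkeeping itself is simple; the sketch below shows the idea in Go rather than the question's Java servlets (the pattern is identical). sendToClient is a hypothetical stand-in for channelService.sendMessage, and the ID list would live wherever both handlers can reach it (the datastore, memcache, ...):

package fanout

import "sync"

// clientIDs holds every ID a channel was created for; in a real app this would be
// persisted (datastore/memcache) so that both request handlers can see it.
var (
	mu        sync.Mutex
	clientIDs = map[string]bool{}
)

// remember is called by the first handler, right after it creates a channel for id.
func remember(id string) {
	mu.Lock()
	defer mu.Unlock()
	clientIDs[id] = true
}

// broadcast is called by the second handler: the same message goes to every known client.
func broadcast(message string) {
	mu.Lock()
	defer mu.Unlock()
	for id := range clientIDs {
		sendToClient(id, message) // stand-in for channelService.sendMessage(new ChannelMessage(id, message))
	}
}

// Hypothetical stand-in for the Channel API send call.
func sendToClient(id, message string) {}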
If that approach sounds scary and messy, consider using PubNub for your push notification messages; there you can easily create one channel and have many subscribers. Making it run on Google App Engine is not hard, since they support almost any platform or device.
I know this is an old question, but I just finished an open source project that uses the Channel API to implement a publish/subscribe model, i.e. you can have multiple users subscribe to a single topic, and then all those subscribers will be notified when anyone publishes a message to the topic. It also has some nice features like automatic message persistence if desired, and "return receipts", where a subscriber can be notified whenever OTHER subscribers receive that message. See https://github.com/adevine/gaewebpubsub#gae-web-pubsub. Licensed under Apache 2.0 license.
