BlueZ, multiple GATT servers running on the same computer - c

With other socket applications you can't open a port that is already in use, yet bluetoothd seems to accept several listening GATT servers running in parallel. How is that possible?
I am trying to set up a GATT server using BlueZ 5.35 on a Raspberry Pi running Jessie. I have written an application that starts the GATT server much like the example btgatt-server.c, using an L2CAP socket. I have a custom characteristic that a client application can connect to and use. I have also enabled advertising using HCI commands (advertising is enabled just after the listen() call on the socket).
I have set the application to auto-start from rc.local. My problem is that after a reboot I sometimes don't see my own characteristics, but instead get a completely different list of services/characteristics. If I don't start my own application and only enable advertising (sudo hciconfig hci0 leadv), I see the same list, so a GATT server seems to be running by default.
What mechanism in BlueZ decides whether my services/characteristics or the other ones (loaded by default plugins, I guess) are visible? They are never combined and visible at the same time, and I see no error messages during my application's startup even when the client can't see my characteristics and accept() never returns a connection. How can I be sure my characteristics are always visible?
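For reference, a minimal sketch of the listening side, modeled on btgatt-server.c, is below. It leaves out the GATT database setup, and whether bind() fails here when bluetoothd already owns the ATT channel depends on the kernel and BlueZ version, so treat that as an assumption to verify. Compile with -lbluetooth.

    /* Sketch: listen for an incoming LE connection on the fixed ATT
     * channel (CID 4), as btgatt-server.c does. GATT database setup
     * and advertising are omitted. */
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>
    #include <sys/socket.h>
    #include <bluetooth/bluetooth.h>
    #include <bluetooth/l2cap.h>

    #define ATT_CID 4

    int main(void)
    {
        struct sockaddr_l2 addr;
        int sk, client;

        sk = socket(PF_BLUETOOTH, SOCK_SEQPACKET, BTPROTO_L2CAP);
        if (sk < 0) { perror("socket"); return 1; }

        memset(&addr, 0, sizeof(addr));
        addr.l2_family = AF_BLUETOOTH;
        bacpy(&addr.l2_bdaddr, BDADDR_ANY);
        addr.l2_cid = htobs(ATT_CID);
        addr.l2_bdaddr_type = BDADDR_LE_PUBLIC;

        /* if another process (e.g. bluetoothd) already serves the ATT
         * channel, this bind may fail -- worth logging explicitly */
        if (bind(sk, (struct sockaddr *) &addr, sizeof(addr)) < 0) {
            perror("bind");
            return 1;
        }

        if (listen(sk, 1) < 0) { perror("listen"); return 1; }
        /* advertising would be enabled here via HCI commands */

        client = accept(sk, NULL, NULL);   /* blocks until a central connects */
        if (client < 0)
            perror("accept");
        else
            close(client);
        close(sk);
        return 0;
    }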

Related

How to wait for Windows TCP Network at startup?

I know this should be obvious, but I have found far too many DIFFERENT answers and the ones I've tried all fail (sometimes or all the time), so...
We are working on a service and some applications that run at startup on a Windows 10 computer that performs an automatic login. The service and applications require Windows sockets for TCP, UDP and multicast. Most of the time, our programs fail because they get errors about the network not being ready and such. Currently, we work around this by adding a dumb, fixed-length delay before attempting to start, but we would prefer to start as soon as the network is ready to be used.
Our most recent attempt was to wait on the LanmanWorkstation (Workstation) service, but that generally reports it is running/ready before socket functions will succeed. I have also seen suggestions to use LanmanServer (Server) or Netman (Network Connections) or maybe even Tcpip (TCP/IP Protocol Driver), but I cannot find anything definitive. One would think this is a common requirement, so why does Microsoft make the info so difficult to find?
Ahem. Does anyone know a definitive method for a service or application to wait until Winsock functions will succeed before using them? Short of a spin-wait on a failing Winsock function, of course!
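For what it's worth, one heuristic (an assumption on my part, not a documented readiness signal) is to poll the adapter list until some non-loopback interface reports IfOperStatusUp, and only then attempt the real connection, still with retries:

    /* Sketch: wait until at least one non-loopback adapter is
     * operational. Link with ws2_32.lib and iphlpapi.lib. This proves
     * link-level readiness only, not that the internet is reachable. */
    #include <winsock2.h>
    #include <iphlpapi.h>
    #include <stdio.h>
    #include <stdlib.h>

    static int network_up(void)
    {
        ULONG size = 16 * 1024;   /* usually enough; a robust version
                                     retries on ERROR_BUFFER_OVERFLOW */
        IP_ADAPTER_ADDRESSES *list = malloc(size), *a;
        int up = 0;

        if (list && GetAdaptersAddresses(AF_UNSPEC,
                GAA_FLAG_SKIP_ANYCAST | GAA_FLAG_SKIP_MULTICAST,
                NULL, list, &size) == NO_ERROR) {
            for (a = list; a; a = a->Next)
                if (a->OperStatus == IfOperStatusUp &&
                    a->IfType != IF_TYPE_SOFTWARE_LOOPBACK)
                    up = 1;
        }
        free(list);
        return up;
    }

    int main(void)
    {
        WSADATA wsa;
        WSAStartup(MAKEWORD(2, 2), &wsa);

        while (!network_up())
            Sleep(1000);          /* back off instead of spinning */

        printf("an interface is up; socket calls should have a chance\n");
        WSACleanup();
        return 0;
    }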

Local Communication - 127.0.0.1 vs. IPC

I don't clearly understand the difference between using a TCP socket with the client connecting to the 127.0.0.1 server address and other IPC mechanisms such as message queues. Since both are used for communication within the same host, why would anyone choose the socket approach over the message queue, given that in this case sockets incur more overhead than queues?
The one difference I do see: with sockets we can inspect the traffic in Wireshark, while with queues there is no such option.
The point of the loopback interface / address is not that you write programs to use it specifically.
The point is that it lets you talk to network services running on the local computer in the same way that you would talk to network services running on a remote host. For instance, if I'm developing a website, I can start up a test instance of its server on my local computer and then point my browser at http://127.0.0.1/ and there it is. I don't have to modify the code of my browser to talk over AF_UNIX sockets or whatever first. Similarly, if I am writing an application that needs a database, I might start out with the database running on the same computer as the application, talking to it over loopback, but then later when the database gets bigger I can move it to a dedicated host and I don't have to change anything other than the connection configuration.
You are absolutely correct that local IPC has lower overhead, and should be used when the two processes that need to communicate will always be on the same machine.
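To make that concrete, here is a minimal sketch of a client that talks to a local server over loopback; the port is hypothetical, and moving the server to another machine would only change the address string:

    /* Sketch: connect over loopback exactly as you would to a remote
     * host. Assumes some service is listening on 127.0.0.1:8080
     * (hypothetical port). */
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>
    #include <arpa/inet.h>
    #include <sys/socket.h>

    int main(void)
    {
        struct sockaddr_in addr;
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        if (fd < 0) { perror("socket"); return 1; }

        memset(&addr, 0, sizeof(addr));
        addr.sin_family = AF_INET;
        addr.sin_port = htons(8080);               /* hypothetical port */
        inet_pton(AF_INET, "127.0.0.1", &addr.sin_addr);

        /* swapping "127.0.0.1" for a remote IP is the only change
         * needed once the server moves to a dedicated host */
        if (connect(fd, (struct sockaddr *) &addr, sizeof(addr)) < 0)
            perror("connect");
        else
            printf("connected over loopback\n");

        close(fd);
        return 0;
    }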
TCP and local IPC are both approaches we use for inter-process communication. If the processes run on the same machine, a message queue is the natural choice rather than TCP; but if one application runs in one box and another in a different box, you have to go through TCP for inter-process communication. Even web services internally rely on TCP to talk to a remote application.
That said, you may still want TCP-based communication between two processes on the same machine where synchronous communication is a must. For example, if you send a request for a client's account information and wait for the response, you need this approach. But if you just need to send client information to a server to store in a table, and you don't need an answer about whether the record was stored successfully, you can simply drop the message on a queue.
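A minimal fire-and-forget sketch with a POSIX message queue (the queue name is made up; link with -lrt on Linux):

    /* Sketch: drop a message on a POSIX queue when no reply is
     * expected. */
    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>
    #include <mqueue.h>

    int main(void)
    {
        struct mq_attr attr = { .mq_maxmsg = 10, .mq_msgsize = 128 };
        mqd_t q = mq_open("/records", O_WRONLY | O_CREAT, 0600, &attr);
        if (q == (mqd_t) -1) { perror("mq_open"); return 1; }

        const char *msg = "client-record";
        if (mq_send(q, msg, strlen(msg) + 1, 0) < 0)  /* no reply expected */
            perror("mq_send");

        mq_close(q);
        return 0;
    }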

Windows: how to stop IrDA dongle periodic auto-detection

I'm writing some code in C for an IrDA project on a Win7 32-bit computer. I have another computer set up to display any data received via infrared. This part works. However, as soon as I connect the IrDA dongle to the PC, it starts to send periodic data searching for other IrDA devices. I want to disable this behavior programmatically so I see only the data sent as a result of my code. Does anyone know which command to use? Is it WSASetService? I never learned socket programming and am not sure what "removes from the registry a service instance within one or more namespaces" really means. http://msdn.microsoft.com/en-us/library/windows/desktop/ms742211%28v=vs.85%29.aspx
Have you disabled the Infrared Monitor Service manually?
I experienced problems with this functionality on Win7 when using Windows to communicate with an embedded microcontroller-based device that worked well with Windows XP.
I disabled the Infrared Monitor Service manually and found that Windows was still polling the IrDA device periodically!
I have not found any documentation that describes this behavior or how to disable it; I will continue searching...

How do I detect the presence/absence of internet connection on a machine?

I need to detect the presence/absence of an internet connection. More precisely, suppose that the application is broken up into two parts: A and B.
A is responsible for checking whether or not the system is connected to the internet. If it finds that there is no connection, it starts up part B. And as soon as it detects that there is a network connection, it kills B and continues its own work.
What would be the best way to do the A part of the application? Continual pinging sounds hideous. There has to be a better way of doing this (preferably in C).
With sufficient privilege you can test the various network interfaces and examine their state. This would tell you if any of the interfaces was connected to a network and operating. However, this won't tell you if the connection is actually usable, i.e., connected to the internet (or your local net if that's all you need). I don't know of any way to do that short of actually using it.
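On Linux, a sketch of that interface check could use getifaddrs() and the interface flags; note again that "up and running" does not imply internet reachability:

    /* Sketch: list interfaces that are up and running. This shows
     * link state only, not actual internet reachability. */
    #include <stdio.h>
    #include <ifaddrs.h>
    #include <net/if.h>

    int main(void)
    {
        struct ifaddrs *ifs, *i;

        if (getifaddrs(&ifs) < 0) { perror("getifaddrs"); return 1; }

        for (i = ifs; i; i = i->ifa_next) {
            if (!(i->ifa_flags & IFF_LOOPBACK) &&
                (i->ifa_flags & IFF_UP) && (i->ifa_flags & IFF_RUNNING))
                printf("%s is up and running\n", i->ifa_name);
        }

        freeifaddrs(ifs);
        return 0;
    }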
Using ICMP (ping) can be useful at a low level, but presumably what you need is a connection to an actual endpoint via TCP/IP to do real work. I would say that you should change the design of your application so that B is responsible for indicating when it is unable to continue due to the absence of resources that it relies on -- network or otherwise. A and B should communicate so that A is aware of the situation and is able to either kill B or respond to B terminating itself and thus continuing its work.
A lot of companies have measures in place to prevent outgoing ICMP requests, TCP connections to ports other than 80/443 for example, or even to prevent you from reaching the internet directly by (transparently) proxying your traffic.
By "an internet connection" I understand any way to contact the outside world, be it UDP, TCP or ICMP. Depending on what your application needs to contact the internet for, I would suggest checking over the same protocol, as that is the only thing that matters to your app.
If your application uses HTTP to communicate with an external source, try connecting to a few sites that you would expect not to be blacklisted and that have reliable uptime, like google.com, microsoft.com, apple.com, and so on...
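A sketch of that check in C, treating "a TCP handshake to a well-known host on port 80 completes" as "the internet is up" (the host choice and the missing connect timeout are simplifications you would want to address):

    /* Sketch: probe connectivity by completing a TCP handshake to a
     * well-known host. getaddrinfo() failing also covers broken DNS. */
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>
    #include <netdb.h>
    #include <sys/socket.h>

    static int can_reach(const char *host, const char *port)
    {
        struct addrinfo hints, *res, *p;
        int ok = 0;

        memset(&hints, 0, sizeof(hints));
        hints.ai_family = AF_UNSPEC;
        hints.ai_socktype = SOCK_STREAM;

        if (getaddrinfo(host, port, &hints, &res) != 0)
            return 0;                   /* DNS itself failed */

        for (p = res; p && !ok; p = p->ai_next) {
            int fd = socket(p->ai_family, p->ai_socktype, p->ai_protocol);
            if (fd < 0) continue;
            if (connect(fd, p->ai_addr, p->ai_addrlen) == 0)
                ok = 1;
            close(fd);
        }
        freeaddrinfo(res);
        return ok;
    }

    int main(void)
    {
        printf("internet %s\n", can_reach("google.com", "80") ? "up" : "down");
        return 0;
    }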
Edit:
I am unsure what the specifics are, so let me give you an example with a hypothetical situation.
Application A collects data on the system it is running on and forwards it to a Web Service listening on yourserverhost.yourcompany.com:80
Application B would basically take over the job of the Web Service when it is down and log everything so no data is lost.
When all is well, App A will be sending the data to your web service
Once this connection drops, you immediately launch App B (the obvious remark here would be, why not keep App B running as a failsafe)
App A connects to App B and forwards what it had been buffering
App A continues to try to reestablish the connection to your Web Service and once it is back up will request App B to stop
If the problem you are facing is nothing like this, please provide a more concrete description of what App A and App B are supposed to be doing. I will be more than happy to help.
In your code, you have to check whether the internet connection exists by using a socket to open a connection to a website.
First run: ask the user to input the network parameters, like proxy settings. Save this info.
Next runs: Use these settings to check for the Internet connection. You may simply do a DNS search.
If results are negative, ask user to check settings.
Check whether the cable is connected; if so, ping any well-known host, such as google.com:
ping google.com

How to get a more stable socket connection in Linux/C

I'm running a game website where users connect using an Adobe Flash client to a C server running on a Fedora Linux box.
Often users complain about disconnects. Usually they're "Connection reset by peer"-disconnects.
Is there any way to make the connection more stable or does it all depend on the route from the user host to my server?
One thing I tried in order to make it more stable was sending PING in clear text every other minute to avoid timeout problems.
Anyone got more ideas?
You are not exhausting the socket, memory, or CPU limits that the server process is given, are you?
Do check with ulimit.
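If you would rather check from inside the server process, getrlimit() reports the same descriptor limit as ulimit -n (a small sketch, assuming Linux):

    /* Sketch: print the per-process file descriptor limit, the
     * in-process equivalent of "ulimit -n". */
    #include <stdio.h>
    #include <sys/resource.h>

    int main(void)
    {
        struct rlimit rl;
        if (getrlimit(RLIMIT_NOFILE, &rl) == 0)
            printf("fd limit: soft=%llu hard=%llu\n",
                   (unsigned long long) rl.rlim_cur,
                   (unsigned long long) rl.rlim_max);
        return 0;
    }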
Also, if possible, try to trace the error in the source code (i.e., around the point where an RST packet is sent), for instance when a send() or accept() returns an error value. In such cases print a debug message to the logs (see the sketch after this list); if you really fancy debugging it, do a simulation of the server:
run it in debug mode on a separate machine (possibly a clone of the server)
simulate thousands of connections (or find a network harnessing program)
backtrace the call and/or sniff the connection
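The logging helper could be as small as this sketch (MSG_NOSIGNAL is Linux-specific; it keeps a dead peer from killing the process with SIGPIPE):

    /* Sketch: wrap send() so that resets (ECONNRESET, EPIPE) show up
     * in the logs with enough context to correlate with complaints. */
    #include <stdio.h>
    #include <errno.h>
    #include <string.h>
    #include <sys/types.h>
    #include <sys/socket.h>

    ssize_t logged_send(int fd, const void *buf, size_t len)
    {
        ssize_t n = send(fd, buf, len, MSG_NOSIGNAL);
        if (n < 0)
            fprintf(stderr, "send(fd=%d, len=%zu) failed: %s (errno=%d)\n",
                    fd, len, strerror(errno), errno);
        return n;
    }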
Where are you running the server? At home? At work? At a hosting facility? This will make a very big difference.
Can you design your app to connect to two sockets on the server and then load balance or make it active/passive (or active/active)?
You can use the SO_KEEPALIVE TCP socket option.
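Enabling it (and, on Linux, tightening the probe timing, since the default is two hours of idle before the first probe) looks roughly like this sketch; the specific timing values are assumptions to tune:

    /* Sketch: enable TCP keep-alive on an accepted socket. The
     * TCP_KEEP* knobs are Linux-specific. */
    #include <netinet/in.h>
    #include <netinet/tcp.h>
    #include <sys/socket.h>

    int enable_keepalive(int fd)
    {
        int on = 1, idle = 60, interval = 10, count = 5;

        if (setsockopt(fd, SOL_SOCKET, SO_KEEPALIVE, &on, sizeof(on)) < 0)
            return -1;
        /* start probing after 60s idle, probe every 10s, give up after 5 */
        setsockopt(fd, IPPROTO_TCP, TCP_KEEPIDLE, &idle, sizeof(idle));
        setsockopt(fd, IPPROTO_TCP, TCP_KEEPINTVL, &interval, sizeof(interval));
        setsockopt(fd, IPPROTO_TCP, TCP_KEEPCNT, &count, sizeof(count));
        return 0;
    }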
