MQTT issue: broker closes connection as I send the CONNECT packet - C

It's my first post here. Correct me if I'm wrong.
I'm using a Freescale controller with a G620 module for connecting to the server.
I started to implement an MQTT client.
The communication with the G620 GPRS module goes through UART.
Through AT commands I connected to the MQTT broker.
As soon as I send the CONNECT packet, the broker closes the connection.
I need help or a suggestion.
The connect packet is:
{ 0x10, 0x12, 0x00, 0x04, 'M', 'Q', 'T', 'T', 0x04, 0x00, 0x3C, 0x00, 0x00, 0x06, 'Z', '1', '2', '1', '2', '3' }

There are two scenarios in which the server disconnects you, per the MQTT protocol:
1. You are violating the protocol format
2. The connection's keep-alive timeout is exceeded
Reasons for a protocol-violation termination:
- You might have framed some protocol bytes wrongly. Cross-verify with the protocol document.
- You might already be connected and be trying to connect again. Check the server-side logs if you have access.
- As you are sending the frame through UART, you might have used a for loop to send the bytes. If the loop counter is based on "strlen(Connectpacket)", you will not get the exact count, because "strlen" terminates at the first 0x00. The server then receives half a packet and disconnects you for a violation; see the sketch below.
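A minimal sketch of that pitfall, assuming a uart_send_byte() routine as a placeholder for whatever the G620 UART driver exposes. strlen() stops at the first 0x00 of this packet, after only two bytes, while sizeof gives the true count:

    #include <stddef.h>
    #include <stdint.h>
    #include <string.h>

    extern void uart_send_byte(uint8_t b);  /* placeholder for the G620 UART driver */

    static const uint8_t connect_pkt[] = {
        0x10, 0x12, 0x00, 0x04, 'M', 'Q', 'T', 'T', 0x04, 0x00,
        0x3C, 0x00, 0x00, 0x06, 'Z', '1', '2', '1', '2', '3'
    };

    void send_connect(void)
    {
        /* WRONG: strlen() stops at the first 0x00, so only the first
         * 2 of these 20 bytes would ever reach the broker:
         *   for (size_t i = 0; i < strlen((char *)connect_pkt); i++) ...
         */

        /* RIGHT: use the real byte count of the array. */
        for (size_t i = 0; i < sizeof connect_pkt; i++)
            uart_send_byte(connect_pkt[i]);
    }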
Your packet also looks invalid. Byte by byte it decodes as:
0x10 - MQTT Control Packet type (CONNECT)
0x12 - Remaining Length (18)
0x00 - Length MSB
0x04 - Length LSB
'M','Q','T','T' - Protocol Name
0x04 - Protocol Level
0x00 - Connect Flags
0x3C - Keep Alive MSB
0x00 - Keep Alive LSB
0x00,0x06,'Z','1','2','1','2','3' - this looks like the Client ID length (6) followed by the Client ID "Z12123"; cross-verify with the protocol document.
Set the Clean Session bit to 1, and set the Keep Alive bytes to MSB 0x00, LSB 0x3C (60 seconds) - in your packet the two bytes are swapped.
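Applying both suggestions (and keeping the "Z12123" client ID), a corrected packet might look like this - a sketch, assuming a plain MQTT 3.1.1 connection with no username, password, or will:

    #include <stdint.h>

    static const uint8_t connect_fixed[] = {
        0x10, 0x12,                         /* CONNECT, remaining length 18 */
        0x00, 0x04, 'M', 'Q', 'T', 'T',     /* protocol name */
        0x04,                               /* protocol level 4 (MQTT 3.1.1) */
        0x02,                               /* connect flags: Clean Session = 1 */
        0x00, 0x3C,                         /* keep alive: 60 seconds */
        0x00, 0x06, 'Z', '1', '2', '1', '2', '3'   /* client ID "Z12123" */
    };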

Related

TCP sequence and acknowledgment numbers | Do these numbers matter during the TCP handshake?

I have made a simple program in C.
Currently I am trying to establish a TCP handshake between a server and a client.
When I receive the [SYN] flag, I respond with the [SYN+ACK] flags, along with an arbitrary sequence number (I am choosing zero) and an arbitrary ACK number.
And in code, when the server receives the ACK flag from the client in the handshake's final packet, I send an ACK flag along with my own chosen sequence number (any number) and an ACK number equal to the received SYN packet's sequence number incremented by one, because no data has been transmitted from the server.
But I am not getting any ACK packet: my client keeps sending SYN packets and my server keeps responding with SYN+ACK packets.
I would like to know whether sequence numbers and acknowledgement numbers are irrelevant during the TCP handshake, so that I can focus on other fields that may be why my client ignores the SYN+ACK I send in response to its SYN (such as the IP and TCP checksums), or whether there is something wrong with my sequence and acknowledgement number handling in the first place.
In addition, I am using a TUN device on Linux that I created from the server program.
TCP sequence and acknowledgement numbers are absolutely NOT irrelevant during the handshake (or at any time in a TCP connection). In fact, the TCP handshake exists precisely so that the server and the client can learn each other's sequence numbers. Your packets are ignored because the ACKs do not match.
The ACK number in the SYN+ACK packet is the sequence number of the first SYN plus 1, and the ACK number in the last ACK packet is the sequence number of the SYN+ACK plus 1.
In addition to the standard, linked by @Barmar, you can use this picture (source)
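As a minimal sketch of that arithmetic, assuming the raw packet from the TUN device is parsed into a Linux struct tcphdr (the IP header swap and the checksum computation are left out):

    #include <stdint.h>
    #include <netinet/tcp.h>
    #include <arpa/inet.h>

    /* Build the TCP header of the SYN+ACK from the received SYN.
     * `syn` is the client's header, `reply` the one being sent back. */
    static void fill_synack(const struct tcphdr *syn, struct tcphdr *reply,
                            uint32_t server_isn)
    {
        reply->source  = syn->dest;                  /* swap the ports */
        reply->dest    = syn->source;
        reply->seq     = htonl(server_isn);          /* our ISN: any value is fine */
        reply->ack_seq = htonl(ntohl(syn->seq) + 1); /* client ISN + 1: the SYN consumes one */
        reply->doff    = 5;                          /* 20-byte header, no options */
        reply->syn     = 1;
        reply->ack     = 1;
        reply->window  = htons(65535);
        /* The checksum over the IPv4 pseudo-header must still be computed,
         * or the client's kernel will silently drop this segment. */
    }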

What is the reason for using UNIX sockets "zero-length datagrams"?

In recv()'s man page I found that the return value can be zero in this case:
Datagram sockets in various domains (e.g., the UNIX and Internet
domains) permit zero-length datagrams. When such a datagram is
received, the return value is 0.
When or why should one use zero-length datagrams in UNIX socket intercommunication? What's their purpose?
One such use is to unblock a recvfrom() call when you wish to shut down a UDP service thread: set a 'terminate' flag and send a zero-length datagram over the localhost stack.
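A minimal sketch of that shutdown pattern, with invented names (g_terminate, stop_service) standing in for your own:

    #include <stdatomic.h>
    #include <stdbool.h>
    #include <stdint.h>
    #include <unistd.h>
    #include <sys/socket.h>
    #include <netinet/in.h>

    static atomic_bool g_terminate;     /* polled by the service thread */

    /* Wake a thread blocked in recvfrom() on UDP port svc_port. */
    void stop_service(uint16_t svc_port)
    {
        int fd = socket(AF_INET, SOCK_DGRAM, 0);
        struct sockaddr_in to = {0};
        to.sin_family = AF_INET;
        to.sin_port = htons(svc_port);
        to.sin_addr.s_addr = htonl(INADDR_LOOPBACK);    /* localhost stack */

        atomic_store(&g_terminate, true);               /* set the flag first */
        sendto(fd, NULL, 0, 0, (struct sockaddr *)&to, sizeof to); /* zero-length datagram */
        close(fd);
        /* The blocked recvfrom() now returns 0; the thread checks
           g_terminate and exits its loop instead of reading again. */
    }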
One example I stumbled upon yesterday while researching the answer to another question.
In the old RFC 868 protocol for getting the current time of a remote server, the workflow for using UDP looks like:
When used via UDP the time service works as follows:
S: Listen on port 37 (45 octal).
U: Send an empty datagram to port 37.
S: Receive the empty datagram.
S: Send a datagram containing the time as a 32 bit binary number.
U: Receive the time datagram.
The server listens for a datagram on port 37. When a datagram
arrives, the server returns a datagram containing the 32-bit time
value. If the server is unable to determine the time at its site, it
should discard the arriving datagram and make no reply.
In this case, the server has to receive a datagram to be alerted that a user is requesting the time (and to know what address to send the reply to), due to UDP's connectionless nature (the TCP version just needs to connect to the server). The contents of that datagram are ignored, so the protocol might as well specify that it should be empty.
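A sketch of such a client, assuming an IPv4 time server at a placeholder address; RFC 868 returns the seconds since 1900-01-01 as a 32-bit big-endian number:

    #include <stdio.h>
    #include <stdint.h>
    #include <unistd.h>
    #include <sys/socket.h>
    #include <netinet/in.h>
    #include <arpa/inet.h>

    int main(void)
    {
        int fd = socket(AF_INET, SOCK_DGRAM, 0);
        struct sockaddr_in srv = {0};
        srv.sin_family = AF_INET;
        srv.sin_port = htons(37);                        /* RFC 868 time service */
        inet_pton(AF_INET, "192.0.2.1", &srv.sin_addr);  /* placeholder server address */

        /* U: Send an empty datagram to port 37. */
        sendto(fd, NULL, 0, 0, (struct sockaddr *)&srv, sizeof srv);

        /* U: Receive the time datagram. */
        uint32_t be_time;
        if (recv(fd, &be_time, sizeof be_time, 0) == (ssize_t)sizeof be_time)
            printf("seconds since 1900-01-01: %u\n", (unsigned)ntohl(be_time));
        close(fd);
        return 0;
    }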

Reading response from a TOSR0X-T relay remotely connected with XBee module

I'd like to get the states of the relays on the board from the relay, but I can only get an ACK back.
I have two XBee modules: one is connected to a computer over USB and acts as a serial device, the other is connected to a TOSR0X-T relay board. I am planning to add more XBee modules with more relays to the network later, so I am using API mode rather than the simple AT mode, because I need to address them separately.
I am sending TX frames with 64-bit addresses to the remote XBee to open or close relays. That works fine, and I get the ACK response frames properly. However, if I ask for the relay states by sending 0x5B, I get back only an ACK, and I can find no way to get the actual data indicating the relay states.
I am using node-serialport and the X-CTU software, but could not read the data, and the only example I found used both XBees connected to the same machine - that way an RX frame appeared on the destination XBee - but I need to get that remotely somehow.
The TOSR0X-T documentation here only tells me about talking to it via TX messages, so I have no clue whether I can achieve anything with commands (and how to do that).
The ACK you're seeing is likely the network-layer ACK, telling you that the remote XBee module received your packet. You need to use "AT mode" on the XBee connected to the TOSR0X-T, and address your TX API frames correctly for that mode (cluster 0x0011 of endpoint 0xE8).
If you've configured the XBee on your computer as the coordinator, the default settings of 0 for DH and DL on the relay's XBee module will result in all received serial bytes getting relayed back to the XBee on your computer, and coming through as RX frames.
After some experiments I was able to solve my problem.
Provided the CH (Channel) and ID (PAN ID) settings match - a requirement for setting up the network at all - I set up my XBees like this:
The Coordinator XBee (the one attached to the computer):
CE = 1 (for being coordinator)
MY = 0001
DH = 0
DL = 0
AP = 1 (in API mode)
The first End Point (the one attached to the TOSR0X-T):
CE = 0 (for being an endpoint)
MY = 000A (whatever you want), use FFFF for 64 bit responses
DH = 0
DL = 0001 (This is one I missed. It should be the Coordinator's MY)
AP = 0 (in transparent/AT mode)
So basically I did everything right except the DH/DL addressing: for the endpoint, DL must be set to the MY of the coordinator. I had read some articles that use FFFF, FFFE and the like to set up broadcasting, and I think I was confused by that information.
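For reference, a sketch of how a Series 1 TX Request frame (API ID 0x00, 64-bit addressing) carrying the 0x5B status query might be built. The function name is invented; the framing is the standard XBee API layout: start delimiter, two-byte length, frame data, then a checksum of 0xFF minus the low byte of the frame-data sum:

    #include <stddef.h>
    #include <stdint.h>

    /* Fill `out` with a TX Request frame and return its length. */
    size_t build_tx64(uint8_t *out, uint64_t dest, uint8_t frame_id,
                      const uint8_t *payload, size_t len)
    {
        size_t n = 0, data_len = 11 + len;   /* API ID + frame ID + address + options + payload */
        out[n++] = 0x7E;                     /* start delimiter */
        out[n++] = (uint8_t)(data_len >> 8); /* length MSB */
        out[n++] = (uint8_t)data_len;        /* length LSB */
        size_t data_start = n;
        out[n++] = 0x00;                     /* API ID: TX Request, 64-bit address */
        out[n++] = frame_id;                 /* non-zero so a TX Status (ACK) comes back */
        for (int i = 7; i >= 0; i--)
            out[n++] = (uint8_t)(dest >> (8 * i));  /* destination address, MSB first */
        out[n++] = 0x00;                     /* options */
        for (size_t i = 0; i < len; i++)
            out[n++] = payload[i];           /* RF data, e.g. the single byte 0x5B */
        uint8_t sum = 0;
        for (size_t i = data_start; i < n; i++)
            sum += out[i];
        out[n++] = 0xFF - sum;               /* checksum */
        return n;
    }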

RTP packet drop issue(?)

I have a client and a server, where the server sends audio data in RTP packets encapsulated inside UDP, and the client receives the packets. As UDP has no flow control, the client checks the sequence number of each packet and rearranges them if they arrive out of order.
My question here is that the client never receives packets with certain sequence numbers, as seen in the Wireshark capture below -
When this happens, the audio played at the client side is distorted (obviously). How do I avoid it? What factors affect this? Should I set the socket buffer size to a large value?
Thanks in advance for any reply.
EDIT 1: This issue occurs on QNX and not on Linux.
I looked at the output of "netstat -p udp" to see if it gives any hint about why packets are being dropped on QNX and not on Linux.
QNX:
SOCK=/dev/d_usb3/ netstat -p udp
udp:
8673 datagrams received
0 with incomplete header
60 with bad data length field
0 with bad checksum
0 dropped due to no socket
2 broadcast/multicast datagrams dropped due to no socket
0 dropped due to full socket buffers
8611 delivered
8592 PCB hash misses
On Linux, netstat shows no packet drops with the same server and the same audio!
Any leads? Why might this be? A driver issue? The networking stack?
You need to specify how you are handling lost packets in your client.
If you lose packets, you have missing data in your audio stream, so your client has to "do something" where data is missing. Some options are
- play silence (this makes a cracking noise due to the sharp envelope to 0)
- fade to silence
- estimate the waveform by examining adjacent data
- play noise
You cannot misalign packets or play the stream as if the missing packets never existed. For example, suppose you get packets 1, 2, 3, 4 and 6: you are missing packet 5. You cannot play packet 4 and then immediately play packet 6; something has to fill the space of packet 5.
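A minimal sketch of the first option, with invented names (rtp_packet_t, play_frame, playout) standing in for the client's own decode and playback code; it keeps the stream time-aligned by playing one frame of silence per missing sequence number:

    #include <stdint.h>

    #define SAMPLES_PER_PACKET 160   /* e.g. 20 ms of 8 kHz mono audio */

    typedef struct {
        uint16_t seq;                          /* RTP sequence number */
        int16_t  samples[SAMPLES_PER_PACKET];  /* decoded audio */
    } rtp_packet_t;

    /* Stand-in for whatever hands PCM to the audio device. */
    static void play_frame(const int16_t *pcm, int n) { (void)pcm; (void)n; }

    /* Play already-reordered packets, substituting one frame of
     * silence per missing sequence number so timing stays intact. */
    void playout(const rtp_packet_t *pkts, int count, uint16_t expected_seq)
    {
        static const int16_t silence[SAMPLES_PER_PACKET];  /* all zeros */
        for (int i = 0; i < count; i++) {
            /* RTP sequence numbers wrap at 65535, so the gap is
               computed with 16-bit arithmetic. */
            while ((uint16_t)(pkts[i].seq - expected_seq) != 0 &&
                   (uint16_t)(pkts[i].seq - expected_seq) < 0x8000) {
                play_frame(silence, SAMPLES_PER_PACKET);   /* a fade would click less */
                expected_seq++;
            }
            play_frame(pkts[i].samples, SAMPLES_PER_PACKET);
            expected_seq = pkts[i].seq + 1;
        }
    }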
See this post for more info.

Properly send a RST packet to a TCP client and server as a gateway

I am programming a gateway, one of whose functions is to destroy connections when enough packets have been exchanged. I would like to know how to properly form the RST packets to send to both the client and the server to terminate the connection.
To test this, I use FTP connections/sessions. Right now I am seeing that when I send the RST packets, the client endlessly replies with SYN packets, while the server simply continues the data stream with ACK packets. Note that after I decide to destroy the connection, I block the traffic between both ends.
I am thinking there may be something wrong with the way I handle my SEQ and ACK numbers. I have not been able to find resources explaining what to do with the SEQ and ACK numbers when sending an RST packet specifically. Right now I set the SEQ to a new random number (with rand()) and set the ACK to 0 (since I am not using the ACK flag). I swap the source and destination addresses and the source and destination ports, and I have verified that I calculate the checksums correctly.
It seems like both the client and the server do not accept the termination.
I don't know what 'resources' you are using, but this is completely covered under 'Reset Generation' in section 3.4 of RFC 793. If the incoming segment has an ACK field, the reset takes its sequence number from that ACK field; otherwise the reset has sequence number zero, and its ACK field is set to the incoming sequence number plus the segment length, etc., as described there several times.
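A sketch of that rule, assuming the gateway parses the observed TCP header into a Linux struct tcphdr (seg_len is the payload length, counting one extra for each of SYN and FIN; the address swap in the IP header and the checksum are left out):

    #include <stdint.h>
    #include <netinet/tcp.h>
    #include <arpa/inet.h>

    /* Choose SEQ/ACK for a RST sent in response to an observed
     * segment, per RFC 793 section 3.4 "Reset Generation". */
    void fill_rst(const struct tcphdr *in, uint32_t seg_len, struct tcphdr *rst)
    {
        rst->source = in->dest;              /* swap the endpoints */
        rst->dest   = in->source;
        rst->doff   = 5;
        rst->rst    = 1;
        if (in->ack) {
            rst->seq     = in->ack_seq;      /* SEQ taken from the incoming ACK field */
            rst->ack     = 0;
            rst->ack_seq = 0;
        } else {
            rst->seq     = 0;                /* no ACK to mirror: SEQ = 0 ... */
            rst->ack     = 1;                /* ... but acknowledge the segment */
            rst->ack_seq = htonl(ntohl(in->seq) + seg_len);
        }
    }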
