How to Send an Audio HEX File to a BLE Device - android-bluetooth

I have a working BLE device with Android.
It sends and receives data fine through the Android app.
But now my problem is that I want to send some audio HEX files to my BLE device, and they are larger than 20 bytes.
How can I send such data to a BLE device?

To send more than 20 bytes of data, you need to change the MTU via an MTU exchange.
From API level 21 you can use requestMtu (see the Android Developer documentation); it negotiates with the peripheral device and you can request up to 512 bytes.
The final MTU value is decided on the peripheral side. Remember that the payload you can send per write is MTU - 3 bytes.
For API levels below 21, the MTU is pre-defined and you cannot modify it.

The size limit can be different from 20; the MTU size is negotiable, so you should never hard-code any assumptions about sizes.
To get it right, first, inside onCharacteristicReadRequest, simply check the offset and put all data from that point into the response.
Then, in onDescriptorWriteRequest, if preparedWrite is set to true, you need to store the values you receive and combine them once onExecuteWrite() is called.
An example implementation is available at: https://github.com/DrJukka/BLETestStuff/blob/master/MyBLETest/app/src/main/java/org/thaliproject/p2p/mybletest/BLEAdvertiserLollipop.java

One way you could proceed is to split the audio HEX file into small pieces of data, as in the sketch below. You can use the Serial Port Profile to send these chunks of data. Once all the data has been received, you can combine and store the chunks with a merging step and later turn the result back into an audio HEX file.
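A minimal, platform-agnostic sketch (written in C, since this answer does not depend on any particular API) of the chunking idea; send_chunk() is a placeholder for whatever transmit routine the transport provides, not a real BLE call:

#include <stddef.h>
#include <stdint.h>

extern void send_chunk(const uint8_t *chunk, size_t len);  /* placeholder transmit routine */

void send_in_chunks(const uint8_t *data, size_t total, size_t mtu)
{
    size_t max_payload = mtu - 3;        /* ATT write header costs 3 bytes; assumes mtu > 3 */
    for (size_t offset = 0; offset < total; offset += max_payload) {
        size_t len = total - offset;
        if (len > max_payload)
            len = max_payload;
        send_chunk(data + offset, len);  /* receiver reassembles chunks in order */
    }
}

On the receiving side the chunks are simply concatenated in packet order to rebuild the original file.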

Related

FFmpeg: what does av_parser_parse2 do?

When sending H.264 data for frame decoding, it seems like a common method is to first call av_parser_parse2 from the libav library on the raw data.
I looked for documentation but I couldn't find anything other than some example code. Does it group up packets of data so that the resulting data starts with NAL headers and can be treated as a frame?
The following is a link to a sample code that uses av_parser_parse2:
https://github.com/DJI-Mobile-SDK-Tutorials/Android-VideoStreamDecodingSample/blob/master/android-videostreamdecodingsample/jni/dji_video_jni.c
I would appreciate if anyone could explain those library details to me or link me resources for better understanding.
Thank you.
It is as you guessed: av_parser_parse2() for H.264 consumes input data, looks for the NAL start code 0x000001, checks the NAL unit type to find frame starts, and outputs the input data, but with a different framing.
That is, it consumes the input data, ignores its framing by putting all consecutive data into a big buffer, and then restores the framing from the H.264 byte stream alone, which is possible because of the start codes and the NAL unit types. It does not increase or decrease the amount of data given to it. If you get 30k out, you have put 30k in, but maybe you put it in in little pieces of around 1500 bytes, the payload of the network packets you received.
By the way, when the function declaration is not documented well, it is a good idea to look at the implementation.
Just recovering the framing would not be involved enough to call it parsing, but the H.264 parser in ffmpeg also gathers more information from the H.264 stream, e.g. whether it is interlaced, so it really deserves its name.
It does not, however, decode the image data of the H.264 stream.
DJI's video transmission does not guarantee that the data in each packet belongs to a single video frame. Mostly a packet contains only part of the data needed for a single frame. It also does not guarantee that a packet contains data from only one frame rather than two consecutive frames.
Android's MediaCodec needs to be queued with buffers, each holding the full data for a single frame.
This is where av_parser_parse2() comes in. It gathers packets until it finds enough data for a full frame. That frame is then sent to MediaCodec for decoding.
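A minimal sketch of the usual av_parser_parse2() feeding loop (assuming parser, ctx and pkt were set up elsewhere with av_parser_init(AV_CODEC_ID_H264), avcodec_alloc_context3() and av_packet_alloc(); error handling trimmed):

#include <libavcodec/avcodec.h>

static void feed_parser(AVCodecParserContext *parser, AVCodecContext *ctx,
                        AVPacket *pkt, const uint8_t *data, int data_size)
{
    while (data_size > 0) {
        /* The parser consumes some input bytes and, once it has seen a
           complete access unit, fills pkt->data / pkt->size with that frame. */
        int used = av_parser_parse2(parser, ctx,
                                    &pkt->data, &pkt->size,
                                    data, data_size,
                                    AV_NOPTS_VALUE, AV_NOPTS_VALUE, 0);
        if (used < 0)
            break;                     /* parse error */
        data      += used;
        data_size -= used;

        if (pkt->size > 0) {
            /* pkt now holds one full frame's worth of NAL units; hand it to
               the decoder (avcodec_send_packet) or, as in the DJI sample,
               to MediaCodec via JNI. */
        }
    }
}

The input chunks can be any size (for example ~1500-byte network payloads); the parser's output framing depends only on the H.264 byte stream itself.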

Can a Linux socket return data less than the underlying packet? [duplicate]

When will a TCP packet be fragmented at the application layer? When a TCP packet is sent from an application, will the recipient at the application layer ever receive the packet in two or more packets? If so, what conditions cause the packet to be divided? It seems like a packet won't be fragmented until it reaches the Ethernet (at the network layer) limit of 1500 bytes. But, that fragmentation will be transparent to the recipient at the application layer since the network layer will reassemble the fragments before sending the packet up to the next layer, right?
It will be split when it hits a network device with a lower MTU than the packet's size. Most Ethernet devices use an MTU of 1500, but it can often be smaller: for example 1492 if that Ethernet traffic goes over PPPoE (DSL) because of the extra routing information, and even lower if another layer is added, such as Windows Internet Connection Sharing. And dial-up is normally 576!
In general, though, you should remember that TCP is not a packet protocol. It uses packets at the lowest level to transmit over IP, but as far as the interface of any TCP stack is concerned, it is a stream protocol and has no requirement to provide you with a 1:1 relationship to the physical packets sent or received (for example, most stacks will hold messages until a certain period of time has expired, or until there are enough messages to maximize the size of the IP packet for the given MTU).
As an example, if you send two "packets" (call your send function twice), the receiving program might only receive one "packet" (the receiving TCP stack might combine them). If you are implementing a message-type protocol over TCP, you should include a header at the beginning of each message (or some other header/footer mechanism), as in the sketch below, so that the receiving side can split the TCP stream back into individual messages, either when a message is received in two parts or when several messages are received as one chunk.
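A minimal sketch of one common framing scheme, a fixed-size length prefix on a POSIX socket (the helper names are illustrative; the loop exists because a single recv() may legitimately return fewer bytes than requested):

#include <stdint.h>
#include <stddef.h>
#include <sys/types.h>
#include <sys/socket.h>
#include <arpa/inet.h>

/* Keep calling recv() until exactly 'len' bytes have arrived. */
static int read_full(int fd, void *buf, size_t len)
{
    size_t got = 0;
    while (got < len) {
        ssize_t n = recv(fd, (char *)buf + got, len - got, 0);
        if (n <= 0)
            return -1;          /* error or peer closed the connection */
        got += (size_t)n;
    }
    return 0;
}

/* Reads one message framed as: 4-byte big-endian length + payload. */
static int read_message(int fd, char *payload, size_t max_len)
{
    uint32_t len_be;
    if (read_full(fd, &len_be, sizeof len_be) < 0)
        return -1;
    uint32_t len = ntohl(len_be);
    if (len > max_len)
        return -1;              /* message too large for the caller's buffer */
    if (read_full(fd, payload, len) < 0)
        return -1;
    return (int)len;
}

The sender writes the same 4-byte big-endian length before each payload; neither side ever assumes that one send() maps to exactly one recv().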
Fragmentation should be transparent to a TCP application. Keep in mind that TCP is a stream protocol: you get a stream of data, not packets! If you are building your application based on the idea of complete data packets then you will have problems unless you add an abstraction layer to assemble whole packets from the stream and then pass the packets up to the application.
The question makes an assumption that is not true -- TCP does not deliver packets to its endpoints, rather, it sends a stream of bytes (octets). If an application writes two strings into TCP, it may be delivered as one string on the other end; likewise, one string may be delivered as two (or more) strings on the other end.
RFC 793, Section 1.5:
"The TCP is able to transfer a continuous stream of octets in each direction between its users by packaging some number of octets into segments for transmission through the internet system."
The key words being continuous stream of octets (bytes).
RFC 793, Section 2.8:
"There is no necessary relationship between push functions and segment boundaries. The data in any particular segment may be the result of a single SEND call, in whole or part, or of multiple SEND calls."
The entirety of section 2.8 is relevant.
At the application layer there are any number of reasons why the whole 1500 bytes may not show up in one read. Various factors in the internals of the operating system and the TCP stack may cause the application to get some bytes in one read call and some in the next. Yes, the TCP stack has to re-assemble the packet before sending it up, but that doesn't mean your app is going to get it all in one shot (it will LIKELY get it in one read, but it's not GUARANTEED to).
TCP tries to guarantee in-order delivery of bytes, with error checking, automatic re-sends, etc happening behind your back. Think of it as a pipe at the app layer and don't get too bogged down in how the stack actually sends it over the network.
This page is a good source of information about some of the issues that others have brought up, namely the need for data encapsulation on an application-protocol-by-application-protocol basis. It is not quite authoritative in the sense you describe, but it has examples and is sourced to some pretty big names in network programming.
If a packet exceeds the MTU of a network device it will be broken up into multiple packets. (Note that most equipment uses 1500 bytes, but this is not a requirement.)
The reconstruction of the packet should be entirely transparent to the applications.
Different network segments can have different MTU values. In that case fragmentation can occur. For more information see TCP Maximum segment size
This (de)fragmentation happens in the TCP layer. In the application layer there are no more packets. TCP presents a contiguous data stream to the application.
A the "application layer" a TCP packet (well, segment really; TCP at its own layer doesn't know from packets) is never fragmented, since it doesn't exist. The application layer is where you see the data as a stream of bytes, delivered reliably and in order.
If you're thinking about it otherwise, you're probably approaching something in the wrong way. However, this is not to say that there might not be a layer above this, say, a sequence of messages delivered over this reliable, in-order bytestream.
Correct - the most informative way to see this is using Wireshark, an invaluable tool. Take the time to figure it out - it has saved me several times and gives a good reality check.
If a 3000-byte packet enters an Ethernet network with the default MTU of 1500 (for Ethernet), it will be fragmented into multiple packets of at most 1500 bytes each. That is the only time I can think of.
Wireshark is your best bet for checking this. I have been using it for a while and am totally impressed

How to switch between data stream and control using (UART) bus

This question is about firmware for an IR transmitter with 8 outgoing channels. It is a micro-controller board with 8 IR LEDs. The goal is to have a transmitter capable of sending streams of data using one or multiple channels.
The data is delivered to the board over UART and then transmitted over one or multiple channels.
My transmitter circuit is faster than the UART, so no flow control is required.
Currently I have the channel fixed in the firmware, so each byte from the UART is transmitted directly. This means that there is no way to set the desired channel over UART, which is what I want.
Of course, the easiest solution is to pair each data byte with a control byte in which each bit represents one channel. This has the advantage that each byte can be routed to one or more channels, but it of course increases overhead dramatically.
Because of the stream type of transmission, I am trying to avoid a length field in my transmitter.
My research work is in the network stack on top of this.
My question is whether there are schemes or good practices to solve this. I expect similar problems exist in robotics, where sensor data streams cross control signals all the time, but I could not find a simple and elegant solution.
I generally use the SLIP transmission protocol in my projects. It is very fast, easy to implement, and works very well for framing ANY packet you want.
http://www.tcpipguide.com/free/t_SerialLineInternetProtocolSLIP.htm
https://www.google.com/webhp?sourceid=chrome-instant&ion=1&espv=2&ie=UTF-8#q=slip%20protocol
Basically, you feed each byte to be transmitted or received into a function that uses 0xC0 as both a header and a footer. Since 0xC0 is also a valid byte in the data you might be sending, a few transformations are applied to data bytes of 0xC0 in order to GUARANTEE that 0xC0 only ever appears as a header or footer.
Then, using the reverse algorithm on the other side, you can frame the incoming data by looking for 0xC0 twice in the right order. This signifies a full packet that can be buffered up and flagged for processing by the main CPU.
SLIP guarantees correct framing of the packet.
Then it is up to you to define your own packet format, which lives in the data field now that SLIP has correctly framed the packet.
I often do the following:
<0xC0> <opcode> <data ...> <0xC0>
Use different opcodes for your different channels. You can easily add another layer with acknowledgements if you want.
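A minimal SLIP framing sketch on the transmit side (the constants are the standard ones from RFC 1055; uart_put() is a placeholder for whatever blocking UART transmit routine the firmware has):

#include <stdint.h>
#include <stddef.h>

#define SLIP_END     0xC0  /* frame delimiter */
#define SLIP_ESC     0xDB  /* escape character */
#define SLIP_ESC_END 0xDC  /* ESC + ESC_END stands for a literal 0xC0 in the data */
#define SLIP_ESC_ESC 0xDD  /* ESC + ESC_ESC stands for a literal 0xDB in the data */

extern void uart_put(uint8_t b);   /* assumed: blocking UART transmit of one byte */

void slip_send_frame(const uint8_t *data, size_t len)
{
    uart_put(SLIP_END);                 /* flush any line noise, open the frame */
    for (size_t i = 0; i < len; i++) {
        switch (data[i]) {
        case SLIP_END:
            uart_put(SLIP_ESC);
            uart_put(SLIP_ESC_END);     /* escape the delimiter byte */
            break;
        case SLIP_ESC:
            uart_put(SLIP_ESC);
            uart_put(SLIP_ESC_ESC);     /* escape the escape byte */
            break;
        default:
            uart_put(data[i]);
        }
    }
    uart_put(SLIP_END);                 /* close the frame */
}

The receiver runs the reverse transformation and treats everything between two 0xC0 delimiters as one packet, e.g. <opcode> followed by the payload for that channel.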
Seems like the only sensible solution is to create a carrier protocol for the UART data. You might want this anyway, since UART has poor immunity to EMI. You can make it more reliable by including a CRC check in the protocol. (Please note that the built-in error handling of UART through start/stop/parity bits is very naive and has been outdated since the mid '70s or so.)
Typically these protocols go like <sync token> <header> <data> <checksum>, where the header may contain a data length so that the data can be of variable length, as in the sketch below.
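A hedged sketch of one possible carrier frame for this UART link: <sync> <channel mask> <length> <payload ...> <checksum>. The field sizes, the sync value and the simple additive checksum are illustrative choices, not a standard; uart_put() is again a placeholder transmit routine:

#include <stdint.h>
#include <stddef.h>

#define FRAME_SYNC 0xAA

extern void uart_put(uint8_t b);   /* assumed: blocking UART transmit of one byte */

void send_frame(uint8_t channel_mask, const uint8_t *payload, uint8_t len)
{
    uint8_t sum = 0;

    uart_put(FRAME_SYNC);
    uart_put(channel_mask);  sum += channel_mask;   /* one bit per IR channel */
    uart_put(len);           sum += len;            /* allows variable-length payload */
    for (size_t i = 0; i < len; i++) {
        uart_put(payload[i]);
        sum += payload[i];
    }
    /* checksum chosen so that header + payload + checksum sum to 0 mod 256 */
    uart_put((uint8_t)(0x100 - sum));
}

On the receiving side, resynchronization means scanning for the sync byte and accepting a candidate frame only if its bytes after the sync (including the checksum) sum to zero; a CRC could replace the additive checksum for better error detection.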
Probably not an option at this point, but SPI would have been a much more pleasant interface to work with for this. You could then have one shift register per 8 IR diodes and select the channel through the SPI slave select, via some MUX/DEMUX circuit. Everything would work synchronously and no carrier protocol would be needed. It would also completely remove the need for an MCU between the data sender and the diodes.

Do Audio Queue Services buffers need to be even multiples of packet size?

I'm trying to use Audio Queue Services to play mp3 audio that is being delivered from an external process. I am using NSTask and NSOutputHandle to get the output from the command - that part works fine. I'm using Audio File Stream Services to parse the data from the command - that seems to work as well. In my Audio File Stream Services listener function, I'm not sure what to do with the packets that come in. It would be great if I could just throw them at the audio queue, but apparently it doesn't work that way. You're supposed to define a series of buffers and enqueue them on the audio queue. Can the buffers correspond to the packets, or do I have to somehow convert them? I'm not very good at C or pointer math, so the idea of converting arbitrary-sized packets to non-matching-sized buffers is kind of scary to me. I've read the Apple docs many times, but they only cover reading from a file, which seems to skip this whole packet/buffer conversion step.
You should be able to configure the AudioQueue such that the buffer sizes match your packet sizes. Additionally, the AudioQueue will do the job of decoding the mp3 - you shouldn't need to do any of your own conversions.
Use the inBufferByteSize parameter to configure the buffer size:
OSStatus AudioQueueAllocateBuffer (
    AudioQueueRef        inAQ,
    UInt32               inBufferByteSize,
    AudioQueueBufferRef  *outBuffer
);
If your packets are all different sizes, you can use AudioQueueAllocateBuffer to allocate each buffer with that custom size before filling it, and free it instead of re-queueing it after use by the audio queue callback.
For less memory management (which impacts performance), if you know the maximum packet size, you can allocate a buffer that big and then only partially fill it (after checking the packet size to make sure it fits). The buffer's mAudioDataByteSize field records how much of it is actually filled.
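A hedged sketch of the per-packet approach described above ('queue' is an already-created AudioQueueRef; packetData, packetSize and packetDesc would come from your Audio File Stream Services callback; error handling trimmed):

#include <AudioToolbox/AudioToolbox.h>
#include <string.h>

static void enqueue_packet(AudioQueueRef queue,
                           const void *packetData, UInt32 packetSize,
                           const AudioStreamPacketDescription *packetDesc)
{
    AudioQueueBufferRef buffer = NULL;

    /* Size the buffer to exactly this packet (alternatively, allocate one
       max-size buffer up front and reuse it, filling it only partially). */
    if (AudioQueueAllocateBuffer(queue, packetSize, &buffer) != noErr)
        return;

    memcpy(buffer->mAudioData, packetData, packetSize);
    buffer->mAudioDataByteSize = packetSize;   /* how much of the buffer is valid */

    /* For VBR data such as mp3, pass the packet description along so the
       queue can decode the compressed packet itself. */
    AudioQueueEnqueueBuffer(queue, buffer, packetDesc ? 1 : 0, packetDesc);
}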

Jpeg wireless transfer with fwrite(); Need to handle lost packets

I am developing a device that takes a picture and transfers that picture to a desktop computer receiver wirelessly through radio waves. On the receiver end, I am using C and fwrite() to rebuild the image file sent as split packets of data. Receiving a packet executes:
fwrite(&data[3], size, 1, filename);
data[3] is an unsigned 8 bit integer, data type u08.
I confirm that wired file transfer works. If the transmitter and receiver are directly connected, there is no problem.
However, the radio signal is not strong enough to guarantee that all packets will be received. In my testing, packets lost in transmission are common. If even one packet is lost, the image file becomes corrupt. The received rate is roughly 85%.
Every packet is numbered. If the received packet number is greater than the expected packet number, then the receiver knows that a packet has been dropped.
My solution is to loop and replace missing packets with a default packet while incrementing the expected-packet-number counter. Basically, I plan to fill lost pixels with black pixels, using the received packets to create the most complete picture possible. I do not know how to do this. I tried simply setting data[3] to 0 if the received packet number and expected packet number do not match up, but this did not work.
I welcome other proposed solutions.
You have not mentioned the image format. If you are sending a raw RGB image then the set-to-zero approach should work, but if you are dealing with compressed images like JPEG and you have lost the header packets that hold the information about block sizes or the tables used for entropy encoding, there is no way to get the image back.
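If the payload really is raw pixel data, a minimal sketch of the zero-fill idea from the question could look like this ('out', 'size' and the packet numbering follow the question's description; the helper name is made up):

#include <stdio.h>
#include <stdint.h>
#include <stddef.h>

/* Write one received packet, padding the file with zero bytes for every
 * packet number that was skipped so the file offsets stay aligned. */
static void write_packet(FILE *out, const uint8_t *payload, size_t size,
                         unsigned received_no, unsigned *expected_no)
{
    while (*expected_no < received_no) {    /* one or more packets were lost */
        for (size_t i = 0; i < size; i++)
            fputc(0x00, out);               /* placeholder (black) bytes */
        (*expected_no)++;
    }
    fwrite(payload, size, 1, out);          /* the packet that did arrive */
    (*expected_no)++;
}

For a compressed format such as JPEG this only keeps the offsets consistent; as noted above, losing header or table packets still makes the file undecodable.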
