I am doing a project aimed at reading/writing data from a MIFARE Classic RFID card, using an nRF52832 (Cortex-M4F based) together with a TRF7970A (Multi-Protocol Fully Integrated 13.56-MHz NFC/RFID Transceiver IC).
The pre-authentication part is done according to the ISO 14443-3 standard (shown in the picture) and works fine (communication between the nRF52832 and the TRF7970A is done via SPI).
(picture: pre-authentication part)
But after this part I ran into problems with authentication.
Since the TRF7970A doesn't support MIFARE Classic authentication natively, communication between the TRF7970A and the MIFARE card has to continue through Special Direct Mode, according to the TI application note "Using Special Direct Mode With the TRF7970A" (cannot link because of Stack Overflow limitations for new users).
Everything is configured according to that document, but I still can't pass the three-pass authentication.
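For reference, this is the exchange I expect on the wire for the three-pass authentication (a sketch that only illustrates the standard MIFARE Classic frame layout, not the TI-specific SDM register handling; the helper function is hypothetical):

#include <stdint.h>
#include <stddef.h>

#define MIFARE_AUTH_KEY_A  0x60u   /* authenticate a block with key A */
#define MIFARE_AUTH_KEY_B  0x61u   /* authenticate a block with key B */

/* Pass 1 (reader -> card): AUTH command + block number (+ CRC_A).
 * Card -> reader: 4-byte nonce NT, sent in clear for the first authentication. */
static size_t build_mifare_auth_request(uint8_t *buf, uint8_t key_sel, uint8_t block)
{
    buf[0] = key_sel;   /* 0x60 or 0x61 */
    buf[1] = block;     /* block number to authenticate */
    return 2;           /* CRC_A (2 bytes) is appended on the air */
}

/* Pass 2 (reader -> card): 8 bytes = {NR} {AR}, the reader nonce and reader
 * answer, both CRYPTO1-encrypted on the host (the TRF7970A has no CRYPTO1 engine).
 * Pass 3 (card -> reader): 4 bytes = {AT}, the card answer, encrypted.
 * If the keys or the keystream are wrong, the card simply stays silent here. */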
To show the problem, three pictures are attached; the authentication process was captured with a logic analyzer.
Captured signals
Upper picture: failed attempt at the auth1 stage (the TRF raises an IRQ before it has transmitted the card's response).
Middle picture: successful attempt at the auth1 stage (the code is the same; sometimes stage 1 passes and sometimes it doesn't).
Bottom picture: after successfully passing auth1, it gets to auth2, where I never see an answer from the TRF7970A/MIFARE Classic card.
The crypto part is taken from the SDM MIFARE library for the TRF7970A EVM.
Maybe someone has an idea of what is going wrong, or can point me to a similar project.
I suspect the crypto keys used for cryptogram generation on the host side and on the card side are different. That's why you are not seeing any response from the card.
Ensure that the keys are the same.
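For example, blank cards usually ship with the transport key FF FF FF FF FF FF for both key A and key B; a quick sanity check on the host side could look like this (a sketch, the names are placeholders):

#include <stdint.h>
#include <string.h>

/* MIFARE Classic keys are 6 bytes; most blank cards use the factory
 * transport key FF FF FF FF FF FF for key A and key B.               */
static const uint8_t factory_key[6] = {0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF};

/* key_fed_to_crypto1 is whatever the SDM/CRYPTO1 code on the host uses. */
int key_is_factory_default(const uint8_t key_fed_to_crypto1[6])
{
    return memcmp(key_fed_to_crypto1, factory_key, sizeof factory_key) == 0;
}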
I have a zigbee2mqtt / Home Assistant setup working fine, and I'd like to try to make my own simple devices to connect to that network.
I got an XBee 3 board, and using MicroPython to start with, I was able to join my network.
However, the "interview" fails. The XBee receives a message with cluster 0, profile 260 (Home Automation), and destination endpoint 230 (command). I'm not sure what the payload contains; it's not a string:
{'profile': 260, 'dest_ep': 230, 'broadcast': False, 'sender_nwk': 0, 'source_ep': 1, 'payload': b'\x10\x02\x00\x05\x00\x04\x00\x07\x00', 'sender_eui64': b"\x00\x12K\x00\x18\xe2I'", 'cluster': 0}
My question is: what should I answer for the interview to succeed?
I'm only making a basic sensor; I'd like to report one weight reading periodically. I'm assuming I need to send back something saying I have one endpoint on some cluster (not sure which, I guess something in the 400s), but I don't know what the format should be.
I couldn't find much info on this (apart from how to use things like the Zigbee Cluster Library, which isn't Python). Any pointers or examples of end devices I could look at to understand how this interview process works?
Unfortunately, Digi's examples all seem to involve XBee devices talking to each other; I couldn't find any examples of how to make a regular end device.
Thanks!
EDIT: Just found this great page which explains how this all works. I still need to figure out the exact bits I'll need and try it out, but now I know where to start!
This sounds a lot like the ZCL, and I'm not aware of an open-source Python implementation of that protocol. Digi has an open-source ANSI C library that includes a ZCL implementation. If you can read C code, you might be able to decode that payload to see what it's asking. You might also need to handle some of the ZDO/ZDP (Zigbee Device Object/Device Profile) protocol on endpoint 0, by setting ATAO=3 (IIRC). There's also ZDO/ZDP code in that C library. (Full disclosure: I wrote most of the code in that library, including the Zigbee layer. But I haven't worked with Zigbee in a long time, so I'm rusty on protocol details.)
My recommendation would be to just hardcode hand-generated responses as much as possible. Figure out the expected format for requests, and determine what works as a response. If you can sniff the 802.15.4 traffic, or have your zigbee2mqtt gateway log activity with an existing device, you might be able to use its responses as a starting point for your implementation.
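For what it's worth, that payload decodes as a standard ZCL Read Attributes request on the Basic cluster: frame control 0x10, transaction sequence number 0x02, command 0x00 (Read Attributes), then the little-endian attribute IDs 0x0005 (ModelIdentifier), 0x0004 (ManufacturerName) and 0x0007 (PowerSource). Here is a sketch in C of how the matching Read Attributes Response (command 0x01) could be laid out; the type codes and the placeholder strings are from memory, so verify them against the ZCL spec before relying on them:

#include <stdint.h>
#include <string.h>

/* Sketch of a ZCL Read Attributes Response payload for the Basic cluster.
 * Type codes used below: 0x42 = character string, 0x30 = 8-bit enumeration. */
static size_t build_basic_read_attr_rsp(uint8_t *out, uint8_t tsn)
{
    static const char model[] = "mysensor";   /* placeholder model name        */
    static const char manuf[] = "diy";        /* placeholder manufacturer name */
    size_t i = 0;

    out[i++] = 0x18;           /* frame control: server-to-client, no default rsp */
    out[i++] = tsn;            /* echo the request's transaction sequence number  */
    out[i++] = 0x01;           /* command: Read Attributes Response               */

    /* 0x0005 ModelIdentifier, status SUCCESS, character string */
    out[i++] = 0x05; out[i++] = 0x00;          /* attribute ID, little-endian */
    out[i++] = 0x00;                           /* status: SUCCESS             */
    out[i++] = 0x42;                           /* type: character string      */
    out[i++] = (uint8_t)strlen(model);
    memcpy(&out[i], model, strlen(model)); i += strlen(model);

    /* 0x0004 ManufacturerName, status SUCCESS, character string */
    out[i++] = 0x04; out[i++] = 0x00;
    out[i++] = 0x00;
    out[i++] = 0x42;
    out[i++] = (uint8_t)strlen(manuf);
    memcpy(&out[i], manuf, strlen(manuf)); i += strlen(manuf);

    /* 0x0007 PowerSource, status SUCCESS, enum8 (0x03 = battery) */
    out[i++] = 0x07; out[i++] = 0x00;
    out[i++] = 0x00;
    out[i++] = 0x30;
    out[i++] = 0x03;

    return i;                  /* number of payload bytes to send back */
}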
I have a bare-metal application running on a tiny 16-bit microcontroller (ST10) with 10BASE-T Ethernet (CS8900) and a TCP/IP implementation based upon the EasyWeb project.
The application's main job is to control an LED matrix display for public transport passenger information. It generates display information at about 41 fps, with a configurable display size of e.g. 160 × 32 pixels and 1-bit color depth (each LED can only be either on or off).
Example:
There is a tiny web server implemented, which provides the respective frame buffer content (equal to the LED matrix display content) as PNG or BMP for download (both uncompressed because of CPU load and the 1-bit color depth). So I can receive snapshots by e.g.:
wget http://$IP/content.png
or
wget http://$IP/content.bmp
or put appropriate HTML code into the controller's index.html to view it in a web browser.
I could also write HTML/JavaScript code to update that picture periodically, e.g. every second, so that the user can see changes to the display content.
Now, for the next step, I want to provide the display content as some kind of video stream, and then put appropriate HTML code into my index.html, or just open that "streaming URI" with e.g. VLC.
As my framebuffer bitmaps are built uncompressed, I expect a constant bitrate.
I'm not sure what's the best way to start with this.
(1) Which video format is the easiest to generate if I already have a PNG for each frame (but I have that PNG only for a couple of milliseconds and cannot buffer it for a longer time)?
Note that my target system is very resource restricted in both memory and computing power.
(2) Which way should I use for distribution over IP?
I already have some TCP sockets listening on port 80. I could stream the video over HTTP (once a request has been received) by using chunked transfer encoding (each frame as its own chunk).
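For example, the per-frame framing would be little more than this (a sketch; send_tcp() is a stand-in for whatever my stack's send routine is, and the response would have been started with Transfer-Encoding: chunked and no Content-Length):

#include <stdio.h>
#include <stdint.h>
#include <stddef.h>

/* Hypothetical send routine of my TCP/IP stack. */
extern void send_tcp(const void *data, size_t len);

/* Send one frame picture as a single HTTP/1.1 chunk:
 *   <length in hex>\r\n <data> \r\n                                    */
static void send_frame_as_chunk(const uint8_t *frame, size_t frame_len)
{
    char head[16];
    int n = sprintf(head, "%X\r\n", (unsigned)frame_len);
    send_tcp(head, (size_t)n);
    send_tcp(frame, frame_len);
    send_tcp("\r\n", 2);
}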
(Maybe HTTP Live Streaming works like this?)
I've also read about things like SCTP, RTP, and RTSP, but it looks like more work to implement those on my target. And as there is also the potential firewall drawback, I think I prefer HTTP for transport.
Please note that the application is coded in plain C, without an operating system or powerful libraries. Everything is coded from scratch, even the web server and the PNG generation.
Edit 2017-09-14, tryout with APNG
As suggested by Nominal Animal, I gave APNG a try.
I extended my code to produce appropriate fcTL and fdAT chunks for each frame and to serve that bla.apng with HTTP Content-Type image/apng.
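The per-frame chunk generation looks roughly like this (a sketch; png_write_chunk() stands in for my own chunk writer, which prepends the length and appends the CRC-32 over type + data):

#include <stdint.h>
#include <string.h>

/* Placeholder: writes one PNG chunk (4-byte length + type + data + CRC-32). */
extern void png_write_chunk(const char type[4], const uint8_t *data, uint32_t len);

static void put_u32be(uint8_t *p, uint32_t v)
{
    p[0] = (uint8_t)(v >> 24); p[1] = (uint8_t)(v >> 16);
    p[2] = (uint8_t)(v >> 8);  p[3] = (uint8_t)v;
}

/* One animation frame = one fcTL chunk + one fdAT chunk.
 * seq is the running 32-bit APNG sequence number (two are used per frame).
 * zdata/zlen is the same (uncompressed-deflate) stream an IDAT would hold. */
static uint32_t append_apng_frame(uint32_t seq,
                                  const uint8_t *zdata, uint32_t zlen,
                                  uint16_t w, uint16_t h,
                                  uint16_t delay_num, uint16_t delay_den)
{
    uint8_t fctl[26];
    uint8_t fdat[4 + 1024];            /* sized for my ~740-byte frames */

    put_u32be(&fctl[0],  seq++);       /* sequence number               */
    put_u32be(&fctl[4],  w);           /* frame width                   */
    put_u32be(&fctl[8],  h);           /* frame height                  */
    put_u32be(&fctl[12], 0);           /* x offset                      */
    put_u32be(&fctl[16], 0);           /* y offset                      */
    fctl[20] = (uint8_t)(delay_num >> 8); fctl[21] = (uint8_t)delay_num;
    fctl[22] = (uint8_t)(delay_den >> 8); fctl[23] = (uint8_t)delay_den;
    fctl[24] = 0;                      /* dispose_op: APNG_DISPOSE_OP_NONE */
    fctl[25] = 0;                      /* blend_op:   APNG_BLEND_OP_SOURCE */
    png_write_chunk("fcTL", fctl, sizeof fctl);

    put_u32be(&fdat[0], seq++);        /* fdAT = sequence number + frame data */
    memcpy(&fdat[4], zdata, zlen);
    png_write_chunk("fdAT", fdat, 4 + zlen);

    return seq;                        /* next free sequence number     */
}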
After downloading that bla.apng, it looks fine when opened in e.g. Firefox or Chrome (but not in Konqueror, VLC, Dragon Player, or Gwenview).
Trying to stream that APNG works nicely, but only with Firefox.
Chrome first wants to download the file completely.
So APNG might be a solution, but with the disadvantage that streaming currently only works with Firefox. After further testing I found that 32-bit versions of Firefox (55.0.2) crash after about 1 h of APNG playback, by which time about 100 MiB of data has been transferred. It looks like they don't discard old/obsolete frames.
A further restriction: as APNG needs a 32-bit "sequence number" in each animation chunk (two are needed per frame), there may be a limit on the maximum playback duration. But with my frame period of 24 ms, that limit works out to about 2^31 frames × 24 ms ≈ 596 days, which I can live with.
Note that the APNG MIME type was specified by mozilla.org to be image/apng. But in my tests I found that it's a bit better supported when my HTTP server delivers the APNG with Content-Type image/png instead. E.g., Chromium and Safari on iOS will then play my APNG files after download (but still not while streaming). Even the Wikipedia server delivers e.g. this beach ball APNG with Content-Type image/png.
Edit 2017-09-17, tryout with animated GIF
As also suggested by Nominal Animal, I now tried animated GIF.
It looks OK in some browsers and viewers after a complete download (of e.g. 100 or 1000 frames).
When trying live streaming, it looks OK in Firefox, Chrome, Opera, Rekonq, and Safari (on macOS Sierra).
It does not work in Safari (on OS X El Capitan and iOS 10.3.1), Konqueror, VLC, Dragon Player, or Gwenview.
E.g., Safari (tested on iOS 10.3.3 and OS X El Capitan) first wants to download the GIF completely before displaying/playing it.
Drawback of using GIF: for various reasons (e.g. CPU usage) I don't want to implement data compression for the generated frame pictures. For PNG I use uncompressed data in the IDAT chunk, and for a 160×32 PNG with 1-bit color depth I get about 740 bytes per frame. But when using GIF without compression, especially for 1-bit black/white bitmaps, the pixel data blows up by a factor of 3-4 (GIF's LZW stream has a minimum code size of 2, so each 1-bit pixel costs at least 3 bits, plus the clear codes needed to keep the code width from growing).
First of all, low-level embedded devices do not get along well with very complex modern web browsers; it is a bad idea to "connect" two such different worlds. But if your spec really has these strong requirements...
MJPEG is well known for streaming video, but in your case it is a poor fit, as it requires a lot of CPU, gives a bad compression ratio, and has a big impact on graphics quality. That is the nature of JPEG compression: it is best with photographs (images with many gradients) but bad with pixel art (images with sharp lines).
"It looks like they don't discard old/obsolete frames."
And this is correct behavior, since APNG is not a video format but an animation format, and animations can be repeated! It will be exactly the same with GIF. The MJPEG case may be better, as that is established as a video stream.
If I were doing this project, I would do something like this:
(1) No browser at all. Write a very simple native player with WinAPI or some low-level library that just creates a window, receives UDP packets, and displays the binary data. On the controller side, you just fill UDP packets and send them to the client. UDP is better for real-time streaming: it drops packets (frames) when there is latency, and it is very simple to maintain.
(2) Stream raw data (1 bit per pixel) over TCP. TCP will always introduce some latency and buffering; you can't avoid that. Otherwise it is the same as before, but you don't need a handshaking mechanism to start the video stream. You could also write the client in good old technologies like Flash or Java applets, read the raw socket, and place your app in a web page. (See the raw-frame sketch after this list.)
(3) You can try to stream AVI files with raw data over TCP (HTTP). Without an index it will be unplayable almost everywhere except VLC. A strange solution, but if you can't write client code and want VLC, it will work.
(4) You can write a transcoder on an intermediate server. For example, your controller sends UDP packets to this server, the server transcodes them to H.264 and streams via RTMP to YouTube... Your clients can then play it with browsers or VLC, and the stream will be of good quality at up to a few Mbit/s. But you need a server.
(5) And finally, what I think is the best solution: send the client only text, coordinates, animations, and so on, i.e. everything that your controller renders. With Emscripten you can convert your sources to JS and run the exact same renderer in the browser. As transport you can use WebSockets, or tricks with a never-ending HTML page containing multiple <script> elements, like we did in the old days.
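For options (1) and (2), the framing on the controller side can be as simple as a tiny header plus the raw 1-bpp pixel data; a sketch, where net_send() is a hypothetical stand-in for your stack's UDP or TCP send routine:

#include <stdint.h>
#include <stddef.h>

/* Hypothetical send routine of the controller's UDP or TCP stack. */
extern void net_send(const void *data, size_t len);

/* Send one raw 1-bpp frame with a minimal 8-byte header so the client
 * can resynchronize: a magic, the frame size, and a frame counter.    */
static void send_raw_frame(const uint8_t *pixels_1bpp,
                           uint16_t width, uint16_t height, uint16_t counter)
{
    uint8_t header[8];
    size_t pixel_bytes = ((size_t)width * height) / 8;   /* 160x32 -> 640 bytes */

    header[0] = 'L';  header[1] = 'M';                   /* "LED matrix" magic  */
    header[2] = (uint8_t)(width >> 8);   header[3] = (uint8_t)width;
    header[4] = (uint8_t)(height >> 8);  header[5] = (uint8_t)height;
    header[6] = (uint8_t)(counter >> 8); header[7] = (uint8_t)counter;

    net_send(header, sizeof header);
    net_send(pixels_1bpp, pixel_bytes);
}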
Please tell me which country/city has this public transport passenger information display? It looks very cool. In my city every bus already has an LED panel, but it only shows static text; it is a pity that the huge potential of these devices is not used.
Have you tried just piping this through a WebSocket and handling the binary data in JavaScript?
Every WebSocket frame sent would match one frame of your animation.
You would then take this data and draw it onto an HTML canvas. This would work in every browser with WebSocket support (which is quite a lot of them) and would give you all the flexibility you need. (And the player could be more high-end than the "encoder" in the embedded device.)
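On the device side, framing a binary WebSocket message (after the HTTP Upgrade handshake has been completed) is only a couple of header bytes; a sketch, with send_tcp() standing in for your stack's send routine:

#include <stdint.h>
#include <stddef.h>

/* Hypothetical send routine of the device's TCP stack. */
extern void send_tcp(const void *data, size_t len);

/* Send one framebuffer snapshot as a single unmasked binary WebSocket
 * frame (server-to-client frames are not masked).                     */
static void ws_send_binary(const uint8_t *payload, size_t len)
{
    uint8_t hdr[4];
    size_t hdr_len;

    hdr[0] = 0x82;                     /* FIN = 1, opcode 0x2 = binary */
    if (len < 126) {
        hdr[1] = (uint8_t)len;
        hdr_len = 2;
    } else {                           /* 16-bit extended length is enough here */
        hdr[1] = 126;
        hdr[2] = (uint8_t)(len >> 8);
        hdr[3] = (uint8_t)len;
        hdr_len = 4;
    }
    send_tcp(hdr, hdr_len);
    send_tcp(payload, len);
}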
I have two XBee Pro Series 1 units. Both of them are in AP=2 mode (API mode). I have followed the instructions outlined under the "Series 1" section in XBee Configuration. I tried to execute "ZnetSenderExample.java" and I can see that it tries to send out a very simple "Xbee" string.
However, I keep getting timeouts on the receiving end saying it never gets any data.
Additionally, I attempted the Unicast example on this page and found the same behavior. It does not work in X-CTU when in API mode.
I am using firmware 10EC.
How can I fix this problem?
Are the nodes on the same network? When you look at them in X-CTU, do they share the same operating network settings (channel, PAN ID)? Have you installed the API mode firmware, instead of the AT mode (sometimes called transparent serial) firmware?
Have you tried any of the examples on Digi's site, to confirm that the units are configured correctly?
I am working on creating a touch pad device (custom hardware, but similar to an Android device) that acts as a touchscreen drawing pad similar to the Wacom Bamboo drawing pads. However, the key feature of the device is that instead of connecting it to the computer with wires or via Bluetooth, it connects to the local WiFi network and searches for devices with a port open (currently 5000 for testing purposes). Currently, I have a client written in C that, when launched, opens a DatagramSocket on port 5000 and waits for a custom UDP packet containing normalized X, Y, and pressure. Then, for testing purposes, I am putting the normalized X and Y into SendInput. SendInput "works", but injecting packets into the computer's current mouse input is not what I want. Instead, I want it to be considered a separate input device so that programs like GIMP will be able to detect it and assign custom functions based on the data (i.e. have GIMP utilize the pressure data).
The problem is I don't know where to start to create a driver that does this. I have been looking extensively at the WinDDK, thinking that might be the key. The problem with the WinDDK is that I cannot find any documentation on creating a HID driver using data that does not come from PS/2 or USB. This tutorial got me thinking about using IOCTLs, but I am not really sure how to make them be treated as input.
As a side note, in the title I said TCP/UDP because I am willing to change (and am considering changing, for security purposes) from a UDP connection to TCP.
If someone can push me in the right direction or link me to some related documentation and samples, that would be awesome because right now I am lost. Thank you.
Wacom's drivers have always been atrociously bad, so I'm currently working on a hack.
The main problem I'm having is with calibration on a tablet PC. And before you say anything: no, just no. I've tried literally dozens of drivers, and of the few that work, none allows calibration of Wintab input. You can calibrate MS Ink, but that does nothing for apps like Photoshop that don't support the Ink API.
Having researched the issue a bit, the way I plan to hack it is by writing a wrapper for wintab32.dll which adjusts data packets as they're sent to applications, enabling calibration and perhaps tweaks to pressure sensitivity and whatever else I feel Wacom should have supported all along.
The calibration function is trivial, as is wrapping wintab32.dll and getting at the data that needs calibrating. As far as I can tell there are about half a dozen functions that request packet data, and I've inserted code in each of them to modify said data.
It works, too, at least if I test it on some wintab sample projects.
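For context, the wrapping itself is as simple as this (a simplified sketch: loading the real DLL is abbreviated, and apply_calibration() and the packet-layout helpers are my own, since the packet format depends on what the application requested in its LOGCONTEXT):

#include <windows.h>
#include "wintab.h"    /* from the Wintab SDK; defines HCTX etc. */

typedef int (WINAPI *WTPACKETSGET)(HCTX, int, LPVOID);

static HMODULE      real_dll;
static WTPACKETSGET real_WTPacketsGet;

/* My helpers: packet size and X/Y offsets derived from the context's
 * lcPktData bits, plus the calibration mapping itself.                */
extern size_t packet_size(HCTX ctx);
extern void   packet_xy(HCTX ctx, void *pkt, LONG **x, LONG **y);
extern void   apply_calibration(LONG *x, LONG *y);

int WINAPI WTPacketsGet(HCTX hCtx, int cMaxPkts, LPVOID lpPkts)
{
    int i, n;

    if (!real_WTPacketsGet) {
        real_dll = LoadLibraryA("C:\\Windows\\System32\\wintab32.dll");
        real_WTPacketsGet = (WTPACKETSGET)GetProcAddress(real_dll, "WTPacketsGet");
    }

    n = real_WTPacketsGet(hCtx, cMaxPkts, lpPkts);

    if (lpPkts != NULL) {              /* a NULL buffer just flushes the queue */
        for (i = 0; i < n; i++) {
            LONG *x, *y;
            void *pkt = (char *)lpPkts + (size_t)i * packet_size(hCtx);
            packet_xy(hCtx, pkt, &x, &y);
            apply_calibration(x, y);
        }
    }
    return n;
}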
Photoshop is different, though. I can confirm that it loads the wrapped DLL, opens a wintab context and uses the API to request packet data, which is then modified en route. But then Photoshop ignores the modifications and somehow gets at the original, uncalibrated data and uses that. I can find nothing in the Wintab documentation to suggest how this is even possible.
I'm pretty stumped. Any thoughts?
Could it be that Photoshop only requests packets from Wintab in order to clear the packet queue, and then does something else to actually read the state of the stylus? And if so, what could that be? Some secret, obscure way of polling the data using WTInfo? A hook into the data stream between Wintab and the underlying driver/serial port?
I'm not very sure, but maybe the input from the Ink API is also being written to the canvas. I mean, you are now drawing with two inputs: Wintab and Ink.
If you could ignore the Ink input, you should see the right result.
P.S.: This is only a hunch.