Why is UART used over other interfaces for serial console? [closed]

Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
This question does not appear to be about a specific programming problem, a software algorithm, or software tools primarily used by programmers. If you believe the question would be on-topic on another Stack Exchange site, you can leave a comment to explain where the question may be able to be answered.
Closed 1 year ago.
In embedded systems, we often use a UART to transmit data to a serial console on a PC, these days usually through a USB-to-UART converter that shows up as a virtual COM port. Why has UART become the go-to for this instead of other serial interfaces like I2C and SPI?

Because it is simple, it was designed for longer distances (I mean meters, not kilometers :)), it is very standard, and every µC has it.
I2C and SPI are not designed to be used outside the PCB (I know people use them over longer distances). Those interfaces are used to connect other ICs to your microcontroller.

The maximum distance of RS-232 can be a few meters; I2C and SPI don't work well over distances longer than about 200 - 500 mm (depending on pull-ups, speed, collector current, ...).
SPI and I2C need a master and slave(s); there is no such distinction between two UART hosts.
You need fewer pins than SPI (when pins like DTR, DSR, RTS are omitted) or a parallel port.
You don't need to worry about where to put a pull-up resistor.
Either host can start a transmission asynchronously; with I2C and SPI the master needs to poll the slave before the slave can transmit data.
A host doesn't need to answer immediately. This can be important on a PC under load, where the reaction time can be very high (50 ms or so). Try to write a program for a PC that can reliably answer in less than 1 ms.


File system and UART [closed]

Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 5 years ago.
I have an embedded system with a UART that I communicate with over a USB-to-RS485 cable. I can read and write data to flash by sending serial commands. The software on the device is written in C++.
I would like to implement a file system that my computer would recognize when I plug in the USB, and that would let me browse the files on the embedded device's flash.
How would I go about doing this?
From the PC's view the "device" is the cable, not your board. Logically, the USB<->RS485 converter adds an RS485 interface to your PC rather than a USB interface to your board - even if the USB/485 chip were on your board, that would be logically if not physically true. Therefore it cannot appear as a USB mass-storage device, because it is explicitly a USB CDC/ACM device.
For your board to appear as a true USB mass-storage device, you would need to use a USB device controller - some (but not all) Blackfin devices have an on-chip USB controller, and Analog Devices provides a USB device stack library for them. In that case you would need to implement and use a USB interface on your board rather than a serial adaptor cable.
If you lack a USB controller or only wish to use the serial interface, then it may be simplest to implement a TCP/IP stack with PPP and use FTP. That would make the serial link far more flexible in any case (it could then support Telnet and other protocols simultaneously). Using PPP on Linux is relatively straightforward; on Windows it is possible, but it is tied up in the dial-up connection support, so it is not particularly intuitive for a direct cable connection. In this case you'd need to use an FTP client on your PC, as the board would not appear as a direct file-system device to the PC.

XOpenDisplay fails when run from daemon (C language) [closed]

Closed. This question needs details or clarity. It is not currently accepting answers.
Closed 6 years ago.
I'm working on a simple project on my Raspberry Pi that flashes some LEDs in different ways on certain system events (disk reads, Ethernet communication, processor overload), and these LEDs need to be shut off some time after the system goes idle (they vary their intensity when no system activity is detected).
To achieve idle detection I'm using XScreenSaver; up to here, everything works flawlessly.
As my project needs to run as a daemon (/etc/init.d) with root privileges (because of the pigpio library), the communication with the X server (via XOpenDisplay) returns NULL every time, even when the system is fully booted into the graphical interface. Running it manually from a terminal, everything works perfectly.
From my research, I understand that it isn't possible to access the X server from a daemon started at boot time when no console is available, and that access is blocked for security reasons.
So I ask: how could I detect idle time in the simplest way possible? (I tried restarting the daemon and setting the DISPLAY variable in the start script; nothing seems to work.) I'm new to Linux development and can't figure out how to solve this properly.
Answering my own question, in case anyone else has the same issue.
Detecting system inactivity (idle) outside the X graphical interface is just a matter of monitoring USB keyboard/mouse activity: either via their IRQs (usually IRQ 1 / IRQ 12) in /proc/interrupts, or, more easily (and with support for other USB input devices, even joysticks), by monitoring the "softirq" line of /proc/stat, whose counters increase whenever these devices produce input (mouse movement or a key press/release).
This is achieved easily in C by fopen/fread-ing these fields from time to time and comparing the values with the old ones.
Thanks a lot to my intensive research on Linux internals, and to user Olaf, who has a huge knack for discovering the obvious.

How primary memory is organised in a microcontroller? [closed]

Closed. This question needs details or clarity. It is not currently accepting answers.
Closed 6 years ago.
My question is: how is memory organized and managed in a microcontroller?
(It doesn't have any OS, i.e. no MMU is in use.)
I am working with a Zynq-7000 (ZC702) FPGA; it has an ARM core and DDR memory connected together with AXI interconnects.
I wrote 1111111111 in decimal (ten 1s) to DDR, and it reads back the same value.
When I write 11111111111 in decimal (eleven 1s) to DDR, it reads back as -1.
Here the whole width of the memory word is consumed. This would not be the case when I use a microcontroller.
Who manages the memory in a microcontroller?
Okay, that is a Cortex-A9; in no way, shape, or form is that a microcontroller.
When you read the ARM documentation for that architecture you will find the MMU in there. As expected, it is part of the Cortex-A core.
Who/how/when is the DDR initialized? Someone has to initialize it before you can write to it and read it back. Ten decimal ones fits in a 32-bit write/read; eleven ones takes 34 bits - what size write/read did you use? How much DDR do you have? I suspect you are not ready to be talking to DDR.
There is 256 KB of on-chip RAM; maybe you should experiment with that first.
All of this is in the Xilinx and ARM documentation. Experienced bootloader developers can take months to get DDR initialized. How/who/when was that done? Do you have DRAM tests to confirm that it is up and working, if you weren't the one to initialize it? Has Xilinx provided routines for that (how is your DDR initialized, who did it - your code, some code before yours, logic, etc. - and when was it done, before your code ran)? Maybe it is your job to initialize it.
For the MMU, just read the ARM docs, as with everything about that core. Then work through the Xilinx docs for more, and then of course: who did the rest of the FPGA design that connects the ARM core to the DRAM? What address space are they using, what decoding, what alignments do they support, what AXI transfers are supported, etc.? That is not something anyone here can do for you; you have to talk to the FPGA logic designers for your specific design.
If you have no operating system then you are running bare metal. You, the programmer, are responsible for memory management. You don't necessarily need the MMU; sometimes it helps, sometimes it just adds more work. It depends completely on your programming task and the overall design of the software and system. The MMU is hardware; an operating system is software that runs on hardware. One might use the other, but they are in no way tied to each other, any more than the interrupt controller, DRAM controller, or UART are tied to the operating system. The operating system or other software may use that hardware, but you can use it with other software as well.

Microcontroller programming in C [closed]

Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 9 years ago.
What is the difference between the SPI and I²C protocols, used to program a microcontroller?
Please specify the pins used in each case.
SPI and I²C are bus protocols, and each is well defined:
http://en.wikipedia.org/wiki/Serial_Peripheral_Interface_Bus
http://en.wikipedia.org/wiki/I2c
They are very similar in how they work, but they aren't the same and the differences aren't minor.
Depending on the microcontroller, they may have either, both, multiple of each, or none. They may share pins, and they might not. Refer to the datasheet of your microcontroller.
What is the difference between the SPI and I²C protocols
SPI is a 4-wire signalling scheme with a single bus master and one or more slaves. There's a chip-select signal (CS, or slave select SS), a dedicated clock signal (SCK), and two data signals, one for reception (MISO = Master In ← Slave Out) and one for transmission (MOSI = Master Out → Slave In).
Transmission starts by asserting (= pulling low) SS; then for each bit, MISO and MOSI are set after a SCK high→low transition and the actual transfer happens on a SCK low→high transition. Beyond the transmission of groups of bits there's no such thing as a "standard protocol" carried over SPI; every component with an SPI interface may define its own.
I²C is a 2-wire signalling scheme, capable of multiple bus masters. It's a lot more complex than SPI, so I suggest you read the Wikipedia article about it.
used to program a microcontroller?
No, because that depends on the microcontroller you're using. Do you want to write a program for the microcontroller that communicates over SPI or I²C? If so, I can give you no general answer, because it depends on the controller you're using. It usually boils down to configuring the SPI peripheral (if it has one) or implementing the bit-banging on the GPIOs (which again is specific to the microcontroller used) yourself.
Or do you actually want to program the microcontroller's flash over one of these? If the latter is the case, note that the actual programming method depends on the actual microcontroller used and may not happen over SPI at all. And I know of no microcontroller that actually uses I²C for flash programming (unless it has been programmed with a bootloader that does the talking).
SPI is used for programming the Atmel ATMega microcontrollers. But the XMega microcontrollers are programmed using an interface called PDI, which is a completely different beast (uses the reset pin for clock and as a dedicated PDI Data pin).
Most ARM microcontrollers are programmed using JTAG. Then there are interfaces like SWD, which is related to, but not the same as, JTAG.
SPI and I²C are defined, see Wikipedia.
Both are master/slave based and can share at least part of the bus. Both are serial: on I²C the single data line is shared, so it is bidirectional; on SPI there is one line from the master and one to the master. The I²C lines are wired-OR (you drive zero or let the line float and the pull-up resistor pulls it high). I²C is address based; SPI has a separate chip select for each device on the bus.
Your question needs work. Some devices can be programmed using SPI, but we generally talk about the microcontroller being the master and using these buses to program something else. SPI and I²C are standards, but different vendors use them in ways that make it difficult to build a generic hardware interface. SPI more often than I²C will have hardware assist, but it is a good idea to learn to bit-bang either, and you can always fall back on that.

Implementing kernel bypass for a network card [closed]

Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
Questions asking for code must demonstrate a minimal understanding of the problem being solved. Include attempted solutions, why they didn't work, and the expected results. See also: Stack Overflow question checklist
Closed 9 years ago.
My situation:
I would like the data received on a network card to reach my application as fast as possible. I have concluded that the best (as in lowest-latency) solution is to implement a network stack in user space.
The network traffic can be a proprietary protocol (if it makes writing the network stack easier) because it is simply between two local computers.
1) What is the bare minimum list of functions my network stack will need to implement?
2) Would I need to remove/disable whatever network stack is currently in my Linux/how would I do this?
3) How exactly would I write the driver? I presume I would need to find exactly where the driver code gets called, and then, instead of the driver/network stack being called, send the data to a piece of memory that I can access from my application?
I think the already built-in PF_PACKET socket type does exactly what you want to implement.
Drawback: The application must be started with root rights.
There are some enhancements to the PF_PACKET system that are described on this page:
Linux packet mmap
The kernel is in control of the NIC. Whenever you pass data between kernel and user space, there is a context switch, which is costly. My understanding is that you would use the standard APIs while setting the buffers to a larger size, allowing larger chunks of data to be copied between user and kernel space at a time and reducing the number of context switches for a given amount of data.
As for implementing your own stack, it is unlikely that a single person can create a faster network stack than the one built into the kernel.
If the Linux kernel is not capable of processing packets at the speed you require, you might want to investigate NICs with more onboard hardware processing power. These sorts of cards are used for network throughput testing, etc.
