Make REST API call from C without using libcurl [closed]

I was trying to make REST calls from C and came across libcurl, which handled that successfully. But the code needs to be ported to a Cortex-M0 board, which requires a much smaller footprint. Is there any workaround? All I need is to make a REST API call from C without any external library or overhead.

Well, how low do you want to go?
C doesn't know anything about REST; it doesn't know HTTP, or even TCP or something like a network interface. On bare metal, you'd start by reading the hardware specs of your network interface card and programming it (through ports, memory-mapped registers, etc.) -- you'd have to understand ARP, IP, ICMP and so on (and, of course, implement them), just to get a TCP connection on top of that.
Assuming there's an operating system in place, you'll be given some API, and then the answer would depend on what this API allows. A typical level would be a "socket abstraction" like BSD sockets, which gives you the functionality to establish a TCP connection. So "all" you'd have to do is implement an HTTP client on top of that.
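To give a feel for what that means in practice, here is a minimal sketch of a GET request over a BSD-style socket API (host and path are placeholders; there's no TLS, no redirects, no chunked decoding -- and on a Cortex-M0 the socket calls would typically come from a small embedded stack such as lwIP rather than an OS):

    /* Minimal sketch: HTTP GET over plain BSD-style sockets. */
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>
    #include <netdb.h>
    #include <sys/socket.h>

    int http_get(const char *host, const char *path)
    {
        struct addrinfo hints = {0}, *res;
        hints.ai_socktype = SOCK_STREAM;
        if (getaddrinfo(host, "80", &hints, &res) != 0)
            return -1;

        int fd = socket(res->ai_family, res->ai_socktype, res->ai_protocol);
        if (fd < 0) { freeaddrinfo(res); return -1; }
        if (connect(fd, res->ai_addr, res->ai_addrlen) < 0) {
            close(fd); freeaddrinfo(res); return -1;
        }
        freeaddrinfo(res);

        char req[256];
        snprintf(req, sizeof req,
                 "GET %s HTTP/1.1\r\nHost: %s\r\nConnection: close\r\n\r\n",
                 path, host);
        write(fd, req, strlen(req));

        char buf[1024];
        ssize_t n;
        while ((n = read(fd, buf, sizeof buf)) > 0)  /* dump raw response */
            fwrite(buf, 1, (size_t)n, stdout);

        close(fd);
        return 0;
    }

Something like http_get("example.com", "/") will work against a friendly server, and that's exactly the trap: it's the unfriendly cases that make a real client big.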
Unfortunately, HTTP itself is a complex protocol. You'd have to implement all the requests you need, with Content-Types, transfer encodings, etc., and also handle all possible server responses appropriately. That's a lot. Add content negotiation, partial responses and so on, and it's "endless" work. That's exactly why there are libraries like curl that already implement all of this for you.
So, sorry to say it, but there's no simple answer that gives you what you want here. If you want to get the job done, use a library. Maybe you can find something smaller than libcurl.
What you can do is compile the library yourself, link it statically, and use compiler options like gcc's -ffunction-sections -fdata-sections together with the linker option --gc-sections, in an attempt to drop code from the library that you don't use; this might help reduce the size.
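As a sketch of what that build could look like (assuming gcc; main.c and libfoo.a are placeholder names -- note that --gc-sections is a linker option, so it's passed through gcc with -Wl,):

    gcc -Os -ffunction-sections -fdata-sections main.c libfoo.a -Wl,--gc-sections -o app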

Related

Why were blocking calls invented when the underlying nature of computers is a state machine? [closed]

I understand that implementing a state machine is the perfect way to program the computer. Since state machines are typically programmed using non-blocking calls, I wonder why blocking calls similar to the Berkeley sockets APIs were invented? Don't they encourage bad programming practice?
Thanks in advance.
Edit: The idea behind this question is to establish the fact that a multi-context, event-driven state machine based on non-blocking I/O is indeed the perfect way to program the computer. Everything else is amateur. People who think otherwise should allow for a debate.
Your question makes some pretty substantial assertions / assumptions:
the underlying nature of computers is a state machine?
Well, surely you can model computers as state machines, but that does not in itself mean that such a model represents some fundamental "underlying nature".
I understand that implementing a state machine is the perfect way to program the computer.
Then by all means, write all your programs as state machines. Good luck.
In real life, some tasks can be conveniently and effectively written as state machines, but there are many for which a state-machine approach would be cumbersome to write and difficult to understand or maintain.
There is no "perfect" way to program a computer. Indeed, it would be pretty pretentious to claim perfection even for a single program.
Since state machines are typically programmed using non-blocking calls,
You don't say? I think you would need to be a lot more specific about what you mean by this. I have written state-machine based software at times in the past, and I would not characterize any of it as having been implemented using non-blocking calls, nor as exposing a non-blocking external API.
I wonder why blocking calls similar to the Berkeley sockets APIs were invented? Don't they encourage bad programming practice?
Before we could even consider this question, you would have to define what you mean by "bad programming practice". From what I can see, however, you are assuming the conclusion:
you assert that a state-machine approach to programming is ideal, with the implication that anything else is sub-par;
you claim, without support, that only non-blocking calls have state-machine nature;
you conclude that anything that uses blocking calls must exhibit bad programming practice.
Your conclusion is not consistent with the prevailing opinion and practice of the programming community, to the extent that I can gauge it. Your argument is hollow and unconvincing.
Multiple processes (or later, threads) with synchronous (blocking) calls are easy to understand and program, and they are easily composable -- that is, you can take two tasks that are made up of synchronous calls and run them at the same time (via a scheduler) without having to modify either one in any way.
Programming as a state machine, on the other hand, requires either manually adding states (possibly in combinatorially growing numbers) when you add new code, or some kind of tightly-coupled framework of registering handlers and explicitly storing state for the next handler to use.
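To make the contrast concrete, here is a minimal sketch of that hand-rolled style (names are illustrative; a blocking version would just be a few consecutive read()/write() calls, with the "state" kept implicitly on the stack):

    #include <stddef.h>

    /* Each connection must carry explicit state so the handler for the
     * next readiness event can resume where the previous one stopped. */
    enum conn_state { READ_HEADER, READ_BODY, WRITE_REPLY };

    struct conn {
        int fd;
        enum conn_state state;   /* which case runs on the next event */
        size_t bytes_done;       /* progress saved between events */
    };

    void on_event(struct conn *c)
    {
        switch (c->state) {
        case READ_HEADER: /* parse a bit; maybe c->state = READ_BODY */  break;
        case READ_BODY:   /* accumulate; maybe c->state = WRITE_REPLY */ break;
        case WRITE_REPLY: /* write what fits; reset when finished */     break;
        }
    }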
What? A 'blocking call' implies a preemptively multitasking OS. The kernel of such an OS is a state machine, with interrupts as input events and a set of running threads as output actions.
The OS kernel is a state machine, and blocking calls conveniently move the FSM functionality into the kernel so that you don't have to write those miserable state machines in user apps.
I understand that implementing a state machine is the perfect way to program the computer
What? 'Perfect'? Have you ever developed, debugged and delivered any non-trivial multithreaded app?

Hardware accelerated cryptography -- fastest access from userspace? [closed]

So I have an embedded (Linux) system with a crypto co-processor and two userspace applications that need to use it: SSL (httpd) and proprietary code; maximizing speed and efficiency is the main requirement. I spent the day examining the kernel hooks and the part's registers, and have come to three possible solutions:
1) Access the co-processor directly, since it's memory-mapped;
2) Use the /dev/crypto interface;
3) Use OpenSSL calls for my proprietary application.
During standard operation, SSL is used very rarely and the proprietary application produces a very heavy load of plaintext needing crypto. Here are the pros and cons of each option as I see them, and how I got to this quandary:
1) Direct access
--Pros: probably the fastest method, closest to complete control of the crypto co-processor, least overhead, great for the proprietary app
--Cons: race conditions or interference could occur when SSL is being used... I'm not sure how badly two userspace apps trying to asynchronously share a hardware resource could hork things up, and I may not know until a customer finds out and complains
2) /dev/crypto
--Pros: SSL already uses it, and I believe it's session-based, so sharing problems would be mitigated if not avoided completely
--Cons: more overhead, and a lack of documentation on the proper ioctl()s to configure the co-processor for optimal, high-duty-cycle use (a sketch of the session/ioctl flow follows after this question)
3) Use SSL
--Pros: already set up and working with /dev/crypto, and rarely used... so it's just there and available for crypto calls, and it probably has the best resource-sharing management
--Cons: probably the most overhead, it may not be using /dev/crypto as efficiently as possible, and things could get bursty when both the proprietary app and httpd require SSL
I'd really like to use option 1, and will code up a test framework in the morning, but I'm curious if anyone else out there has had this problem or has any opinions. Thanks!
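For illustration, option 2 with the cryptodev-linux flavour of /dev/crypto would look roughly like the following (a hedged sketch; other /dev/crypto implementations use different ioctls, so treat the details as an assumption to verify against your kernel's headers):

    #include <fcntl.h>
    #include <stddef.h>
    #include <unistd.h>
    #include <sys/ioctl.h>
    #include <crypto/cryptodev.h>

    int encrypt_buf(unsigned char *key, unsigned char *iv,
                    unsigned char *src, unsigned char *dst, size_t len)
    {
        int cfd = open("/dev/crypto", O_RDWR);
        if (cfd < 0) return -1;

        struct session_op sess = {0};
        sess.cipher = CRYPTO_AES_CBC;
        sess.keylen = 16;                 /* AES-128 */
        sess.key    = key;
        if (ioctl(cfd, CIOCGSESSION, &sess) < 0) { close(cfd); return -1; }

        struct crypt_op cryp = {0};
        cryp.ses = sess.ses;
        cryp.op  = COP_ENCRYPT;
        cryp.len = len;                   /* multiple of the block size */
        cryp.src = src;
        cryp.dst = dst;
        cryp.iv  = iv;
        int rc = ioctl(cfd, CIOCCRYPT, &cryp);

        ioctl(cfd, CIOCFSESSION, &sess.ses);
        close(cfd);
        return rc;
    }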

What Linux port is always writing? [closed]

I am experimenting with reading (and eventually writing to) serial ports in C. I want to be able to connect to a port on Debian and read in some data, but I need a port that is writing (speaking). I am new to Linux programming.
What port, which will definitely be present and talking on Debian, can I connect to in order to read some data?
Can you also suggest a port I can eventually write to as well?
I've tried connecting to /dev/ttyUSB1, which this example uses, but that port doesn't exist.
I would suggest either opening /dev/random (or /dev/urandom), as Paul suggests, or creating your own socket pair and reading/writing to that. Don't just pick an arbitrary device and hope it carries information that no other process needs.
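For a first experiment in C, the open()/read() pattern on /dev/urandom looks like this (a minimal sketch; a real serial device such as /dev/ttyS0 works the same way, but additionally needs termios configuration for baud rate and framing):

    #include <stdio.h>
    #include <fcntl.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = open("/dev/urandom", O_RDONLY);
        if (fd < 0) { perror("open"); return 1; }

        unsigned char buf[16];
        ssize_t n = read(fd, buf, sizeof buf);  /* always has data */
        for (ssize_t i = 0; i < n; i++)
            printf("%02x ", buf[i]);
        printf("\n");

        close(fd);
        return 0;
    }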
If this is your first time working with sockets, I would also suggest playing around in a language like Python, simply because you don't need to recompile to see where you went wrong and the error messages are often more readable (take a look at https://docs.python.org/2/howto/sockets.html).
As a side note: if you have access to an Arduino, you might like to try connecting to its serial device (usually something like ser = serial.Serial('/dev/ttyACM0', 9600) in Python).

Implementing kernel bypass for a network card [closed]

My situation:
I would like the data received on a network card to reach my application as fast as possible. I have concluded that the best (as in lowest-latency) solution is to implement a network stack in user space.
The network traffic can use a proprietary protocol (if that makes writing the network stack easier) because it simply runs between two local computers.
1) What is the bare minimum list of functions my network stack will need to implement?
2) Would I need to remove/disable whatever network stack currently runs in my Linux, and how would I do this?
3) How exactly would I write the driver? I presume I would need to find exactly where the driver code gets called and then, instead of the existing driver/network stack being invoked, send the data to a piece of memory that I can access from my application?
I think the already built-in PF_PACKET socket type does exactly what you want to implement.
Drawback: The application must be started with root rights.
There are some enhancements to the PF_PACKET system that are described on this page:
Linux packet mmap
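For a feel of the PF_PACKET API, a minimal receive sketch looks roughly like this ("eth0" is a placeholder interface, and as noted it needs root or CAP_NET_RAW):

    #include <stdio.h>
    #include <unistd.h>
    #include <arpa/inet.h>
    #include <sys/socket.h>
    #include <linux/if_ether.h>
    #include <linux/if_packet.h>
    #include <net/if.h>

    int main(void)
    {
        int fd = socket(AF_PACKET, SOCK_RAW, htons(ETH_P_ALL));
        if (fd < 0) { perror("socket"); return 1; }

        struct sockaddr_ll addr = {0};      /* bind to one interface */
        addr.sll_family   = AF_PACKET;
        addr.sll_protocol = htons(ETH_P_ALL);
        addr.sll_ifindex  = if_nametoindex("eth0");
        if (bind(fd, (struct sockaddr *)&addr, sizeof addr) < 0) {
            perror("bind");
            return 1;
        }

        unsigned char frame[2048];
        ssize_t n = recv(fd, frame, sizeof frame, 0); /* one raw frame */
        printf("got %zd bytes\n", n);

        close(fd);
        return 0;
    }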
The kernel is in control of the NIC. Whenever you pass data between kernel and user space, there is a switch between protection rings, which is costly. My understanding is that you would use the standard APIs while setting the buffers to a larger size, allowing larger chunks of data to be copied between user and kernel space at a time and reducing the number of context switches for a given amount of data.
As far as implementing your own stack goes, it is unlikely that a single person can create a faster network stack than the one built into the kernel.
If the Linux kernel is not capable of processing packets at the speed you require, you might want to investigate NICs with more onboard hardware processing power. These sorts of cards are used for network throughput testing and the like.

A good serial communications protocol/stack for embedded devices? [closed]

After writing several different custom serial protocols for various projects, I've started to become frustrated with re-inventing the wheel every time. In lieu of continuing to develop custom solutions for every project, I've been searching for a more general solution. I was wondering if anyone knows of a serial protocol (or better yet, implementation) that meets the following requirements:
Support multiple devices. We'd like to be able to support an RS485 bus.
Guaranteed delivery. Some sort of acknowledgement mechanism, and some simple error detection (CRC16 is probably fine).
Not master/slave. Ideally the slave(s) would be able to send data asynchronously. This is mostly just for aesthetic reasons; the concept of polling each slave doesn't feel right to me.
OS independence. Ideally it wouldn't rely on a preemptive multitasking environment at all. I'm willing to concede this if I can get the other stuff.
ANSI C. We need to be able to compile it for several different architectures.
Speed isn't too much of an issue; we're willing to give up some speed in order to meet some of those other needs. We would, however, like to minimize the amount of required resources.
I'm about to start implementing a sliding window protocol with piggybacked ACKs and without selective repeat, but thought that perhaps someone could save me the trouble. Does anyone know of an existing project that I could leverage? Or perhaps a better strategy?
UPDATE
I have seriously considered a TCP/IP implementation, but was really hoping for something more lightweight. Many of the features of TCP/IP are overkill for what I'm trying to do. I'm willing to accept (begrudgingly) that perhaps the features I want just aren't included in lighter protocols.
UPDATE 2
Thanks for the tips on CAN. I have looked at it in the past and will probably use it in the future. I'd really like the library to handle the acknowledgements, buffering, retries, etc., though. I guess I'm looking for more of a network/transport layer than a datalink/physical layer.
UPDATE 3
So it sounds like the state of the art in this area is:
A trimmed down TCP/IP stack. Probably starting with something like lwIP or uIP.
A CAN-based implementation; it would probably rely heavily on the CAN bus, so it wouldn't be useful on other physical layers. Something like CAN Festival could help along the way.
An HDLC or SDLC implementation (like this one). This is probably the route we'll take (a framing sketch follows below).
Please feel free to post more answers if you come across this question.
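For reference, the framing half of an HDLC-style link is genuinely small. Here is a minimal byte-stuffing sketch (0x7E flag and 0x7D escape, as in asynchronous HDLC/PPP framing; CRC, ACKs and retries would sit on top of this):

    #include <stddef.h>

    #define FLAG 0x7E  /* frame delimiter */
    #define ESC  0x7D  /* escape: next byte is XORed with 0x20 */

    /* Encode payload into out[], which must hold at least 2*len + 2 bytes.
     * Returns the number of bytes written. */
    size_t frame_encode(const unsigned char *payload, size_t len,
                        unsigned char *out)
    {
        size_t o = 0;
        out[o++] = FLAG;
        for (size_t i = 0; i < len; i++) {
            if (payload[i] == FLAG || payload[i] == ESC) {
                out[o++] = ESC;
                out[o++] = payload[i] ^ 0x20;
            } else {
                out[o++] = payload[i];
            }
        }
        out[o++] = FLAG;
        return o;
    }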
Have you considered HDLC or SDLC?
There's also LAP/D (Link Access Protocol, D-Channel).
Uyless Black's "Data Link Protocols" is always nearby on my bookshelf - you might find some useful material in there too (even peruse the TOC & research the different protocols)
CAN meets a number of your criteria:
Support multiple devices: It supports a large number of devices on one bus. It's not, however, compatible with RS485.
Guaranteed delivery: The physical layer uses bit-stuffing and a CRC, both of which are implemented in hardware on an increasing number of modern embedded processors. If you need acknowledgement, you need to add that on top yourself.
Not master/slave: There are no masters or slaves; all devices can transmit whenever they want. The processor hardware deals with arbitration and contention.
OS independence: Not applicable; it's a low-level bus. What you put on top of that is up to you.
ANSI C: Again, not applicable.
Speed: typically up to 1 Mbps at up to 40 m; you can choose your own speed for your application.
As mentioned, its definition is fairly low-level, so there's still work to be done to turn it into a full protocol that meets your needs. However, the fact that a lot of the work is done in hardware for you makes it very useful for a variety of applications.
I'd guess a reasonable starting point could be uIP.
(Adding Wikipedia article on µIP since original link is dead.)
Would you consider the MODBUS protocol? It is master/slave oriented, so a slave cannot initiate a transfer, but otherwise it is lightweight to implement, free, and well supported with high-level tools. You just need to get a grasp of its terminology (like holding register, input register, output coil, etc.).
The PHY level could be RS-232, RS-485, Ethernet...
Have a look at Microcontroller Internet Network (MIN):
https://github.com/min-protocol/min
Inspired by CAN but using standard UART hardware, with Fletcher's checksum and frame-format checking for error detection, and byte-stuffing to mark a frame header.
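For what it's worth, Fletcher's checksum itself is tiny and MCU-friendly; a classic Fletcher-16 looks like this (a sketch -- MIN's exact frame format and checksum variant should be taken from the repository above):

    #include <stddef.h>
    #include <stdint.h>

    /* Fletcher-16: two running 8-bit sums modulo 255; catches byte
     * reordering that a plain additive checksum misses. */
    uint16_t fletcher16(const uint8_t *data, size_t len)
    {
        uint16_t sum1 = 0, sum2 = 0;
        for (size_t i = 0; i < len; i++) {
            sum1 = (sum1 + data[i]) % 255;
            sum2 = (sum2 + sum1) % 255;
        }
        return (uint16_t)((sum2 << 8) | sum1);
    }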
Take a look at Profibus.
If you don't want master/slave, I think you ought to do the arbitration in hardware (CAN bus, FlexRay).
