C on smartcards [closed]

I have the task of writing some crypto code in C and making it lightweight. The idea behind making it lightweight is that it could run on a smartcard, which doesn't offer much computational power or memory. It won't come to actually running it on a smartcard, and it won't be for any practical use.
However, I'm curious whether I could run the program on a smartcard without major adjustments. I'm aware that I'd probably have to change something in the I/O part, but let's set that aside. And by "smartcard" I mean a regular smartcard that the majority of private individuals could afford, not some fancy stuff.
To restrict the question a little more:
Could I run the program without modification if I only use 8-bit integers in my program and the architecture is 8-bit or wider, as well as stay below the memory limit?
If no, why not?
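To make this concrete, the kind of code I have in mind uses only fixed-width 8-bit types from <stdint.h>. Assuming the card's toolchain supports C99, I'd expect this arithmetic to behave the same on the card as on a PC:

    #include <stdint.h>

    /* Toy example: one XOR round written purely with uint8_t. */
    void xor_round(uint8_t *buf, uint8_t len, uint8_t key)
    {
        for (uint8_t i = 0; i < len; i++) {
            buf[i] ^= key;                /* 8-bit XOR, truncated back to uint8_t */
            key = (uint8_t)(key + 0x9D);  /* wraps modulo 256 on any target */
        }
    }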

Due to their limited CPU power, smartcards (SCs) mostly have their own security/encryption hardware and OS. The latter, for instance, controls access to critical features like the interface and key storage. Also, some of them have countermeasures against typical attack scenarios like differential cryptanalysis.
There are standards available, but which to pick depends on the actual card used. There are various SCs on the market with different capabilities and demands.
It is unlikely that your program will run without major modifications.
Note that the specs are mostly available only under NDA, and possibly with additional guarantees from your side. The actual level of secrecy depends on the card's capabilities and the vendor.

Related

Can Python ever be faster than C in C based OS? Why? [closed]

If Python is implemented in C, can Python ever surpass C in speed?
I know that the next stages are assembly and binaries when programs communicate with the OS and hardware. My assumption is that since most operating systems are coded in C, any code that works on top of such an OS cannot be faster than C.
All things being equal, code running in an interpreter will execute more slowly than code running natively. However, things are rarely equal, and while I can't think of an example offhand, I would not be surprised if there were circumstances where a Python solution could execute faster than a C-based one (it'd probably be pretty esoteric, though).
Beyond that, raw execution speed is only one metric, and it's not the most important. It doesn't matter how fast your code is if it does the wrong thing, or nukes a server if someone sneezes, or exposes your system to malware, or it takes you a year to deliver a solution.
Python provides a bunch of high-level abstractions and tools that C doesn't, leading to faster development time (which is where the expense really is). You don't have to worry (as much) about memory leaks, buffer overruns, etc.
There is no such thing as a silver bullet, and no language is best at all things. There are times when a C-based solution is the right answer, and there are times when a Python-based solution is the right answer.
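To make the "things are rarely equal" point concrete in C alone: the same task can be linear or quadratic depending on how it's written, and an interpreter running the linear algorithm beats native code running the quadratic one once the input is large enough. Python's str.join is essentially the linear pattern below implemented in optimized C, while naive strcat-in-a-loop C code is the quadratic one. A minimal sketch (the sizes are illustrative):

    #include <stdio.h>
    #include <string.h>

    #define PIECES 1000

    /* Quadratic: strcat() rescans the whole string on every call. */
    static void build_quadratic(char *out)
    {
        out[0] = '\0';
        for (int i = 0; i < PIECES; i++)
            strcat(out, "x");            /* O(n) scan each time -> O(n^2) total */
    }

    /* Linear: remember where the string ends and append there. */
    static void build_linear(char *out)
    {
        char *end = out;
        for (int i = 0; i < PIECES; i++)
            *end++ = 'x';                /* no rescan -> O(n) total */
        *end = '\0';
    }

    int main(void)
    {
        static char a[PIECES + 1], b[PIECES + 1];
        build_quadratic(a);
        build_linear(b);
        printf("equal: %d\n", strcmp(a, b) == 0);
        return 0;
    }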

Best practices for securing an embedded device [closed]

I'm currently developing an embedded controller, which will be connected to a potentially hostile environment. Since the controller is quite limited (~50 MHz, ~16 KiB RAM), I do not have the luxury of an operating system that can help me with memory protection.
What is considered best practices for securing an embedded device? I know of techniques like stack guards, but since I'm not familiar with embedded development, I'm looking for some kind of guidance.
Edit: I'm using an ATSAMD21G18, which does not have an MMU. It's the same chip used on many Arduinos. The controller will be connected to a public bus (as in wiring, not the transportation method), so I cannot assume anything about the behaviour of other bus members.
I am, however, not trying to protect IP; that is, I'm not worried about somebody figuring out the contents of my controller. It's more about application security: how do I limit the harm done by somebody trying to take over my controller by exploiting, e.g., buffer overruns?
Automotive MCUs typically have a "copycat" protection which blocks any form of debugger access: you can't read anything out of the MCU or debug it while this is active; you have to erase everything.
Check out MCUs by silicon vendors with a lot of automotive customers, such as NXP/Freescale or Renesas.
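On the application-security side the question asks about, the cheapest defence on an MMU-less part is to distrust every length field that arrives on the bus and bounds-check it before copying. A minimal sketch (frame_t, MAX_PAYLOAD and frame_copy are hypothetical names, not from any vendor API):

    #include <stdbool.h>
    #include <stddef.h>
    #include <stdint.h>
    #include <string.h>

    #define MAX_PAYLOAD 64u

    typedef struct {
        uint8_t len;
        uint8_t payload[MAX_PAYLOAD];
    } frame_t;

    /* Wire format assumed here: one length byte followed by the payload. */
    bool frame_copy(frame_t *dst, const uint8_t *raw, size_t raw_len)
    {
        if (raw_len == 0)
            return false;                     /* no length byte at all */
        uint8_t len = raw[0];
        if (len > MAX_PAYLOAD || (size_t)len + 1 > raw_len)
            return false;                     /* claimed length doesn't fit */
        dst->len = len;
        memcpy(dst->payload, raw + 1, len);
        return true;
    }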

Quantifiable differences between RTOS kernels for small ARM microcontrollers [closed]

There are many different RTOS available for microcontrollers. I am specifically looking for RTOS that support the ARM Cortex M processors. Also, I am not interested in closed source solutions.
Attempting to compare the relative merits of each RTOS from websites and mailing lists seems pretty difficult, as they mostly seem to have equivalent features and do the same thing. The real differences become apparent only after trying to use each RTOS for some tasks.
I know this is a somewhat subjective question and probably hard to answer, but there must be many people out there who have actually tried several different RTOSes and formed an opinion of the relative merits of each one.
I am specifically interested in FreeRTOS, ChibiOS and Coocox CoOS, but other choices are also very welcome.
For example: it would seem that in ChibiOS, ISRs can call any system function, but those calls must be wrapped in chSysLockFromIsr()/chSysUnlockFromIsr(), and the code is not preemptable during those sections. In CoOS, the only callable functions are the ones starting with isr_, such as isr_PostSem(), isr_PostMail(), isr_PostQueueMail() and isr_SetFlag(), but those functions internally use a service request queue, which means most of the request processing is preemptable.
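For concreteness, the ChibiOS pattern described above looks roughly like this (a sketch using the 2.x-era names quoted here; my_uart_irq and rx_sem are placeholder names, and later ChibiOS versions rename the calls to chSysLockFromISR()/chSysUnlockFromISR()):

    static Semaphore rx_sem;   /* initialised elsewhere with chSemInit() */

    CH_IRQ_HANDLER(my_uart_irq) {
      CH_IRQ_PROLOGUE();
      chSysLockFromIsr();
      chSemSignalI(&rx_sem);        /* I-class call: only legal inside the lock */
      chSysUnlockFromIsr();
      CH_IRQ_EPILOGUE();
    }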
Some of the features that one could take into account while choosing the RTOS:
context-switch time
interrupt latency
synchronization mechanisms (flags, semaphores, mutexes, mailboxes, queues, ...)
priority inversion handling
memory management support (i.e. memory pools)
scheduling policies support
MMU support
process support
memory footprint
efficiency of the kernel itself
POSIX vs. non-POSIX APIs
software eco-system available (a.k.a. middleware)
...
Which points deserve more focus depends on the application you're going to run. But generally, these are the things I can remember that make a difference between the various RTOSes.

What areas of computer science are particularly relevant to mobile development? [closed]

This isn't a platform specific question - rather I'm interested in the general platform independent areas of computer science that are particularly relevant to mobile applications development.
For example, things like compression techniques, distributed synchronisation algorithms, etc. What theoretical concepts have you found relevant, useful or enabling when building mobile apps?
Human-computer interaction is an important consideration, when you consider that mobile devices have all sorts of inputs that a "normal" computer would not - such as touch screens (with multi-touch), one or more microphones, camera(s), etc...
From embedded software development comes the habit of handling scarce resources such as CPU load and battery life.
My 2 cents: Augmented reality, NFC (RFID)
process calculi
I don't understand why "All of computer science" isn't relevant. (Even large-scale computing is relevant: you can't have a small device in your hands that does really complicated stuff at large scale unless there's a big engine someplace else.)
Derecursivation (turning recursive code into an iterative loop) came in handy once because some systems try to limit the default available stack size.
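A minimal C illustration of that transformation, using a factorial just for concreteness:

    #include <stdint.h>

    /* Recursive form: each call consumes a stack frame, so depth n
       needs O(n) stack -- a problem where the default stack is tiny. */
    uint64_t fact_rec(unsigned n)
    {
        return n < 2 ? 1 : n * fact_rec(n - 1);
    }

    /* Derecursived form: the same result in constant stack space. */
    uint64_t fact_iter(unsigned n)
    {
        uint64_t acc = 1;
        while (n > 1)
            acc *= n--;
        return acc;
    }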
Paging (how the OS splits memory into fixed-size "page" units) is useful to understand when deciding the size of temporary buffers.
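For example, on a POSIX system you can size a temporary buffer as a whole number of pages (a sketch; the factor of 4 is arbitrary):

    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>

    int main(void)
    {
        long page = sysconf(_SC_PAGESIZE);   /* typically 4096 */
        size_t buf_size = (size_t)page * 4;  /* four whole pages */
        char *buf = malloc(buf_size);
        if (!buf)
            return 1;
        printf("page size %ld, buffer %zu bytes\n", page, buf_size);
        free(buf);
        return 0;
    }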
The notion of context: context-awareness and/or context-orientation
And also mobile ad-hoc networks.

What alternatives to Hans Boehm GC are out there for small devices? [closed]

I'd like to use a virtual machine like NekoVM on a small device, but building it requires the Boehm GC, and there is no port of that GC to this device. So I was wondering if there is any alternative to it, something that could be done exclusively with C code?
I'd say your best option would be to port the GC to your platform, for which there are instructions (libgc porting instructions).
Additionally, it should be possible to swap out the GC implementation (NekoVM FAQ); see the vm/alloc.c file.
EDIT:
Hopefully useful additional links (untested):
Smieciuch Garbage Collector
libgcroots (based on libgc 7, abstracts architecture dependant bits)
Squirrel programming language
Perhaps you'd be better off with Lua, which has a very small but powerful virtual machine, has its own garbage collector built in, and runs on any platform that supports ANSI Standard C. With just a little effort you can even build Lua on a machine that lacks standard input and standard output. I have seen Lua running on an embedded device that was a small LCD touch screen with an embedded CPU stuck on the back. Neko is good work, but I think you'll find Lua every bit as satisfying.
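For a sense of how small the embedding surface is, a complete Lua host program is just a few calls (a sketch against the standard Lua C API; link with -llua):

    #include <lua.h>
    #include <lualib.h>
    #include <lauxlib.h>

    /* Whole VM lifecycle: create, open standard libraries, run a chunk,
       shut down. */
    int main(void)
    {
        lua_State *L = luaL_newstate();
        luaL_openlibs(L);                /* optional on very small targets */
        luaL_dostring(L, "print('hello from Lua')");
        lua_close(L);
        return 0;
    }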
I could suggest TinyGC (tinygc.sf.net), an independent lightweight implementation of the BoehmGC API targeting small devices. It is fully API-compatible (indeed, binary-compatible) with BoehmGC v7+, but only a small subset of the API is implemented (still sufficient for Java/GCJ-like memory management), and there is no automatic registration of threads and static data roots. The latter may require some effort to make NekoVM work with it (i.e., calling GC_register_my_thread() and GC_add_roots()).
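For reference, the Boehm-compatible subset used above looks like this in practice (a sketch; GC_INIT() and GC_MALLOC() are the standard BoehmGC entry points that TinyGC also provides):

    #include <gc.h>   /* BoehmGC header; TinyGC is API-compatible */

    int main(void)
    {
        GC_INIT();                        /* must run before the first allocation */
        for (int i = 0; i < 100000; i++) {
            char *p = GC_MALLOC(64);      /* collected automatically, never free()d */
            p[0] = (char)i;               /* touch the block */
        }
        return 0;
    }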
