How is primary memory organised in a microcontroller? [closed]

Closed. This question needs details or clarity. It is not currently accepting answers.
Closed 6 years ago.
My query is: how is memory organized and managed in a microcontroller?
(It doesn't have any OS, i.e. there is no MMU.)
I am working on a Zynq-7000 (ZC702) FPGA. It has an ARM core and DDR memory connected together with AXI interconnects.
When I write 1111111111 in decimal (ten 1s) to DDR, I read back the same value.
When I write 11111111111 in decimal (eleven 1s) to DDR, it gives me -1.
Here the whole physical memory is consumed. This will not be the case when I use a microcontroller.
Who manages the memory in a microcontroller?

Okay, that is a Cortex-A9; in no way, shape, or form is that a microcontroller.
When you read the ARM documentation for that architecture you will find the MMU in there. It is, as expected, part of the Cortex-A core.
Who/how/when is the DDR initialized? Someone has to initialize it before you can write to it and read it back. Ten decimal 1s fit in a 32-bit write/read; eleven decimal 1s take more than 32 bits. What size write/read did you use? How much DDR do you have? I suspect you are not ready to be talking to DDR.
There is 256KB of on-chip RAM; maybe you should mess with that first.
All of this is in the Xilinx and ARM documentation. Experienced bootloader developers can take months to get DDR initialized. How/who/when was that done? Do you have DRAM tests to confirm that it is up and working, if you weren't the one to initialize it? Has Xilinx provided routines for that? (How is your DDR initialized? Who did it (your code, some code before yours, logic, etc.), and when was it done, before your code ran?) Maybe it is your job to initialize it.
For the MMU you just read the ARM docs, as with everything about that core. Then work with the Xilinx docs for more. And then, of course, who did the rest of the FPGA design to connect the ARM core to the DRAM? What address space are they using, what decoding, what alignments do they support, what AXI transfers are supported, etc.? That is not something anyone here could answer; you have to talk to the FPGA logic designers for your specific design.
If you have no operating system then you are running bare metal. You, the programmer, are responsible for memory management. You don't necessarily need the MMU; sometimes it helps, sometimes it just adds more work. It depends completely on your programming task and the overall design of the software and system. The MMU is hardware; an operating system is software that runs on hardware. One might use the other, but they are no more tied to each other than the interrupt controller, DRAM controller, or UART are tied to the operating system. The operating system or other software may use that hardware, but you can use it with other software as well.

Related

Explanation of ARM (specifically mobile) peripherals addressing and bus architecture?

I will first say that I'm not an expert in the field and my question might contain misunderstandings, in which case I'll be glad if you correct me and point me to resources so I can learn further details.
I'm trying to figure out how the system bus works and how the various devices that appear in a mobile device (such as sensor chips, WiFi/BT SoCs, touch panels, etc.) are addressed by the CPU (and by other MCUs).
In the PC world we have the bus arbitrator that routes commands/data to the devices, and, AFAIK, the addresses are hardwired on the board (correct me if I'm wrong). However, in the mobile world I didn't find any evidence of that type of addressing. I did find that ARM has standardized the Advanced Microcontroller Bus Architecture (AMBA); I don't know, though, whether that standard applies to the components (CPU cores) that lie inside the same SoC (Exynos, OMAP, Snapdragon, etc.) or also influences peripheral interfaces. Specifically, I'm asking which component is responsible for allocating addresses to peripheral devices and MMIO addresses?
A more basic question would be whether there even exists bus management in the mobile device architecture, or maybe there is some kind of "star" topology (where the CPU is the center).
From this question I get the impression that these devices are considered platform devices, i.e., devices that are connected directly to the CPU and not through a bus. Still, my question is: how does the OS know how to address them? Other threads, this and this about platform devices/drivers, made me confused.
A difference between ARM and x86 is PIO. There are no special instructions on ARM to access an I/O device. Everything is done through memory-mapped I/O.
A second difference is that ARM (and RISC in general) has load/store unit(s) that are separate from the normal logic.
A third difference is that ARM licenses both the architecture and logic cores. The first is used by companies like Apple, Samsung, etc., who make a clean-room version of the cores. For the second set, who actually buy the logic, the ARM CPU will include something from the AMBA family.
Other peripherals from ARM, such as the GIC (Cortex-A interrupt controller), NVIC (Cortex-M interrupt controller), L2 controllers, UARTs, etc., will all come with an AMBA-type interface. Third-party companies (ChipIdea USB, etc.) may also make logic that is set up for a specific ARM bus.
Note AMBA at Wikipedia documents several bus types.
APB - a lower-speed peripheral bus; sort of like a south bridge.
AHB - several versions (older; like a north bridge).
AXI - a newer multi-master, high-speed bus. Example: NIC301.
ACE - an AXI extension.
A single CPU/core may have one, two, or more master connections to an AXI bus. There may be multiple cores attached to the AXI bus. The load/store and instruction-fetch units of a core can use the multiple ports to dispatch requests to separate slaves. The SoC vendor will balance the number of ports against expected memory bandwidth needs. GPUs are also often connected to the AXI bus, along with DDR slaves.
It is true that there is no 100% standard topology, especially if you consider all possible future ARM designs. However, typical topologies will include a top-level AXI with some AHB peripherals attached. One or more 2nd-level APB buses will provide access to low-speed peripherals. Not every SoC vendor wants to spend time redesigning peripherals, and the older AHB interface speeds may be quite fine for a device.
Your question is tagged embedded-linux. For the most part Linux just needs to know the physical addresses. On occasion, the peripheral bus controllers may need configuration. For instance, an APB may be configured to allow or disallow user-mode access. This configuration could be locked at boot time. Generally, Linux doesn't care too much about the bus structure directly. Programmers may have coded a driver with knowledge of the structure (like IRAM is faster, etc.).
Still, my question is: how does the OS know how to address them?
Older Linux kernels put these definitions in a machine file and passed a platform resource structure including the interrupt number and the physical address of a register bank. In newer Linux versions, this information comes from Open Firmware / device tree files.
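A device tree node is roughly what that looks like today. This fragment is illustrative only (made-up address, interrupt number, and compatible string), not from any particular SoC:

```dts
uart0: serial@40001000 {
    compatible = "vendor,example-uart";
    reg = <0x40001000 0x1000>;   /* physical base address and size */
    interrupts = <29>;           /* the same facts the old platform
                                    resource structures carried */
};
```

The kernel's generic code parses this at boot and hands the addresses and interrupt numbers to whichever driver matches the compatible string.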
Specifically, I'm asking which component is responsible for allocating addresses to peripheral devices and MMIO addresses?
The physical addresses are set by the SoC manufacturer. Linux platform support will use the MMU to map them as non-cacheable into some unused virtual range. Often the physical addresses are very sparse, so the virtual remapping can pack them more densely. Each mapping occupies a TLB entry (the MMU's cache).
Here is a sample SOC bus structure using AXI with a Cortex-M and Cortex-A connected.
The PBRIDGE components are APB bridges, connected in a star topology. As others suggest, you need to look at your particular SoC documentation for specifics. However, if you have no SoC and are trying to understand ARM generally, some of the information above will help you no matter what SoC you have.
1) ARM does not make chips; they make IP that is sold to chip vendors who make chips. 2) Yes, the AMBA/AXI bus is the interface from ARM to the world. But that is on-chip, so it is up to the chip vendor to decide what to hook up to it. Within a chip vendor you may find standards or habits; those standards or habits may be that for a family of parts the same peripherals may be found at the same addresses (same UART peripheral, same SPI peripheral, clock tree, etc.). And of course sometimes the same peripheral is at different addresses within the family, and sometimes there is no consistency.

In the Intel x86 world, Intel makes the processors and has historically made many of the peripherals, be they individual parts, super-I/O parts, north and south bridges, or parts in the same package. Intel's processor success lies primarily in reverse compatibility, so you can still access a clone UART at the same address you could access it on your original IBM PC. When you have various chip vendors you simply cannot do that; ARM does not incorporate the peripherals for the most part, so getting the vendors to agree on stuff simply will not happen. This has driven folks crazy, yes, and Linux is in a constant state of emergency with ARM since it rarely, if ever, just works on any platform. The additions tend to be specific to one chip or vendor or nuance, not caring to check that the addition is in the wrong place or that the workaround does not apply everywhere and should not be applied everywhere.

The Cortex-Ms have taken a small step; before, with the ARM7TDMI, you had the freedom to use whatever address space you wanted for anything. The Cortex-M has divided the space up into some major chunks, along with some internal addresses (not just the Cortex-Ms; this is true of a number of the cores). But beyond a system timer and maybe an interrupt controller it is still up to the chip vendor.
The x86 reverse-compatibility habits extend beyond Intel, so PCs have a lot of consistency across motherboard vendors (partly driven by the software they want to run on their systems, namely Windows). Embedded in general, be it ARM or MIPS or whomever, puts stuff wherever, and the software simply adapts; for embedded/phone software the work is on the developer to select the right drivers and adjust physical addresses, etc.
AMBA/AXI is simply the bus standard, like Wishbone or ISA or PCI, USB, etc. It defines how to interface to the ARM core, the processor from ARM; this is basically on-chip. The chip vendor then adds, or buys from someone, IP to bridge the AMBA/AXI bus to PCI or USB or DRAM or flash, etc.; on-chip or off is their choice, it is their product. Other than perhaps a few large chunks, the chip vendor is free to define the address space, and certainly free to define what peripherals go where. They don't have to use the same USB IP or DRAM IP as anyone else.
Is the ARM at the center? Well, with your smartphone processors you tend to have a graphics coprocessor, so then you have to ask who owns the world: the ARM, the GPU, or someone else? In the case of the Raspberry Pi, which is to some extent one of these flavors of processor (albeit older and slower now), the GPU appears to be the center of the world and the ARM is a side fixture that has to time-share on the GPU's bus. Who knows what the protocol/architecture of that bus is; the ARM is AXI of course, but is the whole chip, or does the bridge from the ARM to the GPU side switch to some other bus protocol? The point being, the answer to your question is no: there is no rule, there is no standard; sometimes the ARM is at the center, sometimes it isn't. It is up to the chip and board vendors.
Not interested in terminology; maybe someone else will answer that part. But I would say that outside an elementary sim you won't have just one peripheral (okay, I will use that term for generic stuff the processor accesses) tied to the AMBA/AXI bus. You need a first-level AMBA/AXI interface that then divides up the address space per your design, and then use AMBA/AXI or whatever bus protocol you want (generally you adapt to the interface of the purchased or designed IP). You, the chip vendor, decide on the address space. You, the programmer, have to read the documentation from the chip vendor (or also the board vendor) to find the physical address of each thing you want to talk to, and you compile that knowledge into your operating system or application per the rules of that software or build system.
This is not unique to ARM-based systems; you have the same problem with MIPS and PowerPC and other cores you can buy in IP form. For whatever reason ARM has dominated the world (there are many ARM processors in or outside your computer for every x86 you own; x86 processors are extremely low-volume compared to ARM). Like Gates wanted a desktop in every home, a long time ago ARM had a "touch an ARM once a day" type of push for their product, and now most things with a power switch, and in particular with a battery, have an ARM in them somewhere. This is a nightmare for developers, because there are so many ARM cores now, with nuances, and every chip vendor, every family, and sometimes members within a family are different. As a developer you simply have to adapt: write your stuff in modular form, mix and match modules, change addresses, etc. Making one binary that runs everywhere, like Windows does for example, is not in any way a wise goal for ARM-based products. Make the modules portable and build the modules per target.
Each SoC will be designed to have its own (possibly configurable) memory map. You will need to read the relevant technical reference manual to get the exact details.
Examples are:
Raspberry Pi datasheet (PDF)
OMAP 5 TRM

Programming for Embedded System vs Device Drivers [closed]

Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 9 years ago.
What is the difference between programming for embedded systems vs. device drivers? Both areas deal with making the hardware do a specific task. I would appreciate an explanation. I have knowledge of C and I would like to go a bit deeper into dealing with the hardware.
What is the difference between programming for embedded systems vs device drivers?
Writing a device driver means a very specific thing: writing low-level code that runs at elevated privilege in the kernel. It's quite tricky, but if your hardware is similar enough to existing hardware, you can sometimes "get by" by copying an existing driver and making a few changes. Writing a driver from scratch involves knowing a lot about the kernel. Device drivers are almost always written in C.
Writing for an "Embedded system" isn't very specific. Generally, it means "programming on a computer with fewer resources than a desktop PC, and maybe special hardware". There is no real line between "embedded computer" and "general purpose computer".
Everyone would agree that an 8-bit system with 128 bytes of RAM calls for "embedded programming" (Arduino). But the Raspberry Pi (with GBs of RAM, hard drives, HDMI display) can be considered embedded or not depending on your view. If you unplug the monitor and put it on a robot, more people would say it requires embedded programming. People sometimes call programming apps for phones "embedded programming", but generally they call it "mobile" instead.
Embedded systems can be programmed in high level languages like Ruby/Python, or even shell scripts.
What are some purposes for programming device drivers?
Well, any time you have a hardware device. These days we have FUSE and libusb, which blur the line. But if you want your WiFi card/webcam/USB port to be recognized by the OS, it needs a driver.
What can't you do programming-wise for embedded systems that you can programming device drivers, and vice versa?
As I said, embedded systems sometimes contain bash scripts (e.g., my home router).
I'm confused because they both deal with programming for hardware specifically on a low level.
There is some overlap, but they are quite distinct.
Embedded is an adjective that describes the whole system, while "driver" refers to one specific tiny part of the system. You can do driver programming without doing embedded (e.g., writing a driver for a webcam on your desktop), and you can do embedded programming without writing new kernel drivers. (E.g., there is no need to write drivers if all your hardware is supported by the kernel.)
If i wanted to create a robot would this be under embedded systems or device drivers?
On-board robotic systems are usually embedded programming. It gets fuzzy if you strap a laptop to your robot -- people might say that's not embedded anymore, since it's a desktop OS. (Embedded systems rarely have a GUI, and if they do, it's rarely a mainstream one.)
Your robot may or may not require writing new drivers. Maybe the motor can be turned on from user space, so you don't need a driver. On the other hand, there are times where you need the extra features found only in the kernel: Faster response times, access control, etc. For example, if your program dies, it might leave the motor running, and that's bad. So you can write a kernel driver that will clean up for your program when the program exits. It's a little bit more work up front, but can make development simpler down the road.
What about making the GPU of a PC work for that OS? Would that be device drivers? If the hardware is standalone, without an OS, then is it embedded?
Yes. Writing a GPU driver is writing kernel device driver code. (it's fuzzy these days because of libraries, but whatever.) If you wrote it on embedded hardware, you can call it both device driver and embedded programming.
The way you have posed the question, the answer is there is no difference. You have asked what the difference is between an apple and an apple. None.
Now if you want to, say, compare bare metal and Linux device drivers? Well, Linux device drivers involve a lot of operating-system API calls you have to make, and you have to conform to that sandbox, so there is a lot of work there on top of the poking and peeking of registers and memory of the various peripherals. If you go bare metal (no operating system) then you can do pretty much anything you want; you can create more work for yourself than a (Linux) device driver, or you could create less.
You can go to the depth of a device driver, or all the way to bare metal; it is your choice. As far as the peripheral is concerned, the stuff you have to do to it or with it will be similar; the differences have to do with dealing with the operating system vs. dealing without an operating system.
Maybe you should pick a task and do that; something like "send a byte out a serial port" is a reasonable goal. Putting a pixel on a display (the Raspberry Pi is an exception), anything graphics, anything USB, is not a reasonable starting point; there is a considerable amount of overhead, knowledge, and experience you would need before doing that. Blinking an LED (basic GPIO), reading a button, and UART TX and RX are generally where you get your feet wet with bare metal. Granted, tty/UART stuff on Linux is far from beginner stuff, so you really just have to start trying things, fail, get up and try something else, and see where that takes you. Fortunately there are tons of simulators out there, so you can do all of these things using free everything: simulators, toolchains, etc.

What's the point of a Linux character device driver if you can just use outb/inb from userspace? [closed]

As it currently stands, this question is not a good fit for our Q&A format. We expect answers to be supported by facts, references, or expertise, but this question will likely solicit debate, arguments, polling, or extended discussion. If you feel that this question can be improved and possibly reopened, visit the help center for guidance.
Closed 9 years ago.
I'm having a hard time understanding when I should write a device driver instead of just sending opcodes directly to the hardware via outb from my userspace programs. I initially figured that I should create simple routines for the hardware, but now I'm starting to think that algorithms should stay in userspace.
Suppose I'm programming a hypothetical robotic arm. I could write several functions in a Linux kernel module that would automate the hardware outputs needed for common tasks (e.g. move arm to HOME position, pick up a new block from a known location at the start of the assembly line, etc.). However, after reading more about device drivers, it seems the rule of thumb is to keep the device driver as close to hardware-specific code as possible, leaving the "heavy lifting" algorithms to userspace.
This confuses me: if the only functions implemented by the device driver are simple opcode calls, what's the reason for a userspace program to use the device file instead of calling outb/inb directly?
I suppose what I'm trying to figure out is: how do I decide what functionality goes in kernelspace instead of userspace?
Good question. I've wrestled with that - I've even written drivers to control robotic arms when I knew for a fact it was not necessary. I could just as easily have sent commands through a serial port, or outb(), etc. I only wrote those drivers for educational purposes.
There are a lot of good reasons for a device driver. Imagine trying to control your network card directly from userspace! First of all, a driver gives you a nice abstraction at the OS level (eth0, etc). But trying to handle interrupts for packet send/receive in userspace would be wildly impractical from a performance standpoint - maybe even impossible. Just the time it takes to respond to an interrupt in userspace would drag the interface to its knees.
Imagine further that you bought a new network card. Wouldn't it be nice to just load the new driver and continue talking to eth0 from userspace with no changes to your code?
So, I would say "there is no point" in writing a driver if you don't see the need. I think the existence of drivers is driven by the need (as in the NIC driver example), not the other way around.
It sounds like for your app, outb() is going to be much more straightforward than creating a driver. In the end, I didn't even use my robotic arm drivers - just writing bytes to the serial port worked just as well - and only required a few lines of code ;-)
If you use outb and inb in userspace, then your userspace code will be x86-specific - the userspace outb() and inb() macros are implemented with x86 assembly. On the other hand, if you write a kernel driver, then your driver will work on any PCI-supporting architecture - the inb() and outb() functions in the kernel are implemented in an architecture-specific manner. The kernel also gives you functions like request_region() to ensure that your I/O ports don't clash with any other driver.
Furthermore, your userspace driver would need to run as root (or, technically, with the CAP_SYS_RAWIO capability, which is root-equivalent). A character device driver in the kernel means you can use the UNIX permissions on the character device file to control which userspace users can access the device.
A device driver must implement only the mechanism to handle the hardware (regardless of the operating system).
All the intelligence of a solution must live in userspace.
Yes, you can do everything in userspace, but:
it is not reusable; other userspace programs must re-implement the mechanism to gain access to the robotic arm (for example)
bad performance; it depends on the application - maybe it is not a problem for a robotic arm (which is slow), but it can be a problem for a network card, a disk, or a graphics card
So, for a robotic arm, you should implement the mechanism in the driver (move the motor, get information from the sensors). Then your program, and other programs, can use the driver to do something clever with the arm. The clever stuff is done by the userspace program: paint the Gioconda, prepare a cake, move dynamite carefully. The driver is the implementation of the basic functions that allow its users to use the hardware.
But obviously, it depends on the hardware and context.

Why is an operating system (OS) called hardware-dependent/platform-dependent? [closed]

It's difficult to tell what is being asked here. This question is ambiguous, vague, incomplete, overly broad, or rhetorical and cannot be reasonably answered in its current form. For help clarifying this question so that it can be reopened, visit the help center.
Closed 12 years ago.
Why do we say that the OS is purely hardware-dependent (other than hardware peripherals like RAM/USB etc.)?
By hardware independence I mean that the OS should run on any platform (ARM/x86/Xtensa/StarCore, etc.) without any underlying hardware abstraction layer.
Can you please give me the exact hardware dependencies in a simple/common OS? That is, at exactly which points does the OS access the hardware or depend on the platform?
Also, is it possible to write a simple OS or an RTOS (in C) without any hardware or platform dependency (i.e. without any VM concept), so that it will run on any platform?
I would expect answers from the OS kernel side and not from the peripheral side (RAM/keyboard/mouse).
I will give you an example of an exact hardware dependency in an OS: at context switching, the context of tasks/threads can be stored only with the help of the underlying platform/CPU.
__Kanu
In general, the following things are hardware dependent:
System startup/reset
Interrupt handling
Virtual memory management & protection
Device I/O
System-level protections for code access and security
Some mutual exclusion primitives.
At some level, way way down, an OS kernel needs to sit on top of something. Most kernels are written such that they touch the hardware with as small a surface area as possible, but there must be some touch point there.
You can write most of a kernel in C (this is usually the case), but you'll need to run on top of something. If you fudge the definition of an OS a little bit, you could have a "microkernel" that is hardware-dependent and build many of the above as abstractions in a toy OS on top of it, but you'd suffer in performance/accuracy/sophistication.
Any operating system depends on at least one piece of hardware: the CPU. There are different CPUs, each working differently and having a different "native language". Since an OS is "just a program" which needs to run on the CPU, it must be written in the CPU's native language and is thus dependent on it. You cannot run a normal Windows on an ARM or PowerPC processor, for example. It only runs on Intel-compatible CPUs.
It is possible to write an OS that can be compiled for different CPUs and run on them; most UNIXes, like Linux, FreeBSD, etc., are good examples. But they need to be compiled ("translated") for each CPU they are to run on.
Apart from the CPU, an OS also needs ways to process something, so it needs input and output, like a hard disk or ROM, a screen, and a keyboard (but not necessarily; e.g. an elevator has no need for a real keyboard and often doesn't even need a screen). And there are various ways to access each of these devices, and the OS depends on those methods (for example, bus systems like the PCI bus, or dedicated chips like a 16550 for serial ports).
If an OS had no hardware dependencies, how could it receive input from the outside world, and output the results back?
Every point where input and output occur is hardware dependent.
Every point where interrupts come into play is hardware dependent.
Every point where memory itself is managed is hardware dependent.
In other words, if you care about it, it's probably hardware dependent.
Man I like embedded systems.
Pretty much everything about an OS is hardware dependent in some way, from memory management to timers (scheduling) to networking to video to keyboard to BIOS. All of this will require hardware-dependent C code and/or assembly.
That doesn't mean you can't extract out a lot of common C code which is shared between architectures. Linux is a classic example of this. It has been ported to a vast array of hardware platforms, requiring custom code for each platform. However, there's still a large body of shared C code (e.g. filesystem drivers).
And of course, even the parts that are ANSI C only run on your hardware if your compiler can target it.

Any open-source ARM7 emulators suitable for linking with C?

I have an open-source Atari 2600 emulator (Z26), and I'd like to add support for cartridges containing an embedded ARM processor (NXP 21xx family). The idea would be to simulate the 6507 until it tries to read or write a byte of memory (which it will do every 841ns). If the 6507 performs a write, put the address and data on some of the ARM's I/O ports and let the ARM code run 20 cycles, confirm that the ARM is floating its data bus, and let the ARM run for another 38 cycles. If the 6507 performs a read, put the address on the ARM's I/O ports, let the ARM run 38 cycles, grab the data from the ARM's I/O port (hopefully the ARM software will have put it there), and let the ARM run another 20 cycles.
The ARM7 seems pretty straightforward to implement; I don't need to simulate a whole lot of hardware features. Any thoughts?
Edit
What I have in mind would be a routine that takes as parameters a struct holding the machine state and pointers to memory access routines. When called, the routine would emulate the ARM's instruction engine, generating appropriate reads, writes, and code fetches. I could then write the memory access routines to treat appropriate areas as flash (with roughly approximated wait states), RAM, I/O ports, and timer registers. Some other areas would be marked as don't-care, and accesses to any other areas would flag an error and stop the emulator.
Perhaps QEMU uses such a thing internally. Since the ARM emulation would be integrated into an already-existing emulation engine (which I didn't write and don't fully understand - the only parts of Z26 I've patched have been the memory read/write logic), I would need something with a fairly small footprint.
Any idea how QEMU works inside? Any idea what the GPL license would require if I just used 2% of the code in QEMU - whether I'd have to bundle the code for the whole thing, or just the part that I use, or what?
Try QEMU.
With some work, you can make my emulator do what you want. It was written for ARM920, and the Thumb instruction set isn't done yet. Neither is the MMU/cache interface. Also, it's slow because it is an interpreter. On the bright side, it's all written in C99.
http://code.google.com/p/gp2xemu/
I haven't worked on it for a while (the SVN trunk is 2 years old), but if you're going to use the code, I'll be glad to help you out with the missing features. It is licensed under MIT, so it's essentially the same as the broad BSD license.
