Running ARM TrustZone "Secure world"/"Normal world" example on the ZedBoard

Does anyone know how to implement the TrustZone "Secure world"/"Normal world" example from the ARM documentation website below on the ZedBoard? Any documentation on this subject (running TrustZone on the ZedBoard) would also be helpful.
http://infocenter.arm.com/help/index.jsp?topic=/com.arm.doc.faqs/ka15417.html
The ZedBoard has a Xilinx Zynq®-7000 All Programmable SoC with a dual-core ARM® Cortex™-A9 MPCore™. More information on the ZedBoard can be found here:
http://www.zedboard.org/content/overview

This is a broad topic. Hopefully some of the following information will help.
First off, just to de-jargon a little: SoC == system-on-a-chip.
Digilent, the board's manufacturer, has some support files for your board if you have access to the Xilinx toolchain. So first, if you go to http://www.digilentinc.com/Products/Detail.cfm?NavPath=2,400,1028&Prod=ZEDBOARD , at the bottom, you will find two files named "Linux Hardware Design for ISE" <version number>.
Also assuming you are using the Xilinx development tools, if you browse to Xilinx/<Version Number>/ISE_DS/EDK/hw/XilinxProcessorIPLib/pcores/axi_interconnect_v1_06_a/doc/ds768_axi_interconnect.pdf , you will find information on the AXI Interconnect your board uses. This includes the fact that it supports TrustZone and some information on actually using it.
Next, if you go to http://zedboard.org/content/creating-custom-peripheral, you will find some instructions on making a "peripheral" device. I put this in quotes because the device in fact exists completely within the programmable logic; it's not something you plug into the micro USB port, or what you'd traditionally think of as a "peripheral".
At the end of the tutorial, there is also a link that will help you read data from your peripheral.
If you repeat all those steps with the system.xmp file included in the zip you initially downloaded, you'll notice all the heavy lifting has been done for you. You already have a wired-up, ready-to-go, TrustZone-aware interconnect on the AXI bus, just waiting for you to hook a little hello-world device to it.
But what are you going to do with that hello-world device? If you look in the assembly for the tutorial you linked to, you'll see the comments talk a lot about something called the "Secure Configuration Register". If you look in your processor's documentation (in the resources section here, http://www.arm.com/products/processors/cortex-a/cortex-a9.php) and search for the term "TrustZone extensions" (currently page 34, although obviously that's subject to change), you'll find a link to another page detailing this register. This is the same register they use in the tutorial, so in theory, if you have a trusted execution environment set up, you can now make the hello-world tutorial work (mostly; you're likely going to want to do what they do in assembly with either VHDL or Verilog code and just expose the results somewhere easy to read from C).
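To make that last part concrete, here is a minimal sketch (not taken from the tutorial) of how Secure-side C code can read that register using GCC-style inline assembly. It assumes a Cortex-A9 running in a Secure privileged mode; the access is undefined from Normal world user code.

/* Minimal sketch: read the Secure Configuration Register (SCR) on a
   Cortex-A9 via CP15. Only works from Secure privileged modes. */
static inline unsigned int read_scr(void)
{
    unsigned int scr;
    __asm__ volatile ("mrc p15, 0, %0, c1, c1, 0" : "=r" (scr));
    return scr;
}

/* Bit 0 of the SCR is the NS bit: 0 = Secure world, 1 = Normal world. */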
Now, everything I have just mentioned will merely get you access to the TrustZone data on the AXI bus. In order to do anything interesting with it, you are going to have to actually create a Secure world and a Normal world to read from. Otherwise any demo you put together will merely print "Hello from Secure World" (or function incorrectly). So this is where unzipping the tutorial you linked to and really reading its source will pay dividends.
My answer up to this point is still incomplete, though, because the Hello World tutorial you linked isn't designed to teach you how to create the Normal world (and possibly the Monitor) in the first place, which it says explicitly in its ReadMe.txt. So reading the source won't help you with that. For that, you're going to need this link: http://infocenter.arm.com/help/index.jsp?topic=/com.arm.doc.prd29-genc-009492c/index.html . There's a lot of info there, but it's intended as a reference, and the first two chapters are, in my humble opinion, just what I like to call "skippable flavor text" (although if you do have time to waste, some of it is fascinating and informative as far as security theory in general). Chapter 3 will begin to teach you how to develop for TrustZone.
But hopefully the information I've provided will turn this from a permissions problem into more of an education problem. I'm still educating myself on this stuff.

Related

CAN-bus driver for HCS12

Where can I find the following missing functions here in this HCS12 CAN driver?
CANFifo_Init()
CANFifo_Get(msg)
CANFifo_Put(msgPtr)
The physical book Embedded Microcomputer Systems: Real Time Interfacing included a bound-in CD-ROM; I imagine the code was on that. Without access to the full text of the book or the CD-ROM content it is difficult to tell. Even on the author's own website, the link to the teaching materials is broken.
If you are doing this course, the course provider should provide all necessary materials. If you are just wanting to write production CAN bus code for a specific part, I would not start with academic course material.

Run executable on MINI2440 with NO OS

I have Fedora installed on my PC and I have a FriendlyARM Mini2440 board. I have successfully installed a Linux kernel and everything is working. Now I have an image processing program which I want to run on the board without an OS. The only process running on the board should be my program. From within that program, how can I access the on-board camera to capture images, and the serial port to send output to the PC?
You're talking about what is often called a bare-metal environment. Google can help you, for example here. In a bare-metal environment you have to have a good understanding of your hardware because you have to take care of a lot of things that the OS normally handles.
I've been working (off and on) on bare-metal support for my ELLCC cross development tool-chain. I have the ARM implementation pretty far along but there is still quite a bit of work to do. I have written about some of my experiences on my blog.
First off, you have to get your program started. You'll need to write some start-up code, usually in assembly, to handle the initialization of the processor as it comes out of reset (or is powered on). The start-up code then typically passes control to code written in C that ultimately directly or indirectly calls your main() function. Getting to main() is a huge step in your bare-metal adventure!
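As a rough illustration (not specific to the Mini2440), the C side of that start-up path usually looks something like the sketch below. The symbol names follow common linker-script conventions and are assumptions; yours will differ.

/* Hypothetical C-side start-up: the reset handler (assembly) jumps here
   after setting up a stack. Symbol names come from the linker script. */
extern unsigned int _data_load, _data_start, _data_end;
extern unsigned int _bss_start, _bss_end;
extern int main(void);

void _startup(void)
{
    unsigned int *src = &_data_load;
    unsigned int *dst = &_data_start;

    while (dst < &_data_end)      /* copy initialised data to RAM */
        *dst++ = *src++;

    for (dst = &_bss_start; dst < &_bss_end; )
        *dst++ = 0;               /* zero .bss                    */

    main();                       /* hand control to the program  */
    for (;;)                      /* main() must not return here  */
        ;
}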
Next, you need to decide how to support your hardware's I/O devices which in your case include the camera and serial port. How much of the standard C (or C++) library does your image processing require? You might need to add some support for functions like printf() or malloc() that normally need some kind of OS support. A simple "hello world" would be a good thing to try next.
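For example, retargeting character output is often the first bit of library support you write. Here is a hedged sketch for the Mini2440's S3C2440 UART0; the register offsets are from the S3C2440 user's manual (verify them against your own copy), and it assumes the bootloader has already configured the baud rate.

/* Polled transmit on S3C2440 UART0. UTRSTAT0 bit 1 = TX buffer empty. */
#define UART0_BASE  0x50000000u
#define UTRSTAT0    (*(volatile unsigned int  *)(UART0_BASE + 0x10))
#define UTXH0       (*(volatile unsigned char *)(UART0_BASE + 0x20))

void uart0_putc(char c)
{
    while (!(UTRSTAT0 & 0x2))   /* wait for room in the TX buffer */
        ;
    UTXH0 = (unsigned char)c;
}

/* printf() can then be layered on top by pointing your C library's
   low-level output stub at uart0_putc(). */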
ELLCC has examples of various levels of ARM bare-metal in the examples directory. They range from a simple main() up to and including MMU and TCP/IP support. The source for all of it can be browsed here.
I started writing this before I left for work this morning and didn't have time to finish. Both dwelch and Clifford had good suggestions. A bootloader might make your job a lot simpler and documentation on your hardware is crucial.
First you must realise that without an OS, you are responsible for bringing the board up from reset, including configuring the PLL and SDRAM, and also for the driver code for every device on the board you wish to use. To do that requires adequate documentation of the board and its devices.
It is possible that you can use the existing bootloader to configure the core and SDRAM, but that may not meet your requirement that the only process running on the board be your image processing program.
Additionally, you will need some means of loading and bootstrapping your code; again, the existing Linux bootstrapper may suit.
It is by no means straightforward and cannot really be described in detail here.

How to use I2C on a TM4C123GXL (TivaC) Launchpad

I am attempting to connect my Launchpad device to the Pololu MinIMU9v2 9DoF sensor via the I2C bus. I am working in a Linux environment, compiling with arm-none-eabi-gcc, and I have downloaded the sw-ek-tm4c123gxl zip file from the Texas Instruments website.
In digging through the drivers download, I identified a folder (examples/peripherals/i2c) that contains 3 demonstrations (each in a single C file) for using the I2C bus. One runs the Launchpad as a slave, the next configures it as a loopback, and finally, there is one that interfaces the Launchpad with an Atmel I2C-based memory device using what it refers to as "soft I2C".
I'm assuming that the "soft" part of this means that it's software based, utilizing interrupts and all. I'm looking for a simpler solution, preferably without interrupts. The loopback example worked like a charm, for instance, but in modifying it, I can't seem to get it to communicate with the MinIMU9, no matter what I try. The documentation for the MinIMU9 is pretty clear, but I think I'm just lacking an understanding on how to use this driver software.
Finally, I don't want to reinvent the wheel, but I can't seem to find anyone else talking about I2C and the Stellaris or TivaC Launchpads. Am I way off the mark in trying to implement this in this way? If not, is there an easier way to go about this? And, if not, where can I learn more about whatever it is that I'm missing?
I was able to figure it out after all. One thing that I hadn't noticed in the first place is that Texas Instruments provides a PDF peripherals resource that discusses the usage of their driver library. Unfortunately, this documentation is far from comprehensive, and probably would have still required someone to bury themselves in the code were it not for the examples.
Now, the peripheral examples required a bit of work to get going. To save some time and effort, I tried copying master_slave_loopback.c over the top of examples/project/project.c, modified it according to the comments within the file, and then I was able to compile the example and run it with immediate success.
Next, I attempted to convert the new project.c file into something that would allow me to communicate with the MinIMU9v2. Everything seemed straightforward for the most part. I commented out anything that looked like it was related to loopback functionality, but when I would try to execute the program, it would hang on the following line:
while(I2CMasterBusy(I2C0_BASE)) { }
Overwhelmed with what I might have to do to begin troubleshooting this, I decided to post this question. Fortunately, the problem was a lot simpler and surprising to solve than I had suspected. A quick search revealed this page: http://e2e.ti.com/support/microcontrollers/tiva_arm/f/908/t/316580.aspx
I changed:
GPIOPinTypeI2C(GPIO_PORTB_BASE, GPIO_PIN_2 | GPIO_PIN_3);
To:
GPIOPinTypeI2C(GPIO_PORTB_BASE, GPIO_PIN_3);
And this solved my problem.
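For anyone landing here with the same problem, the sketch below shows the shape of a polled single-register read using TivaWare's driverlib, incorporating that pin fix (only PB3/SDA gets the open-drain I2C type; PB2/SCL uses the dedicated SCL type). SLAVE_ADDR is a placeholder - take the real 7-bit address and register map from the Pololu MinIMU9v2 documentation - and it assumes the same project setup (part defines, includes) as the TI examples.

#include <stdint.h>
#include <stdbool.h>
#include "inc/hw_memmap.h"
#include "driverlib/sysctl.h"
#include "driverlib/gpio.h"
#include "driverlib/pin_map.h"
#include "driverlib/i2c.h"

#define SLAVE_ADDR 0x6B   /* placeholder: 7-bit address from the Pololu docs */

void i2c0_init(void)
{
    SysCtlPeripheralEnable(SYSCTL_PERIPH_I2C0);
    SysCtlPeripheralEnable(SYSCTL_PERIPH_GPIOB);
    GPIOPinConfigure(GPIO_PB2_I2C0SCL);
    GPIOPinConfigure(GPIO_PB3_I2C0SDA);
    GPIOPinTypeI2CSCL(GPIO_PORTB_BASE, GPIO_PIN_2);  /* SCL: push-pull  */
    GPIOPinTypeI2C(GPIO_PORTB_BASE, GPIO_PIN_3);     /* SDA: open-drain */
    I2CMasterInitExpClk(I2C0_BASE, SysCtlClockGet(), false); /* 100 kHz */
}

uint8_t i2c0_read_reg(uint8_t reg)
{
    /* Send the register address, holding the bus for a repeated start. */
    I2CMasterSlaveAddrSet(I2C0_BASE, SLAVE_ADDR, false);     /* write */
    I2CMasterDataPut(I2C0_BASE, reg);
    I2CMasterControl(I2C0_BASE, I2C_MASTER_CMD_BURST_SEND_START);
    while (I2CMasterBusy(I2C0_BASE)) { }

    /* Repeated start in receive mode, read a single byte. */
    I2CMasterSlaveAddrSet(I2C0_BASE, SLAVE_ADDR, true);      /* read  */
    I2CMasterControl(I2C0_BASE, I2C_MASTER_CMD_SINGLE_RECEIVE);
    while (I2CMasterBusy(I2C0_BASE)) { }

    return (uint8_t)I2CMasterDataGet(I2C0_BASE);
}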

Custom Kernel: Implement filesystem

As an out-of-course project, I am currently developing a kernel in an attempt to better understand all the aspects of an actual OS. So far, I am done setting up a flat physical memory model with support for paging and the basic interrupts (keyboard, and perhaps trackpad/mouse next). I thought the step forward would be to implement a filesystem, and I am keen on ext2. I have looked around, even on SO, but there isn't anything explicit that answers my questions:
Is it possible to write a driver to access an ext2 filesystem in C or do I need to go lower?
If I plan to access the filesystem off a USB device, I am assuming I will need to get the device driver for USB running first. Any help on this would be greatly appreciated.
I know the code for detecting a filesystem is already available in MINIX and other kernels, but what I really want to know is: if I want to build a custom, albeit simple, filesystem, how do I go about it? I am considering this possibility too.
My apologies if the question and details sound a little ignorant but I am still in the learning process.
Thanks :)
I'll try to give you a few tips/hints - a clear answer isn't that simple:
An ext2 filesystem driver written in C is just C. C is just a programming language - you can use C++, plain assembly, or a few others (a few OS developers use D) - but not a "managed" language, etc. What is important is that you have a rock-solid understanding of your chosen language. In my opinion, assembly is a MUST (take a look at the scheduler in an operating system -> plain assembly).
Do you really want to write a USB driver? It isn't "just" a simple USB driver (layers of abstractions). Why a USB driver and not a floppy disk or CD driver (believe me - a floppy driver in 32-bit protected mode isn't that hard)?
Please focus on your project. Of course Linux (early versions) and Minix have example code, but take care with the design structure (monolithic, microkernel, or hybrid kernel) - don't mix them; write your own code.
Please take one step at a time. You wrote basic IRQ handling and the plan is to write a keyboard/mouse driver - so write the keyboard driver! Don't dream yet about loading and executing files (ROM wasn't built in a day).
You have to read documentation, for example the Intel manuals or other "books". A very popular forum is osdev.org - take a look at the wiki. As twalberg said, it's a very huge undertaking - stay focused on the main parts of your operating system.
I know, this is not the answer to your question - but it's important not to go in the wrong direction and dream of a fancy UI or something like this ;)
osdev.org forum
osdev.org wiki
Intel manuals
And a few other books from my bookshelf can be found here (Tanenbaum, and Silberschatz with Peter Galvin - great books!):
Books
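To give the ext2 question a concrete starting point: the very first step of an ext2 driver is reading the superblock, which always sits at byte offset 1024 on the volume, and checking its magic number. The sketch below is in C; read_sectors() is a hypothetical stand-in for whatever block-device driver (floppy, ATA, USB mass storage) your kernel ends up with, and you should verify the field layout against the ext2 documentation on the osdev.org wiki.

#include <stdint.h>

/* Prefix of the on-disk ext2 superblock; only the fields up to s_magic
   are shown here. Full layout: see the ext2 docs on the osdev.org wiki. */
struct ext2_superblock {
    uint32_t s_inodes_count;      /* total inodes                      */
    uint32_t s_blocks_count;      /* total blocks                      */
    uint32_t s_r_blocks_count;    /* blocks reserved for the superuser */
    uint32_t s_free_blocks_count;
    uint32_t s_free_inodes_count;
    uint32_t s_first_data_block;
    uint32_t s_log_block_size;    /* block size = 1024 << this value   */
    uint8_t  _skipped[28];        /* fields not needed for this probe  */
    uint16_t s_magic;             /* must be 0xEF53                    */
} __attribute__((packed));

/* Hypothetical block-device hook your kernel must provide. */
extern void read_sectors(uint32_t lba, uint32_t count, void *buf);

int ext2_probe(void)
{
    uint8_t buf[1024];
    const struct ext2_superblock *sb;

    /* The superblock starts 1024 bytes in: 512-byte sectors 2 and 3. */
    read_sectors(2, 2, buf);
    sb = (const struct ext2_superblock *)buf;
    return sb->s_magic == 0xEF53;
}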

How to create real-life robots?

Even before I learnt programming I'd been fascinated with how robots could work. Now I know how the underlying programming instructions would be written, but what I don't understand is how those instructions are followed by the robot.
For example, if I wrote this code:
object = Robot.ScanSurroundings(300, 400);
if (Objects.isEatable(object))
{
    Robot.moveLeftArm(300, 400);
    Robot.pickObject(object);
}
How would this program be followed by the CPU in a way that would make the robot do the physical action of looking to the left, moving his arm, and such? Is it done primarily in binary language/ASM?
Lastly, where would I go if I wanted to learn how to create a robot?
In the end, something has to break down the high level commands into very low level commands. Something has to translate "Pick up the cup" to how to move the arm (what angles the joints should be at) to the hardware commands which actually turn the motors.
There are frameworks which try to provide some amount of this translation, including (but not limited to):
Player/Stage
Microsoft Robotics Studio
Carmen
CLARAty
Lego Mindstorms
However, since robotics research is interested in every layer of the system, there aren't many systems which provide the entire translation stack. If you're looking into getting into robotics, there are several systems which attempt to make this easier (again, a random sample):
Lego Mindstorms
TeRK
VEX Robotics
Failing that, sites such as Make even provide guides to building robot projects to start from. The challenge is to find a project which you are excited about, and go to town!
You should check out Microsoft Robotics Studio (MRS). They have many videos/screencasts and written tutorials. Additionally, Channel9 has many videos, interviews, etc., on the robotics subject, including demonstrations and interviews with developers of MRS.
In most modern robots you would have an Inverse Kinematic model of the mechanism, in this case the arm, that converts the spatial coordinates into positions for the joints of the arm. These joints are usually moved by servo motors. To smoothly move the arm, you need a series of intermediate joint positions defining the path you want the arm to follow. You also have to worry about the velocities of the joints, which together control the speed of the "hand" at the end of the arm.
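As a toy illustration of that coordinate-to-joint-angle conversion, here is the classic closed-form inverse kinematics for a 2-link planar arm, in C. Real arms have more joints and constraints; the link lengths and the "elbow-down" choice here are assumptions for the example.

#include <math.h>

/* Given a target (x, y) for the hand of a 2-link planar arm with link
   lengths l1 and l2, compute shoulder and elbow angles in radians.
   Returns -1 if the target is out of reach. */
int ik_2link(double x, double y, double l1, double l2,
             double *shoulder, double *elbow)
{
    double c = (x * x + y * y - l1 * l1 - l2 * l2) / (2.0 * l1 * l2);

    if (c < -1.0 || c > 1.0)
        return -1;                       /* unreachable target */

    *elbow = acos(c);                    /* "elbow-down" solution */
    *shoulder = atan2(y, x)
              - atan2(l2 * sin(*elbow), l1 + l2 * cos(*elbow));
    return 0;
}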
While the arm is moving your servo system will be getting feedback about its actual position. Simple servo systems may use a basic PID feedback loop to adjust the motors. More complex systems will include feed-forward parameters which compensate for inertia, gravity, friction, and so on. These can become very sophisticated.
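The basic PID loop mentioned above fits in a dozen lines; a minimal sketch follows. The gains kp/ki/kd are assumptions you would tune per joint, and a real servo loop would add things like integral clamping and output limits.

/* One PID update per control tick: compare the commanded joint
   position with the measured one and produce a motor command. */
struct pid {
    double kp, ki, kd;
    double integral, prev_error;
};

double pid_step(struct pid *p, double setpoint, double measured, double dt)
{
    double error = setpoint - measured;
    double derivative = (error - p->prev_error) / dt;

    p->integral += error * dt;
    p->prev_error = error;

    return p->kp * error + p->ki * p->integral + p->kd * derivative;
}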
The real fun starts when you have to allow for obstacles in the space around the robot. You have to sense the obstacle and figure out how to avoid it and still reach the destination.
I just have to add something about Arduino projects to this because I don't see it mentioned above.
There is a very low bar for entry into Arduino-based robotics projects. The "sketch" programs that you write for the hardware are very easy to pick up and similar to C syntax. If you don't know your transistors from your resistors, these boards still allow you to do a lot with plug-in hardware and additional "shields" that extend the base computer board.
It's very fun, very flexible, and something to get your code interacting with the real world. Plus it's "open hardware", very much along the lines of open-source software.
Robots work by interacting with hardware. The bridge from your code is often made through different types of I/O ports. It could simply be an RS-232 cable, for example (you know, those old COM1 ports). The hardware side will be composed of motors (such as servo motors) and sensors (such as ultrasound to detect obstacles, lasers to measure distance, or switches).
You don't need to use assembler to do that; there are lots of languages (if not most) that can do it, but it requires knowledge of how to interact with hardware, like writing a driver. It also requires at least basic electronics if you want to build the robot yourself.
If you're interested, I suggest you have a look at this book which is a good primer.
Also, you could try out programming a Basic stamp, it's pretty easy following the tutorials and it will give you a good start on how to build robots. It's not too expensive and you'll be interacting with hardware in no time.
Good luck and have fun!
If you get good enough at programming, you may discover that you don't actually need a robot to test much of the hardest code you'll need to write. (For example, making a robot see and recognize a scene always fascinated me, but at some point I realized that the physical robot required for this problem is the easy part... the software is the hard part!)
It's probably easier to use a higher-level language to describe the robot's behaviors and intelligence and leave the low-level actions (move arm, walk, stop) to a lower-level language. There is a lot of research on what is called the BDI architecture for intelligent agents; google for it.
You can find out more at this site; it's a DSL for describing agent behavior, made in Java. It's called the Jason interpreter, and the language is AgentSpeak(L).
Find a local FIRST robotics team and volunteer to be a mentor. FIRST is a robotics competition for middle and high school kids. The goal is that the kids do all of the work to build, program, test, and run the robot, but you still will have lots of opportunities to dig in and really learn the software. They are using LabView by National Instruments, and, as of Feb 8, have just begun regional competition for this year. LabView is a graphical programming environment that interfaces with NI hardware to let you program motors, actuators, and sensors. The NI stuff is pretty slick and is pretty easy to use, plus it's provided free to each team, so you don't have to buy the hardware and software yourself (at least to get started.) Plus, you get the added bonus of helping a new generation of engineers get their start.
You would have to have a driver that interfaced with the hardware (most likely a STAMP or FPGA with motors etc...). You would then call the function me.moveLeftArm(x,y); and the driver would know that moveLeftArm() means to move an actuator for X seconds/milliseconds/degrees.
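A toy version of that driver idea, in C: the high-level call boils down to a pulse width sent to a servo channel. pwm_set_pulse_width_us() is a hypothetical HAL function standing in for whatever your STAMP/FPGA/servo controller actually exposes, and the 1000-2000 microsecond range is just the typical hobby-servo convention.

/* Hypothetical HAL hook provided by the servo controller board. */
extern void pwm_set_pulse_width_us(int channel, int microseconds);

#define LEFT_ARM_CHANNEL 2   /* assumption: arm servo wired to channel 2 */

/* Typical hobby servos map ~1000-2000 us pulses to 0-180 degrees. */
void move_left_arm(int degrees)
{
    if (degrees < 0)   degrees = 0;
    if (degrees > 180) degrees = 180;
    pwm_set_pulse_width_us(LEFT_ARM_CHANNEL,
                           1000 + (degrees * 1000) / 180);
}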
I'm sure that you could find a kit that does robot programming.
If you want a Java alternative, I can recommend the book Linux Robotics. It has a lot of good information about where to get kits, parts, and sensors, as well as complete source code listings in Java.
I share the same itch. I'm about to buy my first BeagleBoard and some sensors/servos that can use the I2C bus. I'm going to be using an event-driven design and a crude implementation of fibers (fibrils, if you will), which are userspace threads.
Basically, my design calls for one process, which launches one thread per group of servos. Each group manager thread will launch some number of fibrils, likely 2 per servo. One fibril is used to control the servo; the other handles events from that servo (e.g., an object is too heavy to pick up, an object was dropped, etc.).
The main process has the task of listening for events from everything else and making sure the 'right hand knows what the left hand is doing' while moving forward and negotiating obstacles.
It's going to take me the better part of two years to get something working to the point that I'm proud of it, but I anticipate many enjoyable evenings getting it there.
I will very likely be using a Microkernel, not Linux.
I'm doing this as much to sharpen myself with event-driven methods as to satisfy my desire to make my own R2 :)
Start with Phidgets if you are familiar with .NET. You can check out TrossenRobotics.com for parts.
The Phidgets interface kit is a good place to start. From there you can get a servo controller and start building things that move.
The Trossen forums are also a good place to review other people's projects. They have a new Data Center with code/project samples too. I don't work for them - just a happy customer.
Lots of good answers here. Your piece of fantasy code is not far from how you'd do it in a higher-level language such as C# over MS Robotics Studio. Just keep in mind that even simple things (like "move arm left") are very loaded with "information bias".
Down at the metal, a robotic arm is a set of links and [possibly] motorized joints. Therefore "move arm left" (or moving to any point in a coordinate system) is already a very complex task to compute (look up the Denavit-Hartenberg table and forward and inverse kinematics for manipulators).
There's also the fact that "move arm left" assumes there's nothing in that space and a collision won't occur. If the environment is unconstrained, then you need to implement a collision detection system, often based on some sort of sensor (camera) and machine vision algorithms.
In summary, the language and the hardware interfacing are often trivial compared to modeling the system to achieve the desired behavior.
Regarding the last question, "how to create a robot": I find it helpful to start by looking for a related project in online communities like Adafruit, Hackster.io, or even Glitch, or for blog posts by someone who has built a robot from scratch (e.g., https://burningservos.com), or for a product that provides documentation and tutorials for both hardware and software (e.g., http://emanual.robotis.com/docs/en/platform/openmanipulator_x/overview/).