How to change boot priority/settings on an Android phone (if possible) - ARM

Does anybody know if an Android phone's firmware has the option to boot from media other than the SoC's built-in storage? I assembled compatible ARM machine code and placed it on an SD card, and would like to know if it is possible to boot my small program this way.

Booting from a different medium in fact has nothing to do with Android, but rather with the primary and possibly secondary boot-loaders. Google doesn't mandate any particular booting arrangement, and it's very much up to the phone vendor.
The TI OMAP family of SoCs has a ROM-based first-stage boot-loader that can bring up the media card interface and boot from a FAT-formatted media card - although the search order and availability of boot devices are controlled by pull-up resistors, and your phone vendor might have disabled it. It is likely that other families of SoCs have similar arrangements.
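To give an idea of what such a ROM expects, on OMAP3 the image read from the FAT card (the file traditionally named MLO) starts with a simple two-word header telling the ROM how big the image is and where to copy it. A minimal sketch of that layout, with field names of my own choosing:

    #include <stdint.h>

    /* OMAP3 "GP header": two little-endian 32-bit words prepended to the
     * boot image the ROM loads from a FAT card (the MLO file). Field names
     * are illustrative; the layout is described in the OMAP35x TRM. */
    struct omap_gp_header {
        uint32_t size;      /* size of the image in bytes (header excluded)  */
        uint32_t load_addr; /* internal-RAM address the ROM copies the image
                               to, and the entry point it then jumps to      */
    };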
Even once you've got your code to boot, there is the question of what to use for IO. Or did you intend to boot into a replacement kernel?

Related

How to write data to a graphics card without using BIOS?

I want to make an (extremely simple) operating system. I am currently learning about graphics cards.
This is what I know so far (please correct me if I am wrong):
A graphics card has two modes: a text mode, and a graphics mode.
You can write data to a graphics card using the BIOS (instead of accessing the graphics card directly).
What I want to do is to write directly to the graphics card's video memory without using BIOS (because I want to understand how things work). So I have the following questions:
How do I know the base address of the graphics card's video memory? Is this done by probing the PCI bus to get the base address, or is the base address fixed (just as the COM ports' base addresses are fixed, for example)?
Are all graphics cards accessed in the same way, or do I have to create device drivers for all available graphics cards?
Edit: I am using x86.
Introduction
Graphics cards are a very complex topic; I'm confident in saying that they are the most complex subsystem you'll find in a PC.
If you ever found yourself lost programming an xHCI (USB 3.0) controller or an old RTL8139 network interface card, then be prepared, because this is much more complex.
Graphics controllers are the product of a very competitive market - rarely does a vendor open its specifications, and when one does, the support given is intentionally poor.
If you add that the hardware itself deals with codecs, audio (yes, audio streams too), 3D programmable pipelines, video signals and video outputs, surface formats, media formats, DMA and memory remapping, then you can see that it is not an easy task to program a video card.
The better approach, in my opinion, is to "retrace the history" of the video cards.
Start from the MDA, then move to the CGA, then the EGA and finally the VGA.
The VGA legacy is still supported; the specifications can be found online, for example in the VGA registers part of Intel's graphics Programmer's Reference Manuals.
You can program the VGA without the BIOS "easily" - meaning that it is an already well-known and documented hardware architecture (but not necessarily easy to configure).
I don't remember whether the earlier adapters were subsets of the VGA or not; if not, they probably aren't supported anymore.
You can try with a virtual machine or an emulator.
When you are satisfied with the VGA you can move to the SVGA.
Here the troubles begin: as Wikipedia confirms, the VGA was the last truly standardised video card/adapter interface:
Unlike VGA—a purely IBM-defined standard—Super VGA was never formally defined.
The VESA organisation standardised a BIOS API called the Video BIOS Extensions (VBE) so that OSes without native drivers could use SVGA cards, but that's not what you were looking for.
You can try reverse engineering a VBE BIOS but I think it will be a nightmare - a senseless stream of writes to IO ports and MMIOs.
Making sense of tens of configuration registers without any reference is almost impossible.
Note that we are still talking about 1998 technology up to this point.
After the VESA VBE effort, no more standard interfaces have been published - the only reliable way to program a video card less than 20 years old is by signing an NDA with its vendor.
Luckily, Intel entered the market relatively recently (actually, not so recently anymore) with its Intel GFX (a.k.a. Intel HD Graphics) cards.
Intel never aimed to manufacture top-of-the-line video cards, not even close - so it can be open about its architecture, since that's not its core business.
The result is this marvellous set of Programming Reference Manuals that describe the functionality of their video cards.
Complete with the (traditionally) minimal information needed to program them.
In general, hobbyists stop before this point (at the SVGA checkpoint), because the hardware has become very complex and the effort required enormous.
For example, my Haswell integrated video card is documented with 17 PDFs of about 250 pages each (on average).
The display part is documented in a PDF of its own; the frame buffer has disappeared in favour of the display surface, and the display engine alone spans a diagram of dozens of interconnected blocks in the manual.
While that diagram may not be very comprehensible, it should suffice to give an idea of the number of technologies a programmer must understand before programming a modern video card.
You can surely take a look at the Linux source code, but beware that the Linux kernel is not usually easy to follow even for simple controllers - it is not a toy OS, it is a real OS with its own API and interfaces that must fit the hardware interface (actually the other way around).
Furthermore, only the Intel and AMD video drivers are really open source, the others are either proprietary or just a bunch of undocumented code.
Brief outline of programming the common VGA modes
If you just want to program the VGA (a very respectable task indeed!) you can start by setting the video modes 03h (text mode) or 13h (graphics mode).
Video mode 03h
The frame buffer is at 0b8000h (physical address), usually accessed as 0b800h:0000h as it is handy to have a zero offset.
The screen is made up of 80x25 characters; each character occupies a word (16 bits) in the frame buffer.
The low byte is the character code - the character map used will associate a glyph to a code (e.g. 41h to A).
The high order byte is the attribute byte - the low nibble is the foreground colour, the high nibble is the background colour.
More information can be found in the EGA/CGA/VGA links above.
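As a concrete example, here is a minimal sketch of writing one character in mode 03h; it assumes an environment where physical address b8000h is accessible directly (real mode with a flat mapping, or an identity-mapped kernel):

    #include <stdint.h>

    /* VGA text mode 03h: 80x25 cells, one 16-bit word per cell,
     * starting at physical address b8000h (assumed mapped 1:1 here). */
    static volatile uint16_t *const VGA_TEXT = (volatile uint16_t *)0xB8000;

    static void putc_at(int row, int col, char c, uint8_t fg, uint8_t bg)
    {
        /* low byte: character code; high byte: attribute (bg nibble, fg nibble) */
        VGA_TEXT[row * 80 + col] =
            (uint16_t)(uint8_t)c | ((uint16_t)((bg << 4) | fg) << 8);
    }

    /* e.g. putc_at(0, 0, 'A', 0x0F, 0x01);  -- bright white 'A' on blue */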
Video mode 13h
It is a graphical mode with 320x200 pixels; the frame buffer is at 0a0000h (physical address), usually accessed as 0a000h:0000h for the same reason as above.
Each pixel is a single byte, the value of the byte selects the colour of the pixel.
The default palette can be changed by programming the DAC registers (3c7h, 3c8h, 3c9h for the VGA adapter).
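A matching sketch for mode 13h, plotting one pixel after reprogramming a palette entry; it assumes the same direct access to physical memory, and outb is the usual one-line inline-assembly helper:

    #include <stdint.h>

    static inline void outb(uint16_t port, uint8_t val)
    {
        __asm__ volatile ("outb %0, %1" : : "a"(val), "Nd"(port));
    }

    static volatile uint8_t *const VGA_FB = (volatile uint8_t *)0xA0000;

    /* Redefine palette entry 1, then plot a pixel using it. */
    static void demo(void)
    {
        outb(0x3C8, 1);    /* select DAC entry 1 for writing          */
        outb(0x3C9, 63);   /* red (DAC components are 6-bit, 0 to 63) */
        outb(0x3C9, 0);    /* green                                   */
        outb(0x3C9, 0);    /* blue                                    */

        VGA_FB[100 * 320 + 160] = 1;  /* pixel at (160, 100), colour 1 */
    }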
Answers
A graphics card has two modes: a text mode, and a graphics mode.
Not necessarily; today this distinction may not exist anymore.
The MDA had only a text mode.
The CGA, EGA, VGA and SVGA had both.
The modern approach is to draw the text; however, during boot or in particular situations (e.g. a BSOD) a basic video driver working in text mode is used.
This driver probably uses a BIOS service, since the full video driver may not be available/reliable.
You can write data to a graphics card using the BIOS
You could, up to the SVGA era - after that, BIOS support was discontinued.
How do I know the base address of the graphics card's video memory? Is this done by probing the PCI bus to get the base address, or is the base address fixed (just as the COM ports' base addresses are fixed, for example)?
Video cards have, through history, been connected to the ISA, PCI, AGP and PCIe buses.
Only the ISA bus wasn't configurable (at least not in the beginning); the others have configurable BARs (Base Address Registers) per function (the smallest addressable entity on the PCI bus).
In order to get the base address of the MMIO registers of a video card the PCI or PCIe bus must be enumerated and the standard registers in the configuration space must be read/set.
Dealing with PCIe is not as easy as dealing with PCI.
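For the PCI case, here is a hedged sketch using the legacy configuration mechanism (ports 0cf8h/0cfch) to find the first display-class function on bus 0 and read its raw BAR0; a real enumerator would walk all buses and functions and decode the BAR type, but this shows the idea:

    #include <stdint.h>

    static inline void outl(uint16_t port, uint32_t val)
    {
        __asm__ volatile ("outl %0, %1" : : "a"(val), "Nd"(port));
    }

    static inline uint32_t inl(uint16_t port)
    {
        uint32_t val;
        __asm__ volatile ("inl %1, %0" : "=a"(val) : "Nd"(port));
        return val;
    }

    static uint32_t pci_cfg_read(uint8_t bus, uint8_t dev, uint8_t fn, uint8_t off)
    {
        outl(0xCF8, 0x80000000u | ((uint32_t)bus << 16) | ((uint32_t)dev << 11)
                  | ((uint32_t)fn << 8) | (off & 0xFC));
        return inl(0xCFC);
    }

    /* Scan bus 0 for the first display-class function and return its BAR0. */
    static uint32_t find_vga_bar0(void)
    {
        for (uint8_t dev = 0; dev < 32; dev++) {
            if (pci_cfg_read(0, dev, 0, 0x00) == 0xFFFFFFFFu)
                continue;                             /* no device here       */
            uint8_t class_code = pci_cfg_read(0, dev, 0, 0x08) >> 24;
            if (class_code == 0x03)                   /* display controller   */
                return pci_cfg_read(0, dev, 0, 0x10); /* BAR0 (raw, untyped)  */
        }
        return 0;
    }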
Note that not even the UARTs have a fixed address: they are configured by default to map to the legacy addresses (3f8h, 2f8h, 3e8h and 2e8h), but the hardware was (is?) in a SuperIO chip behind a PCI-to-LPC bridge that emulates a PCI-to-ISA bridge.
With the advent of Intel's Platform Controller Hub architecture (i.e. the death of the separate north and south bridges), the SuperIO chip eventually made it into the PCH or moved behind the SPI controller.
Are all graphics cards accessed in the same way, or do I have to create device drivers for all available graphics cards?
Each graphics card is a beautiful, vicious creature of its own.
A device driver is needed for each model.
Some drivers can be reused for a whole family of models, but this is not true in general.

Autoconfiguration on programmable XBee modules

Non-programmable XBee modules have to be configured through a PC (with XCTU) or another device like an Arduino... but can the programmable XBee modules (like the XBee-PRO ZB S2B) configure themselves, without being connected to another device like a PC or an Arduino, by running code stored in their own memory?
I mean, can they run commands like the ones you issue through XCTU, but programmed into their internal memory? Like scan the energy on every channel, select a channel, set a PAN ID, configure the different parameters of the device...
Thank you
Yes, the development kit includes an API for sending AT commands from the co-processor to the radio on those boards.
There's also a passthrough mode that relays the host computer serial port through to the radio processor, which can help with initial setup/configuration of the modules like you might do during manufacturing.
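For illustration, here is a hedged sketch of building such an AT command frame (API frame type 08h) to set the PAN ID. The frame layout comes from Digi's XBee API documentation, while uart_write() is a hypothetical helper standing in for whatever your toolchain provides for talking to the radio over the internal serial link:

    #include <stdint.h>
    #include <stddef.h>

    void uart_write(const uint8_t *buf, size_t len);  /* hypothetical helper */

    /* Build and send an XBee API frame carrying an AT command (type 0x08). */
    static void xbee_at_command(const char cmd[2], const uint8_t *param, uint8_t plen)
    {
        uint8_t frame[64];                /* enough for short AT parameters  */
        uint16_t len = 4 + plen;          /* type + id + 2 cmd chars + param */

        frame[0] = 0x7E;                  /* start delimiter                 */
        frame[1] = len >> 8;              /* length MSB                      */
        frame[2] = len & 0xFF;            /* length LSB                      */
        frame[3] = 0x08;                  /* frame type: AT command          */
        frame[4] = 0x01;                  /* frame ID (non-zero: want reply) */
        frame[5] = cmd[0];
        frame[6] = cmd[1];
        for (uint8_t i = 0; i < plen; i++)
            frame[7 + i] = param[i];

        uint8_t sum = 0;                  /* checksum: 0xFF minus the sum of */
        for (uint16_t i = 3; i < 3u + len; i++)   /* bytes after the length  */
            sum += frame[i];
        frame[3 + len] = 0xFF - sum;

        uart_write(frame, 4u + len);
    }

    /* e.g. set the PAN ID to 0x1234:
     *   uint8_t id[] = { 0x12, 0x34 };
     *   xbee_at_command("ID", id, sizeof id);                               */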
To answer your question:
I mean, can they run commands like the ones you issue through XCTU, but programmed into their internal memory?
No. You cannot program a sequence of commands into the internal memory of the device. To do anything meaningful, the device needs to be "driven" by a host PC or MCU that can send the AT commands.
If you want a one-device solution that does not require a host MCU, then you will need to use a ZigBee SoC (System on Chip), such as the CC2538 - http://www.ti.com/product/cc2538 - running a ZigBee SDK (Software Development Kit) - http://www.ti.com/tool/z-stack (ZStack-Home). However, this will require you to develop the ZigBee application software yourself.

Linux - How to upload code to a dedicated Freescale NIC chip on my motherboard?

I have bought a Gigabyte G1.Guerrilla motherboard, and its NIC is a dedicated Freescale chip on the motherboard. It is connected to the PCI bus.
I am running Linux and unfortunately there is no driver for it. I am working on writing one; however, I am hitting a basic problem: how do I communicate with the chip and upload code to its dedicated CPU and RAM?
Much help appreciated.
I am running Ubuntu, and the chip is an MPC8308VMAGD PowerQUICC II Pro.
I don't know anything about your specific motherboard or the processor, but are you totally sure you need to upload any code to the processor?
Usually, if a peripheral needs any code (firmware), it's already present in a ROM or flash chip, and you only need to touch it if you specifically want to write your own firmware for it. AFAIK the way it usually works is that the peripheral exposes a set of registers on the PCI bus and you interact with it by poking those registers (usually with MMIO). That is, you don't write code for the peripheral; you write a kernel driver that pokes the registers (i.e. the API of the peripheral) when it wants the device to do something.
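To illustrate the pattern, here is a minimal sketch of such a register-poking Linux PCI driver. The kernel APIs used here are real; the device ID and the driver name are placeholders you'd replace with what lspci -nn reports for your chip:

    #include <linux/module.h>
    #include <linux/pci.h>

    #define MYNIC_VENDOR_ID 0x1957   /* Freescale's PCI vendor ID          */
    #define MYNIC_DEVICE_ID 0x0001   /* placeholder: check lspci -nn       */

    static void __iomem *mynic_regs;

    static int mynic_probe(struct pci_dev *pdev, const struct pci_device_id *id)
    {
        int err = pci_enable_device(pdev);
        if (err)
            return err;

        err = pci_request_regions(pdev, "mynic");
        if (err)
            goto out_disable;

        mynic_regs = pci_iomap(pdev, 0, 0);   /* map BAR0: the MMIO registers */
        if (!mynic_regs) {
            err = -ENOMEM;
            goto out_regions;
        }

        /* "Poke the registers": read the first 32-bit register as a smoke test. */
        pr_info("mynic: BAR0[0] = 0x%08x\n", ioread32(mynic_regs));
        return 0;

    out_regions:
        pci_release_regions(pdev);
    out_disable:
        pci_disable_device(pdev);
        return err;
    }

    static void mynic_remove(struct pci_dev *pdev)
    {
        pci_iounmap(pdev, mynic_regs);
        pci_release_regions(pdev);
        pci_disable_device(pdev);
    }

    static const struct pci_device_id mynic_ids[] = {
        { PCI_DEVICE(MYNIC_VENDOR_ID, MYNIC_DEVICE_ID) },
        { 0 }
    };
    MODULE_DEVICE_TABLE(pci, mynic_ids);

    static struct pci_driver mynic_driver = {
        .name     = "mynic",
        .id_table = mynic_ids,
        .probe    = mynic_probe,
        .remove   = mynic_remove,
    };
    module_pci_driver(mynic_driver);

    MODULE_LICENSE("GPL");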
Now, in general the register descriptions often aren't freely available, which can make writing drivers really hard.
If you really want/need to write your own firmware for the thing, it probably depends on where the code is stored. If it sits in a ROM or in an inaccessible flash, you'll probably need to do some soldering. If the firmware is updatable, I'd probably try to reverse-engineer the software provided for updating it, if there is one. (Unless it already allows uploading arbitrary files, of course.)

NFC Payment: mobile as reader and emulated card

As I understand it, NFC offers three modes of operation:
Reader/Writer mode:
Reading/Writing of/to NFC tags. (Coupons, SmartPoster tags)
Card Emulation mode (using the Secure Element):
Virtual cards are stored in Secure Element (PayWave, PayPass).
Peer-to-Peer mode:
Communication between two NFC-enabled active devices, used in contactless services: ticketing, money transfers or lower-security access control applications
Is it possible to combine these modes and have NFC transactions between two phones, one as an emulated card in a secure element and the second as the reader/POS? Any information about the subject is appreciated.
Thank you.
Yes, what you are after is possible. What you refer to as card emulation mode is commonly associated with the term digital wallet. In this case, the phone behaves like an EMV-enabled payment card and transmits the necessary signals using the phone's NFC hardware.
On the other end of the spectrum is a reader. This reader can be another phone, or can be a typical merchant terminal that you see in retail locations. As long as both parties implement the standards properly, the data exchange can occur.
The merchant terminals, however, typically have a longer range than the antennas found in phones, so a payment at a merchant terminal is a bit easier to execute. As new phones come out, they tend to have better NFC antennas embedded in them (compare the Nexus S with the S4), so hopefully this gap will close. The EMV standards dictate a 5 cm range for a reader to be compliant, though I've stumbled across many readers that don't reach that range.
As you have probably guessed, I'm familiar with this space: I'm a cofounder of Triangle.io, and what we do is let you use any Android device as a reader for free. You can learn more about our API at http://www.triangle.io if interested. To go back to the question at hand: you can use one phone in card emulation mode, and on the other you can use our API to read the first phone's emulated card. The phone emulating the card needs to implement the EMV specifications properly.
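For a taste of what the reader side sends, the first step of an EMV contactless read is a SELECT of the Proximity Payment System Environment (PPSE). The APDU bytes below come from the EMV contactless specifications; the transceive() function is a stand-in for whatever your NFC stack provides:

    #include <stdint.h>
    #include <stddef.h>

    /* hypothetical: exchange one APDU with the card, return response length */
    size_t transceive(const uint8_t *cmd, size_t len, uint8_t *resp, size_t max);

    /* SELECT PPSE: CLA=00 INS=A4 P1=04 P2=00 Lc=0E data="2PAY.SYS.DDF01" Le=00 */
    static const uint8_t select_ppse[] = {
        0x00, 0xA4, 0x04, 0x00, 0x0E,
        '2', 'P', 'A', 'Y', '.', 'S', 'Y', 'S', '.', 'D', 'D', 'F', '0', '1',
        0x00
    };

    /* The response is a TLV (FCI template) listing the card's payment
     * applications; the reader then SELECTs one of those AIDs and continues
     * with the EMV flow (GET PROCESSING OPTIONS, READ RECORD, ...). */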

How are the graphical user interfaces of devices such as digicoders, VCRs and DVD players made, from power-on to the user interface?

I have C/Java knowledge, but I have never understood how some hardware shows its own screens/graphics from the power-on stage to the user interface (where it never shows a Linux/Unix boot screen, nor a Windows boot screen).
My question is: compared to a PC, how do VCRs/TV digicoders get from power-on to their user interface? Do they use a regular Linux kernel, or is there a special open-source framework that allows us to develop such interfaces?
Thanks
Many embedded systems use U-Boot as a boot loader. U-Boot provides the ability to display a "splash" screen while the Linux kernel is booting.
A device will start the bootloader right after the CPU comes out of reset (usually milliseconds after power-on at most). The bootloader code can initialise the display and show a splash screen if it wants (in the same way most modern non-embedded Linux distributions have a graphical GRUB splash screen). The kernel can avoid changing the display configuration, and on an embedded device the kernel can boot quickly to running userspace (at least an initramfs), which can take over the display and show whatever animation, progress bar, etc. until the full UI is ready.
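As a sketch of how this is wired up in U-Boot: on a board built with CONFIG_SPLASH_SCREEN (and video support), the splashimage environment variable points at a BMP in memory, which U-Boot draws before handing over to the kernel. The address, device numbers and file name below are illustrative:

    => setenv splashimage 0x82000000              # RAM address of the BMP
    => fatload mmc 0:1 ${splashimage} splash.bmp  # load it, e.g. from an SD card
    => setenv splashpos m,m                       # centre it (needs CONFIG_SPLASH_SCREEN_ALIGN)
    => saveenv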
Operating systems such as Windows or Linux are large and general-purpose. They have to initialise themselves and the hardware, which includes interrogating all connected devices for "plug & play". The OS does not know in advance which devices are connected; it has to "discover" the hardware every time it starts. The connected hardware may even have changed since the last boot.
Embedded systems do not usually have large operating systems (often they have no operating system at all), and they usually have very specific hardware known to the system a priori, so they do not need to test for and determine the correct configuration of such devices. These devices are often far simpler too, frequently 'on-chip' peripherals.
That said, your PC is capable of near-instantly displaying a user interface (just not Windows). The BIOS boot process outputs text to the display almost immediately, and the BIOS setup screen is an interactive user interface that starts on request during boot. Also, the last time I booted MS-DOS on a modern PC, it took only a few seconds to start.
Not all embedded systems start "instant-on"; my digital TV PVR even has a progress bar while booting, but being application-specific, it still starts far faster than a general-purpose computer. My Network Attached Storage (NAS) device, an embedded system running Linux, on the other hand takes considerable time to boot, since among other things it has to start the file system, network, USB interfaces, print server, DLNA server and web server - in fact, many of the things required by a general-purpose computer (but it has no display; the UI is presented via the web server).
Some embedded systems with large operating systems and complex hardware can achieve "instant-on" by never truly switching off, but rather going into a low power mode where the system state is retained in memory while all the high powered devices such as a screen, WiFi, Bluetooth etc. are switched off.
