Ideally, I would connect an Ingenico/VeriFone terminal to the network via an Ethernet cable, and the terminal would exclusively run a program that I wrote. This program would poll a web service, beep when it detects some kind of info, wait for somebody's input, transmit said info back to the web service, and print a ticket.
Is this possible with terminals from Ingenico/VeriFone/someone else?
I'm looking for the form factor/semi-ruggedness of said terminals. We don't need/want something bigger like a PC or laptop.
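In pseudo-C, the loop I have in mind looks roughly like the sketch below. All of the helper names are hypothetical placeholders for whatever the terminal SDK actually provides; here they are stubbed out so the skeleton compiles on any POSIX system.

    /* Minimal sketch of the polling loop described above, with stubbed-out
     * I/O so it compiles anywhere; on a real terminal the stubs would be
     * replaced by the vendor SDK's HTTP, keypad, buzzer and printer calls
     * (all names here are hypothetical). */
    #include <stdio.h>
    #include <unistd.h>

    static int poll_webservice(char *job, size_t len) {
        snprintf(job, len, "order-42");   /* stub: pretend a job arrived */
        return 1;
    }
    static void beep(void) { printf("\a[beep]\n"); }
    static int read_operator_input(char *buf, size_t len) {
        return fgets(buf, (int)len, stdin) != NULL;   /* stand-in for keypad */
    }
    static int post_result(const char *job, const char *in) {
        printf("POST %s -> %s\n", job, in);           /* stand-in for HTTP POST */
        return 0;
    }
    static void print_ticket(const char *job, const char *in) {
        printf("--- TICKET ---\n%s\n%s\n", job, in);  /* stand-in for printer */
    }

    int main(void) {
        char job[128], input[64];
        for (;;) {
            if (poll_webservice(job, sizeof job)) {
                beep();
                if (read_operator_input(input, sizeof input)) {
                    post_result(job, input);
                    print_ticket(job, input);
                }
            }
            sleep(5);   /* poll interval */
        }
    }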
I have built applications on Verifone, Hypercom and Trintech terminals. The Verifones are by far the easiest to develop for. They have simple flash and RAM file systems, apps are downloaded and run as files, and the OS (Verix) is POSIX-like with good C/C++ libraries. The only downside is tool cost: VerixV uses ARM SDT (5K Euro per seat) and older Verix terminals (Coldfire based) use the SDS compiler. The dev kit comes with default keys to sign your apps (not the most secure, but you can password-protect download access on the terminal).

I have written lots of apps on these terminals, not just payment apps. The Verifone multi-app controller (VMAC) is a crock of shit, but it's very easy to run multiple apps yourself using pipes for inter-app comms (your apps won't run on third-party terminals which use VMAC, though).

We used Ethernet connectivity for FTP to manage app and config download as well as transaction batching. We also used WiFi on the latest terminals for the same (and 3G terminals, but I didn't do any of the code on those). Verifone is PC-like in terms of code development and we shared lots of library/app code between WIN32/Verix/VerixV and Linux. Verifone terminals are well built and can take a lot of abuse, but then most serious terminal manufacturers do a good job these days.
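Since Verix is POSIX-like, inter-app comms over pipes looks much like plain Unix IPC. The following is a generic POSIX sketch of the pattern (standard pipe/fork calls, not the Verifone-specific API, and the "PRINT_RECEIPT" message is just an illustration):

    /* Generic POSIX illustration of pipe-based inter-app communication;
     * the real Verix calls differ in detail, so treat this as a sketch
     * of the pattern rather than Verifone-specific code. */
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>
    #include <sys/wait.h>

    int main(void)
    {
        int fds[2];
        if (pipe(fds) == -1) { perror("pipe"); return 1; }

        pid_t pid = fork();
        if (pid == 0) {                       /* "worker app": consumes messages */
            char buf[64];
            close(fds[1]);
            ssize_t n = read(fds[0], buf, sizeof buf - 1);
            if (n > 0) { buf[n] = '\0'; printf("worker app got: %s\n", buf); }
            close(fds[0]);
            return 0;
        }

        /* "controller app": sends a command down the pipe */
        close(fds[0]);
        const char *msg = "PRINT_RECEIPT";
        write(fds[1], msg, strlen(msg));
        close(fds[1]);
        waitpid(pid, NULL, 0);
        return 0;
    }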
I would like to ask: do applications like the Apache web server on Linux, Wireshark, network tools and other real-world applications that work with network connections need a kernel module? If not, then to what extent is it normal practice for applications to ship a kernel module, i.e. when I install some application, a kernel module gets installed with it? I know that when I enable the IIS server on Windows, a specific kernel module gets enabled that does the IIS work. (I don't know why that OS does not implement a raw sockets API that developers can use.)
My question: some time ago I was trying to make a TCP server using raw sockets and found that it was not that easy, since the kernel does all sorts of things like (correct me if I am wrong):
Checking for spoofed packets
adding its own header info inside packets
So I am about to make an application that does the following things in the kernel:
Configuring the NIC, e.g. reading card registers and reporting back
Shutting down a network interface
Starting a network interface
Reading packets from the DMA RX ring and reporting the average number of packets received, in order to detect DoS attacks; if a DoS is detected, shutting down the specific interface (or at least reporting a packet counter)
The application itself will just act as a command controller: a user can use the application to make the changes specified in the above four points.
So I would like to ask: is it common practice for applications to have a kernel module, and why would someone resort to embedding a kernel module in an application?
The above is for learning purposes.
No. Linux programs very rarely have kernel modules. Kernel modules are normally for hardware device drivers.
If a program does need a certain kernel module, it will tell you to install the module yourself. It won't include a copy of the module.
It sounds like you want to make your own driver that replaces the normal driver for your network card. It's possible, but nobody does it. If you want to shut down or start up a network interface, there is already a way to do that without writing your own kernel module. If you want to count the packets, there's already a way to do that. If you want to see all the packets, there's already a way to do that.
There's no way to read card registers already - that's because every card has different registers. But whatever you want to do with those registers, there's probably a way to do it already.
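For example, shutting an interface down, bringing it back up, and reading packet counters can all be done from ordinary user space on Linux, with no kernel module at all. A rough sketch (the interface name "eth0" is just an example, and the ioctl part needs root):

    /* Two of the items above done from user space on Linux, without any
     * kernel module: bring an interface down/up via ioctl() and read its
     * RX packet counter from /proc/net/dev. */
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>
    #include <sys/ioctl.h>
    #include <sys/socket.h>
    #include <net/if.h>

    static int set_if_up(const char *name, int up)
    {
        struct ifreq ifr;
        int fd = socket(AF_INET, SOCK_DGRAM, 0);
        if (fd < 0) return -1;

        memset(&ifr, 0, sizeof ifr);
        strncpy(ifr.ifr_name, name, IFNAMSIZ - 1);
        if (ioctl(fd, SIOCGIFFLAGS, &ifr) < 0) { close(fd); return -1; }

        if (up) ifr.ifr_flags |= IFF_UP;    /* start the interface */
        else    ifr.ifr_flags &= ~IFF_UP;   /* shut it down        */

        int rc = ioctl(fd, SIOCSIFFLAGS, &ifr);
        close(fd);
        return rc;
    }

    static void print_rx_packets(const char *name)
    {
        /* each /proc/net/dev line: "iface: rx_bytes rx_packets ..." */
        char line[512];
        FILE *f = fopen("/proc/net/dev", "r");
        if (!f) return;
        while (fgets(line, sizeof line, f)) {
            char ifname[IFNAMSIZ];
            unsigned long long rx_bytes, rx_packets;
            if (sscanf(line, " %15[^:]: %llu %llu", ifname, &rx_bytes, &rx_packets) == 3
                && strcmp(ifname, name) == 0)
                printf("%s: %llu packets received\n", name, rx_packets);
        }
        fclose(f);
    }

    int main(void)
    {
        print_rx_packets("eth0");          /* packet counter          */
        if (set_if_up("eth0", 0) == 0)     /* shut the interface down */
            set_if_up("eth0", 1);          /* bring it back up        */
        return 0;
    }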
I have a smart card-like miniSD card (it's a javacard as far as I know) and I'm trying to write an emulator for it that runs on Windows and Linux. The emulator will be used in software integration tests. I want to test my client without using the actual hardware for several reasons. One reason is that the actual hardware will change its state irreversibly and doesn't allow a complete reset.
The device implements a mass storage with FAT32 file system. It contains a special device file that is being used for controlling the device via simple file write/read operations.
My goal is that the virtual (emulated) device appears with a drive letter in Windows Explorer as soon as the emulator is started, just as if someone had actually plugged in a real device.
I wonder if there is any open software project that I can base my program on? The biggest challenges are obviously:
Providing/developing a "virtual" (USB/SD) mass storage device
Intercepting file I/O operations on the special device file.
According to Wikipedia, device files are a common way to simplify driver development. So I wondered if there are existing emulation solutions for driver developers. At least I couldn't find any.
Simulating the device file itself would be an important first step. My first idea was to use a normal file and to communicate with the client by actually reading/writing to this file while observing it, i.e. clearing the file as soon as the client has written to it and writing the response into it. I don't know if this could work at all. One problem is that the client doesn't open the file in shared mode, so my simulator cannot access it at the same time.
Then I found out that QEMU can emulate mass storage; however, it seems that it only supports image files, which probably doesn't allow for a device file.
Microsoft has some documentation about how to write USB device emulators and drivers, but it seems to be very complex, and I wondered if there is an existing solution that could be extended.
Finally there is the USB/IP Project, but I don't know if it is helpful as I still need to develop a driver and then I'm back at the complex MS documentation above.
I am currently working with an embedded FOX G20 V board with an ATMEL AT91SAM9G20 processor. I am hoping to establish a connection over Ethernet between this board and a Linux machine. Communication uses the uIP library (a smaller implementation of TCP/IP intended for embedded boards).
Anyway, I've downloaded the development kit offered for the processor, and it has countless examples of different types of communications, one of which is a hello world program.
However, at this point, even with the example, I'm relatively stuck. I am unsure which file of the hello world project I have to compile, since there are many of them. Is it the main.c that is located in at91sam9g20-ek.zip\at91sam9g20-ek\packages\basic-emac-uip-helloworld-project-at91sam9g20-ek-iar.zip\basic-emac-uip-helloworld-project-at91sam9g20-ek\at91sam9g20-ek\basic-emac-uip-helloworld-project\ or is it another file?
The whole point is to get communication established between the board and the remote host (in this case my Linux machine), and to send it "hello world" over Ethernet. I am guessing that the application in this case defines the register addresses through which the board will be able to receive the connection from the remote host (I may be wrong).
In any case, I am hoping to get help from any "experts" who are familiar with the project and who may guide me, or explain to me how exactly to build this application they have provided.
I'm not familiar with this board but according to this link the application is supposed to start a telnet server (on port 1000) and an http server. I suggest that you look at the output on the serial link (to get the IP of your board, let's assume 10.159.245.156 as in the example), and if you get what is expected then you can try to telnet to your board:
telnet 10.159.245.156 1000
The kit gives you project file for three toolchains (IAR 5.4, Keil and GNU). You'll have either to open the correct one depending on your toolchain (which one do you use?), or adapt if you use another one.
Edit: You apparently use the IAR toolchain, thus you need to open the *.eww file (for instance basic-emac-uip-helloworld-project.eww). This example only obtains an IP and displays statistics on the debug output (serial link?). There are other examples for a telnet or http server.
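For what it's worth, a uIP application is just a C callback that the stack invokes for TCP events (wired up through the UIP_APPCALL macro). The following is a minimal sketch of that pattern, not the exact code from the Atmel softpack, so that you know where the "hello world" logic lives:

    /* Minimal sketch of a uIP application: uIP calls the application
     * callback for every TCP event on connections you listen on.
     * Requires the uIP stack shipped with the kit; not standalone. */
    #include "uip.h"

    void hello_init(void)
    {
        uip_listen(HTONS(1000));          /* accept connections on port 1000 */
    }

    void hello_appcall(void)              /* referenced by the UIP_APPCALL macro */
    {
        if (uip_connected()) {
            /* a remote host (e.g. your Linux box running telnet) connected */
            uip_send("hello world\r\n", 13);
        }

        if (uip_newdata()) {
            /* data from the peer is available in uip_appdata,
               uip_datalen() bytes long - here we simply close */
            uip_close();
        }

        if (uip_rexmit()) {
            /* uIP asks us to retransmit the last chunk we sent */
            uip_send("hello world\r\n", 13);
        }
    }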
Moreover, it's a detail, but I think the emacs tag is irrelevant in your post. I think you confused EMAC (the Ethernet MAC peripheral the example is named after) with Emacs, which is a popular text editor.
I have C/Java knowledge, but I have never understood how some hardware shows its own screens/graphics from the power-on stage to the user interface (where it never shows a Linux/Unix boot screen, nor a Windows booting screen).
My question is: compared to how VCRs/TV decoders go from power-on to their user interfaces, how is this done? Do we use a regular Linux kernel, or is there a special open-source framework that allows us to develop such a thing?
Thanks
Many embedded systems use U-Boot as a boot loader. U-Boot provides the ability to display a "splash" screen while the Linux kernel is booting.
A device will start the bootloader right after the CPU comes out of reset (usually milliseconds after power-on at most). The bootloader code can initialize the display and show a splash screen if it wants (in the same way most modern non-embedded Linux distributions have a graphical grub splashscreen). The kernel can avoid changing the display configuration, and on an embedded device the kernel can boot pretty quickly to running userspace (at least an initramfs), which can take over the display and show whatever animation, progress bar, etc until the full UI is ready.
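To make the "userspace takes over the display" part concrete, here is a minimal sketch, assuming Linux with a 32-bits-per-pixel framebuffer at /dev/fb0, of how an initramfs program can paint the screen without any GUI stack. A real splash/progress-bar program would draw an image instead, but the mechanism is the same:

    /* Map the Linux framebuffer device and fill it with a solid colour. */
    #include <fcntl.h>
    #include <linux/fb.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <sys/ioctl.h>
    #include <sys/mman.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = open("/dev/fb0", O_RDWR);
        if (fd < 0) { perror("open /dev/fb0"); return 1; }

        struct fb_var_screeninfo vinfo;
        struct fb_fix_screeninfo finfo;
        ioctl(fd, FBIOGET_VSCREENINFO, &vinfo);   /* resolution, bpp    */
        ioctl(fd, FBIOGET_FSCREENINFO, &finfo);   /* line length, etc.  */

        size_t size = finfo.line_length * vinfo.yres;
        uint8_t *fb = mmap(NULL, size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
        if (fb == MAP_FAILED) { perror("mmap"); close(fd); return 1; }

        /* paint the whole screen dark blue (assumes 32 bpp, XRGB layout) */
        for (uint32_t y = 0; y < vinfo.yres; y++) {
            uint32_t *row = (uint32_t *)(fb + y * finfo.line_length);
            for (uint32_t x = 0; x < vinfo.xres; x++)
                row[x] = 0x00102040;
        }

        munmap(fb, size);
        close(fd);
        return 0;
    }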
Operating systems such as Windows or Linux are large and general purpose. They have to initialise themselves and the hardware, which includes interrogating all connected devices for "plug & play". The OS does not know in advance which such devices are connected; it has to "discover" the hardware every time it starts. The connected hardware may even have changed since it last booted.
Embedded systems do not usually have large operating systems (often they do not have an operating system at all), and they usually have very specific hardware known to the system a priori, so they do not need to test for and determine the correct configuration of such devices. Often these devices are also far simpler, frequently 'on-chip' peripherals.
That said, your PC is capable of instantly displaying a user interface (just not Windows). The BIOS boot process outputs text to the display almost immediately, and the BIOS console is an interactive user interface that starts on request during boot. Also last time I booted MS-DOS on a modern PC, it took only a few seconds to start.
Not all embedded systems start "instant-on"; my digital TV PVR even has a progress bar while booting, but being application-specific, it still starts far faster than a general-purpose computer. My Network Attached Storage (NAS) device, on the other hand, which is an embedded system running Linux, takes considerable time to boot since, among other things, it has to start the file system, network, USB interfaces, print server, DLNA server and web server - in fact, many of the things required of a general-purpose computer (but it has no display; the UI is presented via the web server).
Some embedded systems with large operating systems and complex hardware can achieve "instant-on" by never truly switching off, but rather going into a low power mode where the system state is retained in memory while all the high powered devices such as a screen, WiFi, Bluetooth etc. are switched off.
In short: Is there any known protocol for remote process management?
I have a system that contains several applications, each of which has its own computer on a local network. When the applications are up and running, they communicate without any problems.
What I'm interested in is a protocol to manage the remote applications startup, shutdown and monitoring. By monitoring I mean getting error codes (predefined) when something goes wrong. Ideally I would control the whole system from one managing application and get status about what's going on.
I once worked in a place that wrote an in-house protocol that did this. However, I wish to avoid writing it again if someone already figured this out.
Edit: some more details:
Platforms in use are Windows and Linux, both on x86.
On Windows, C/C++ and .NET are used. On Linux, C/C++.
Why bother with homegrown solutions instead of using tried and tested technology? Unless you only employ programmers who are MENSA members with 30+ years of experience, your solution will be less robust and costlier to maintain.
You failed to mention any details about the platform you're using, so I'll assume a Unix-ish system. I would go with (and have been going with for years)
SNMP for monitoring
either daemontools or cron + scripting (as a distant second choice) for supervision and restart
ssh/scp with RSA authentication for interactive intervention, remote command execution, and occasional transfers
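To be clear about what daemontools-style supervision buys you, the loop below is roughly what it implements for you: fork the managed program, wait for it to die, log the exit status, restart after a pause. This is a stripped-down sketch, not a replacement for the real tool, and /usr/local/bin/myapp is just a placeholder path:

    /* Bare-bones supervise-and-restart loop (what daemontools does for you). */
    #include <stdio.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void)
    {
        for (;;) {
            pid_t pid = fork();
            if (pid == 0) {
                execl("/usr/local/bin/myapp", "myapp", (char *)NULL);
                _exit(127);                 /* exec failed */
            }
            if (pid < 0) { perror("fork"); return 1; }

            int status;
            waitpid(pid, &status, 0);       /* block until the app dies */
            if (WIFEXITED(status))
                fprintf(stderr, "myapp exited with code %d\n", WEXITSTATUS(status));
            else if (WIFSIGNALED(status))
                fprintf(stderr, "myapp killed by signal %d\n", WTERMSIG(status));

            sleep(1);                       /* avoid a tight restart loop */
        }
    }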