Pacman game in C language

I need to implement a 2-player Pacman game in C. The game will also accept users beyond the two who are playing, but only in view-only mode; they are then admitted to the game in FIFO order.
I'm not sure which approach to take. I will definitely be using the ncurses library for the graphical side of the game, but I'm not sure which IPC mechanism to use. Excluding the socket API, what do you think would be the best and most straightforward way to deal with this problem?

Excluding the socket API and sticking to the low-level APIs, I would use named pipes (FIFOs) to get the job done quickest.
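For illustration, here is a minimal sketch of the named-pipe approach; the pipe path and the message struct are made up, and a real game would open a second FIFO for the opposite direction and handle errors properly.

```c
/* Minimal FIFO sketch (illustrative names only): one process writes its
 * player's position and the peer process reads it from the same path. */
#include <fcntl.h>
#include <stdio.h>
#include <sys/stat.h>
#include <unistd.h>

struct move_msg {                          /* hypothetical message format */
    int player_id;
    int row, col;
};

int main(void)
{
    const char *path = "/tmp/pacman_p1";   /* assumed pipe name */

    mkfifo(path, 0666);                    /* ignores EEXIST on reruns */

    int fd = open(path, O_WRONLY);         /* blocks until a reader opens it */
    if (fd < 0) { perror("open"); return 1; }

    struct move_msg m = { .player_id = 1, .row = 3, .col = 7 };
    if (write(fd, &m, sizeof m) != sizeof m)
        perror("write");

    close(fd);
    return 0;
}
```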

I think it's more complicated to think of this as a two-player-only game.
Easier to think in terms of a generalized client-server arrangement, with any number of players.
Have a server holding the game state, with clients connecting. That arrangement is easily understood and worked with.
Having only two clients and each maintaining the game state while receiving updates from the other is awkward.
Either way, use sockets. That way you get proper location independence.
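To make the client-server shape concrete, here is a rough sketch of a select()-based TCP server loop holding all connections in one place; the port number and the "first two clients are the players" policy are assumptions for illustration only.

```c
/* Sketch of a select()-based game server: one listening socket, a flat
 * array of client sockets, and a single loop that owns the game state. */
#include <netinet/in.h>
#include <sys/select.h>
#include <sys/socket.h>
#include <unistd.h>

#define MAX_CLIENTS 16

int main(void)
{
    int listener = socket(AF_INET, SOCK_STREAM, 0);
    struct sockaddr_in addr = {0};
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port = htons(5555);                 /* assumed port */
    bind(listener, (struct sockaddr *)&addr, sizeof addr);
    listen(listener, 8);

    int clients[MAX_CLIENTS];
    int nclients = 0;

    for (;;) {
        fd_set rfds;
        FD_ZERO(&rfds);
        FD_SET(listener, &rfds);
        int maxfd = listener;
        for (int i = 0; i < nclients; i++) {
            FD_SET(clients[i], &rfds);
            if (clients[i] > maxfd) maxfd = clients[i];
        }

        if (select(maxfd + 1, &rfds, NULL, NULL, NULL) < 0)
            break;

        if (FD_ISSET(listener, &rfds) && nclients < MAX_CLIENTS)
            clients[nclients++] = accept(listener, NULL, NULL);
            /* e.g. first two connections become players, the rest viewers */

        for (int i = 0; i < nclients; i++) {
            if (!FD_ISSET(clients[i], &rfds)) continue;
            char buf[64];
            ssize_t n = read(clients[i], buf, sizeof buf);
            if (n <= 0) {                        /* client went away */
                close(clients[i]);
                clients[i] = clients[nclients - 1];
                nclients--;
                i--;                             /* re-check the swapped-in slot */
                continue;
            }
            /* apply the move to the game state, then broadcast updates */
        }
    }
    return 0;
}
```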

Related

Design of a Communication Stack on CAN

My Goal is to design and implement a portable Communication Stack on CAN.
To keep it simple, let's assume that the protocol stack I want to implement is composed of the following layers:
1) Data Link Layer: CAN driver and so on
2) Communication Layer: handles filtering of received frames and manages the sending of periodic / event-triggered frames
3) Transport Layer: manages the segmentation of messages (a standard CAN frame only carries up to 8 bytes of payload)
4) Application Layer: defined by the end user
My design choice is to build the communication stack around a non-preemptive scheduler and to treat each layer as a task of that scheduler. Communication between the different layers is done using mechanisms such as mutexes and queues.
The questions are:
1) Is this a good design, or is there a much easier one?
2) How do communication stacks really work? I mean, what is the "engine" behind the application layer: is it a scheduler, or is the management of the communication between layers left to the end user?
3) Could anyone point me to a free and simple implementation (ideally in C) of a communication stack (not necessarily for CAN)?
Thank you in advance
You should consider using an existing protocol on top of CAN, such as CANopen. A free implementation is CAN Festival.
Transport Layer: manages the segmentation of messages
No, this is the application layer. It doesn't make sense to handle segmentation unless you have a high-level protocol specifying which CAN identifiers to use and the nature of the data.
The application layer in this case needs to be implemented by you, not the end user. Otherwise you are not making a protocol stack, but merely a glorified CAN driver. Which identifiers are there? What is the nature of the data? What are the priorities? How are messages scheduled on the bus over time? Is the system sending data repeatedly and synchronously, or is it event-driven? Are RTR frames used, and how? And so on.
How do communication stacks really work? I mean, what is the "engine" behind the application layer: is it a scheduler, or is the management of the communication between layers left to the end user?
This is quite a broad question, but generally such stacks are event-driven. There is a message pump directing incoming data to whoever needs it. The CAN stack needs to implement some sort of hardware timer for a given hardware port, to keep track of message timing, but possibly also to pace its own work.
Some stacks can take a "time slice" as a parameter and thereby schedule themselves to an extent. Others are built on the concept of doing as little as possible each time they are called, and instead count on being called over and over again from the main loop. Which makes most sense depends on the end application. The former concept might make most sense in applications with an RTOS or in low-power applications; the latter makes most sense for high-integrity, fast-response systems, such as a car ECU.
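As a purely illustrative sketch of the "call me repeatedly from the main loop" style (every function name and the 100 ms period below are made up), each layer could expose a small non-blocking task:

```c
/* Superloop model: each layer does at most one small piece of work per
 * call and returns immediately; the main loop just keeps cycling them. */
#include <stdbool.h>
#include <stdint.h>

bool can_driver_rx_pending(void);                     /* hypothetical driver hooks */
void can_driver_read(uint32_t *id, uint8_t *data, uint8_t *len);
uint32_t timer_now_ms(void);                          /* hypothetical tick source */

static void datalink_process(void)
{
    uint32_t id; uint8_t data[8], len;
    if (can_driver_rx_pending()) {
        can_driver_read(&id, data, &len);
        /* push the frame into the communication layer's receive queue */
    }
}

static void comm_process(void)
{
    /* pop at most one frame from the receive queue, apply ID filtering,
     * and check whether a periodic frame is due for transmission */
    static uint32_t last_tx_ms;
    if (timer_now_ms() - last_tx_ms >= 100) {         /* assumed 100 ms period */
        last_tx_ms = timer_now_ms();
        /* queue the periodic frame for the data link layer to send */
    }
}

int main(void)
{
    for (;;) {                    /* no RTOS, no preemption */
        datalink_process();
        comm_process();
        /* transport_process(); application_process(); */
    }
}
```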
3) Could anyone point me to a free and simple implementation (ideally in C) of a communication stack (not necessarily for CAN)?
(Please note that asking for external resources like libraries is off-topic on SO.)
http://www.canfestival.org is a free CANopen stack. I haven't used it myself, so I have no idea of the quality.

Reliable way to send file over internet

First of all: I'm not absolutely certain that this is the right place to ask, but I think the question fits here better than on Super User or Server Fault, since it is asked from a programmer's perspective. I figured more programmers might have had the same question (although I couldn't find this specific one!).
I would like to have a feature in my program that allows users to send files to a 'friend'. You can find friends via a username; this all goes through a server which can provide the IP address of a friend.
I wanted to use a TCP connection to send the file. This becomes difficult, however, when one (or both) of the parties is behind a NAT. What is the best way to solve this? I heard that it's possible to relay everything through a server, but I'd rather send everything directly to avoid the server overhead.
I heard about a technique called hole punching, but also that it's pretty complex to implement and not 100% reliable. I could use UDP and implement some scheme to improve reliability, but that seems a bit complex to me. I know Skype, BitTorrent and a whole lot of other programs do similar things (but I don't know the specifics: which protocol they use, whether they use hole punching, etc.).
I looked into FTP a bit, until I realised that it's just a protocol on top of TCP, so I would still need TCP hole punching to make it work... Anyway, I hope someone can give me some advice on this :)
If you don't want the data to pass through a server, I'm not aware of any methods other than TCP hole punching or simple port forwarding of a previously chosen port.
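For what it's worth, here is a rough sketch of the simultaneous-connect part of TCP hole punching, assuming both peers have already learned each other's public address and port from the rendezvous server; whether it actually gets through depends entirely on the NATs involved.

```c
/* Both peers run this at roughly the same time, each binding to the same
 * local port the rendezvous server saw and connecting to the other's
 * public endpoint.  Success is NAT-dependent; treat failure as normal. */
#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdint.h>
#include <sys/socket.h>
#include <unistd.h>

int punch(const char *peer_ip, uint16_t peer_port, uint16_t local_port)
{
    for (int attempt = 0; attempt < 10; attempt++) {
        int s = socket(AF_INET, SOCK_STREAM, 0);
        int yes = 1;
        setsockopt(s, SOL_SOCKET, SO_REUSEADDR, &yes, sizeof yes);

        struct sockaddr_in local = {0};
        local.sin_family = AF_INET;
        local.sin_addr.s_addr = htonl(INADDR_ANY);
        local.sin_port = htons(local_port);    /* same port the server saw */
        if (bind(s, (struct sockaddr *)&local, sizeof local) < 0) {
            close(s);
            return -1;
        }

        struct sockaddr_in peer = {0};
        peer.sin_family = AF_INET;
        peer.sin_port = htons(peer_port);
        inet_pton(AF_INET, peer_ip, &peer.sin_addr);

        if (connect(s, (struct sockaddr *)&peer, sizeof peer) == 0)
            return s;                          /* punched through */

        close(s);
        sleep(1);                              /* retry while the peer does the same */
    }
    return -1;
}
```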

OS X/Linux audio playback with an event-based interface?

I'm working on a streaming audio player for Linux/OS X with a bizarre use case that has convinced me nothing that already exists will work. For the first portion, I just want to receive MP3 data and play it. I'm currently using libmad for decoding and libao for playback. My problem is with libao, and I'm not convinced it's my best option.
In particular, the ao_play function is blocking. It doesn't return until the entire buffer passed to it has been played. This doesn't give enough time to decode blocks between calls to ao_play, so the decoding has to be done either entirely ahead of time, or concurrently. Since this is intended to be streaming, I'm rejecting ahead-of-time decoding offhand. (It's conceivable I could send more than an hour's worth of audio data - I don't want to use that much memory.) This leaves concurrency. But while pthreads is standard across Linux and OS X, many of the surrounding libraries are not. I'm not really convinced I want to go to concurrency - so I'm reconsidering my choice of libao.
For my application, the best model I can think of for audio playback would be getting a file descriptor I could select() on to be notified when it's ready for writes, and then issuing non-blocking writes to it. (This is due to the rest of the details of the use case, which imply I really want a select loop anyway.)
Is there a library that works on both Linux and OS X that works this way?
Although it's much hated, PulseAudio basically works exactly like you describe (using the Asynchronous API, not the simple one).
Unless what you want to do involves low latency or advanced sound work, in which case you might want to look at the JACK Audio Connection Kit.
PortAudio is the one for you. It has a simple callback-driven API, it is cross-platform and low-latency, and it is the best solution if you don't need any fancy features (3D, audio graphs, ...).
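A minimal sketch of what the PortAudio callback model looks like; the callback here just writes silence where the decoded MP3 samples would go, and the sample rate and buffer size are arbitrary choices for the example.

```c
/* Minimal PortAudio callback sketch: opens the default output device and
 * runs a callback that would normally pull samples from the decoder. */
#include <string.h>
#include <portaudio.h>

static int audio_cb(const void *input, void *output,
                    unsigned long frames,
                    const PaStreamCallbackTimeInfo *time_info,
                    PaStreamCallbackFlags flags, void *user)
{
    (void)input; (void)time_info; (void)flags; (void)user;
    /* a real player would copy `frames` stereo samples from a ring buffer */
    memset(output, 0, frames * 2 * sizeof(float));   /* stereo silence */
    return paContinue;
}

int main(void)
{
    Pa_Initialize();

    PaStream *stream;
    Pa_OpenDefaultStream(&stream, 0, 2, paFloat32, 44100, 256, audio_cb, NULL);
    Pa_StartStream(stream);

    Pa_Sleep(2000);               /* let the callback run for two seconds */

    Pa_StopStream(stream);
    Pa_CloseStream(stream);
    Pa_Terminate();
    return 0;
}
```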

C HTTP server - multithreading model?

I'm currently writing an HTTP server in C so that I'll learn about C, network programming and HTTP. I've implemented most of the simple stuff, but I'm only handling one connection at a time. Currently, I'm thinking about how to efficiently add multitasking to my project. Here are some of the options I thought about:
Use one thread per connection. Simple but can't handle many connections.
Use non-blocking API calls only and handle everything in one thread. Sounds interesting, but heavy use of select() and the like is said to be quite slow.
Some other multithreading model, e.g. something complex like lighttpd uses. (Probably) the best solution, but (probably) too difficult to implement.
Any thoughts on this?
There is no single best model for writing multi-tasked network servers. Different platforms have different solutions for high performance (I/O completion ports, epoll, kqueues). Be careful about going for maximum portability: some features are mimicked on other platforms (e.g. select() is available on Windows) and yield very poor performance because they are simply mapped onto some other native model.
Also, there are other models not covered in your list. In particular, the classic UNIX "pre-fork" model.
In all cases, use any form of asynchronous I/O when available. If it isn't, look into non-blocking synchronous I/O. Design your HTTP library around asynchronous streaming of data, but keep the I/O bit out of it. This is much harder than it sounds. It usually implies writing state machines for your protocol interpreter.
That last bit is the most important, because it will allow you to experiment with different representations. It might even allow you to write a compact core around each platform's local, high-performance facilities and swap that core from one platform to the other.
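To illustrate the state-machine idea, here is a toy incremental parser that only detects the end of an HTTP header block; the names and structure are invented for the example rather than taken from any library, but it shows how the parser can resume wherever it left off between non-blocking reads.

```c
/* Feed the parser whatever bytes have arrived; it remembers its position
 * across calls, so partial reads from a non-blocking socket are fine. */
#include <stddef.h>

enum http_state { H_START, H_CR, H_CRLF, H_CRLFCR, H_DONE };

struct http_parser { enum http_state state; };

/* Returns the number of bytes consumed; p->state == H_DONE once the blank
 * line terminating the header block ("\r\n\r\n") has been seen. */
size_t http_feed(struct http_parser *p, const char *buf, size_t len)
{
    size_t i;
    for (i = 0; i < len && p->state != H_DONE; i++) {
        char c = buf[i];
        switch (p->state) {
        case H_START:  if (c == '\r') p->state = H_CR;                break;
        case H_CR:     p->state = (c == '\n') ? H_CRLF
                                : (c == '\r') ? H_CR : H_START;       break;
        case H_CRLF:   p->state = (c == '\r') ? H_CRLFCR : H_START;   break;
        case H_CRLFCR: p->state = (c == '\n') ? H_DONE
                                : (c == '\r') ? H_CR : H_START;       break;
        case H_DONE:   break;
        }
    }
    return i;
}
```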
Yea, do the one that's interesting to you. When you're done with it, if you're not utterly sick of the project, benchmark it, profile it, and try one of the other techniques. Or, even more interesting, abandon the work, take the learnings, and move on to something completely different.
You could use an event loop as in node.js:
Source code of node (c, c++, javascript)
https://github.com/joyent/node
Ryan Dahl (the creator of node) outlines the reasoning behind the design of node.js, non-blocking I/O and the event loop as an alternative to multithreading in a web server.
http://www.yuiblog.com/blog/2010/05/20/video-dahl/
Douglas Crockford discusses the event loop in Scene 6: Loopage (Friday, August 27, 2010)
http://www.yuiblog.com/blog/2010/08/30/yui-theater-douglas-crockford-crockford-on-javascript-scene-6-loopage-52-min/
An index of Douglas Crockford's above talk (if further background information is needed). Doesn't really apply to your question though.
http://yuiblog.com/crockford/
Look at your platform's most efficient socket polling model - epoll (Linux), kqueue (FreeBSD), WSAEventSelect (Windows). Perhaps combine it with a thread pool, handling N connections per thread. You could always start with select() and then replace it with a more efficient model once it works.
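A bare-bones epoll sketch for the Linux case (the port number is arbitrary and error handling is omitted); kqueue and the Windows APIs would each need their own variant of the same loop.

```c
/* Single-threaded epoll loop: accepts connections and echoes data back.
 * A real HTTP server would parse requests where the echo happens. */
#include <fcntl.h>
#include <netinet/in.h>
#include <sys/epoll.h>
#include <sys/socket.h>
#include <unistd.h>

#define MAX_EVENTS 64

int main(void)
{
    int listener = socket(AF_INET, SOCK_STREAM, 0);
    struct sockaddr_in addr = {0};
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port = htons(8080);                        /* assumed port */
    bind(listener, (struct sockaddr *)&addr, sizeof addr);
    listen(listener, SOMAXCONN);

    int ep = epoll_create1(0);
    struct epoll_event ev = { .events = EPOLLIN, .data.fd = listener };
    epoll_ctl(ep, EPOLL_CTL_ADD, listener, &ev);

    for (;;) {
        struct epoll_event events[MAX_EVENTS];
        int n = epoll_wait(ep, events, MAX_EVENTS, -1);
        for (int i = 0; i < n; i++) {
            int fd = events[i].data.fd;
            if (fd == listener) {
                int client = accept(listener, NULL, NULL);
                fcntl(client, F_SETFL, O_NONBLOCK);
                struct epoll_event cev = { .events = EPOLLIN, .data.fd = client };
                epoll_ctl(ep, EPOLL_CTL_ADD, client, &cev);
            } else {
                char buf[4096];
                ssize_t r = read(fd, buf, sizeof buf);
                if (r <= 0) { close(fd); continue; }    /* closing removes it from epoll */
                write(fd, buf, r);                      /* placeholder for real handling */
            }
        }
    }
}
```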
A simple solution might be to have multiple processes: have one process accept connections, and as soon as a connection is established, fork and handle it in that child process.
An interesting variant of this technique is used by the SER/OpenSER/Kamailio SIP proxy: there is one main process that accepts the connections and multiple child worker processes, connected via pipes. The parent sends the new file descriptor through the socket. See this book excerpt at 17.4.2. Passing File Descriptors over UNIX Domain Sockets. The OpenSER/Kamailio SIP proxies are used for heavy-duty SIP processing where performance is a huge issue, and they do very well with this technique (plus shared memory for information sharing). Multi-threading is probably easier to implement, though.
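As a sketch of the file-descriptor-passing trick mentioned above, the sending side boils down to a sendmsg() call carrying an SCM_RIGHTS control message, roughly like this:

```c
/* Pass an accepted socket to a worker over a UNIX domain socket using
 * SCM_RIGHTS ancillary data; the worker receives it with recvmsg(). */
#include <string.h>
#include <sys/socket.h>
#include <sys/uio.h>

int send_fd(int channel, int fd_to_pass)
{
    char dummy = 'x';                      /* must send at least one data byte */
    struct iovec iov = { .iov_base = &dummy, .iov_len = 1 };

    union {                                /* properly aligned ancillary buffer */
        struct cmsghdr hdr;
        char buf[CMSG_SPACE(sizeof(int))];
    } ctrl;
    memset(&ctrl, 0, sizeof ctrl);

    struct msghdr msg = {0};
    msg.msg_iov = &iov;
    msg.msg_iovlen = 1;
    msg.msg_control = ctrl.buf;
    msg.msg_controllen = sizeof ctrl.buf;

    struct cmsghdr *cmsg = CMSG_FIRSTHDR(&msg);
    cmsg->cmsg_level = SOL_SOCKET;
    cmsg->cmsg_type = SCM_RIGHTS;
    cmsg->cmsg_len = CMSG_LEN(sizeof(int));
    memcpy(CMSG_DATA(cmsg), &fd_to_pass, sizeof(int));

    return sendmsg(channel, &msg, 0) < 0 ? -1 : 0;
}
```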

Where can I find benchmarks on different networking architectures?

Where can I find benchmarks on different networking architectures?
I am playing with sockets / threads / forks and I'd like to know which is best. I was thinking there has to be a place where someone has already spelled out all the pros and cons of different architectures for a socket service and listed benchmarks with code that runs.
Ultimately I'd like to run these various configurations with my own code and see which runs best in different circumstances.
Many people I talk to say that I should just use single-threaded select(). But I see an argument for threads when you're storing state information inside the thread to keep the code simple. Where is the trade-off point between writing my own state structure and using a proven threaded architecture?
I've also been told forking is bad... but when you need 12000 connections on a machine that cannot raise the per-process open-file limit, forking is an option! Forking is also nice for stability: when one process needs restarting, it doesn't disturb the others.
Sorry, this is one of my longer questions... so many variables are left empty.
Thanks,
Chenz
edit: here's the link I was looking for, which is a whole paper answering your question. http://www.kegel.com/c10k.html
There are web servers designed along all three models (fork, thread, select). People like to benchmark web servers.
http://www.lighttpd.net/benchmark
Libevent has some benchmarks and links to stuff about how to choose a select() vs. threaded model, generally in favour of using the libevent model.
http://monkey.org/~provos/libevent/
It's very difficult to answer this question as so much depends on what your service is actually doing. Does it have to query a database? read files from the filesystem? perform complicated calculations? go off and talk to some other service? Also, how long-lived are client connections? Might connections have some semantic interaction with other connections, or are they all treated as independent of each other? Might you want to think about load-balancing your service across multiple servers later? (If so, you might usefully think about that now so that any necessary help can be designed in from the start.)
As you hint, the serving machine might have limits which interact with the various techniques, steering you towards one answer or another. You have a per-process file descriptor limit, but remember that you may also have a fixed size process table! How many concurrent clients are you expecting, anyway?
If your service keeps crashing and you need to keep restarting it or you think you want a multi-process model so that connections are isolated from each other, you're probably doing it wrong. Stability is extremely important in this sort of context, and that means good practice and memory hygiene, both in general and in the face of network-based attacks.
Remember the history... fork() is cheap in the Unix world, but spawning new processes is relatively expensive on Windows. OTOH, Windows threads are lightweight, whereas threading has always been a bit alien to Unix and has only relatively recently become widespread.
