I want to use semaphores to make threads block until the semaphores become positive; but when my threads block on a semaphore, they unblock immediately on the first sem_post called on it, regardless of the value of that semaphore afterwards.
The situation is as follows:
Semaphore is set to -1.
A thread is started to wait 5 seconds before posting to the semaphore.
Once 5 seconds pass, the thread posts to the semaphore.
The semaphore's value is 0, but the main process unblocks anyway.
Here is the code demonstrating the situation:
/**
 * testing whether sem_post on semaphore unblocks threads
 * despite still being negative.
 **/
#include <semaphore.h>
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

sem_t sem;

void *proc(void *arg)
{
    sleep(5);
    puts("posting");
    sem_post(&sem);
    return NULL;
}

void setup()
{
    pthread_t pt;
    pthread_create(&pt, 0, proc, 0);
}

int main(int argc, char **argv)
{
    sem_init(&sem, 0, -1);
    setup();
    sem_wait(&sem);
    return 0;
}
I have found a crude way to overcome this issue by wrapping sem_wait in a busy loop that checks whether the semaphore's value is 0, but this seems like the wrong way to approach the problem.
Is there a more elegant way to block on a semaphore until its value is positive (not 0, nor negative)?
You can't initialize the semaphore to -1 because the API to initialize it only takes unsigned values.
int sem_init(sem_t *sem, int pshared, unsigned int value);
The Open Group
Using -1 most likely caused a value larger than SEM_VALUE_MAX to be passed to sem_init() (the argument is unsigned), so the initialization actually failed. You should check the return code and whether errno was set to EINVAL.
The sem_init() function will fail if:
[EINVAL]
The value argument exceeds SEM_VALUE_MAX.
...
The Open Group
If you want threads to block on the semaphore on the first call to sem_wait, initialize the semaphore to 0 instead.
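For example, here is a sketch of the question's program with the initial value fixed to 0 and the sem_init() return value checked:
/* A sketch: initialize to 0 so the first sem_wait() blocks until sem_post(). */
#include <semaphore.h>
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

static sem_t sem;

static void *proc(void *arg)
{
    sleep(5);
    puts("posting");
    sem_post(&sem);
    return NULL;
}

int main(void)
{
    if (sem_init(&sem, 0, 0) == -1) {   /* 0: block on the first sem_wait() */
        perror("sem_init");             /* would report EINVAL for an out-of-range value */
        return 1;
    }

    pthread_t pt;
    pthread_create(&pt, 0, proc, 0);

    sem_wait(&sem);                     /* returns only after the sem_post() above */
    puts("unblocked");
    return 0;
}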
If your system supports it, you can use sem_getvalue to find the number of waiters on the semaphore if the count is 0.
If sem is locked, then the value returned by sem_getvalue() is either zero or a negative number whose absolute value represents the number of processes waiting for the semaphore at some unspecified time during the call.
The Open Group
An implementation is allowed to choose which behavior it wants, so this only works if your system supports it. Linux, for example, does not support it, since it chooses to return 0 if the semaphore count is 0.
If one or more processes or threads are blocked waiting to lock the
semaphore with sem_wait(3), POSIX.1 permits two possibilities for the
value returned in sval: either 0 is returned; or a negative number
whose absolute value is the count of the number of processes and
threads currently blocked in sem_wait(3). Linux adopts the former
behavior.
man7.org
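On implementations that do report waiters, such a check might look roughly like this, using the question's sem (remember that Linux will simply report 0 while the semaphore is locked):
int sval;
if (sem_getvalue(&sem, &sval) == 0) {
    if (sval > 0)
        printf("semaphore free, value = %d\n", sval);
    else
        printf("semaphore busy, reported value = %d\n", sval); /* 0, or -N waiters on some systems */
}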
Probably the most reliable way for you to implement what you want is to use a mutual exclusion variable, a condition variable, and a counter.
extern pthread_mutex_t lock;
extern pthread_cond_t queue;
extern atomic_uint waiters;

/* assume waiters is initially 0 */
pthread_mutex_lock(&lock);
++waiters;
while (my_status() == WAITING) {
    pthread_cond_wait(&queue, &lock);
}
--waiters;
pthread_mutex_unlock(&lock);
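For completeness, the waking side of that sketch would look roughly like this (my_status()/set_status() are placeholders, not real APIs):
pthread_mutex_lock(&lock);
set_status(READY);                  /* whatever makes my_status() stop returning WAITING */
if (waiters > 0)
    pthread_cond_broadcast(&queue); /* wake all waiters so they re-check their status */
pthread_mutex_unlock(&lock);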
Try initializing the semaphore with zero or a positive integer value. If you initialize it with zero, as in sem_init(&sem, 0, 0);, you will see the desired behavior.
Related
I'm writing a C application that reads and parses data from a set of sensors and, according to the readings of the sensors, turns actuators on or off.
For my application I will be using two threads: one to read and parse the data from the sensors, and another one to act on the actuators. Obviously we may face the problem of one thread reading a variable while the other one is trying to write to it. Here is some sample code.
#include <pthread.h>

int sensor_values;

void *reads_from_sensor(void *arg){
    //writes on sensor_values, while(1) loop
}

void *turns_on_or_off(void *arg){
    //reads from sensor_values, while(1) loop
}

int main(){
    pthread_t threads[2];
    pthread_create(&threads[0], NULL, reads_from_sensor, NULL);
    pthread_create(&threads[1], NULL, turns_on_or_off, NULL);
    //code continues after
}
My question is: how can I solve this issue of one thread writing to a global variable while another thread is trying to read from it at the same time? Thanks in advance.
OP wrote in the comments
The project is still in an alpha stage. I'll make sure I optimize it once it is done. #Pablo, the shared variable is sensor_values. reads_from_sensors write on it and turns_on_or_off reads from it.
...
sensor_value would be a float as it stores a value measured by a certain sensor. That value can either be voltage, temperature or humidity
In that case I'd use a condition variable, via pthread_cond_wait and
pthread_cond_signal. With these functions you can synchronize threads
with each other.
The idea is that both threads get a pointer to a mutex, the condition variable,
and the shared resource; whether you declare them as globals or pass them as
thread arguments doesn't change the idea. In the code below I'm passing all
of these as thread arguments, because I don't like global variables.
The reading thread would lock the mutex and, when it reads a new value from the
sensor, write that value into the shared resource. It then calls
pthread_cond_signal to signal the turning thread that a new value
has arrived and can be read.
The turning thread would also lock the mutex and execute pthread_cond_wait to
wait on the signal. The locking must be done in that way, because
pthread_cond_wait will release the lock and make the thread block until the
signal is sent:
man pthread_cond_wait
DESCRIPTION
The pthread_cond_timedwait() and pthread_cond_wait() functions shall block on a condition variable. The application shall ensure that
these functions are called with mutex locked by the calling thread; otherwise, an error (for PTHREAD_MUTEX_ERRORCHECK and robust
mutexes) or undefined behavior (for other mutexes) results.
These functions atomically release mutex and cause the calling thread to block on the condition variable cond; atomically here means
atomically with respect to access by another thread to the mutex and then the condition variable. That is, if another thread is
able to acquire the mutex after the about-to-block thread has released it, then a subsequent call to pthread_cond_broadcast() or
pthread_cond_signal() in that thread shall behave as if it were issued after the about-to-block thread has blocked.
Example:
#include <stdio.h>
#include <pthread.h>
#include <stdlib.h>
#include <time.h>
#include <unistd.h>

struct thdata {
    pthread_mutex_t *mutex;
    pthread_cond_t *cond;
    int *run;
    float *sensor_value; // the shared resource
};

void *reads_from_sensors(void *tdata)
{
    struct thdata *data = tdata;
    int i = 0;

    while(*data->run)
    {
        pthread_mutex_lock(data->mutex);

        // read from sensor
        *data->sensor_value = (rand() % 2000 - 1000) / 10.0;

        // just for testing, send a signal only every
        // 3 reads
        if((++i % 3) == 0)
        {
            printf("read: value == %f, sending signal\n", *data->sensor_value);
            pthread_cond_signal(data->cond);
        }

        pthread_mutex_unlock(data->mutex);

        sleep(1);
    }

    // sending signal so that other thread can
    // exit
    pthread_mutex_lock(data->mutex);
    pthread_cond_signal(data->cond);
    pthread_mutex_unlock(data->mutex);

    puts("read: bye");
    pthread_exit(NULL);
}

void *turns_on_or_off(void *tdata)
{
    struct thdata *data = tdata;

    while(*data->run)
    {
        pthread_mutex_lock(data->mutex);

        pthread_cond_wait(data->cond, data->mutex);
        printf("turns: value read: %f\n\n", *data->sensor_value);

        pthread_mutex_unlock(data->mutex);

        usleep(1000);
    }

    puts("turns: bye");
    pthread_exit(NULL);
}

int main(void)
{
    srand(time(NULL));

    struct thdata thd[2];
    pthread_mutex_t mutex = PTHREAD_MUTEX_INITIALIZER;
    pthread_cond_t cond = PTHREAD_COND_INITIALIZER;

    // controlling vars
    int run_rfs = 1;
    int run_tof = 1;
    float sensor_value;

    thd[0].run = &run_rfs;
    thd[1].run = &run_tof;

    thd[0].mutex = &mutex;
    thd[1].mutex = &mutex;

    thd[0].cond = &cond;
    thd[1].cond = &cond;

    thd[0].sensor_value = &sensor_value;
    thd[1].sensor_value = &sensor_value;

    pthread_t th[2];

    printf("Press ENTER to exit...\n");

    pthread_create(th, NULL, reads_from_sensors, thd);
    pthread_create(th + 1, NULL, turns_on_or_off, thd + 1);

    getchar();

    puts("Stopping threads...");

    run_rfs = 0;
    run_tof = 0;

    pthread_join(th[0], NULL);
    pthread_join(th[1], NULL);

    return 0;
}
Output:
$ ./a
Press ENTER to exit...
read: value == -99.500000, sending signal
turns: value read: -99.500000
read: value == -25.200001, sending signal
turns: value read: -25.200001
read: value == 53.799999, sending signal
turns: value read: 53.799999
read: value == 20.400000, sending signal
turns: value read: 20.400000
Stopping threads...
read: bye
turns: value read: 20.400000
turns: bye
Note that in the example I only send the signal on every third read (and sleep(1)
between reads) for testing purposes; otherwise the terminal would fill up immediately
and you would have a hard time reading the output.
See also: understanding of pthread_cond_wait() and pthread_cond_signal()
Your question is too generic. There are different multithreaded synchronization methods: mutexes, reader-writer locks, condition variables, and so on.
The easiest and simplest is the mutex (mutual exclusion). It is a variable of type pthread_mutex_t. You first need to initialize it; you can do that in two ways:
assigning the mutex variable the constant value PTHREAD_MUTEX_INITIALIZER
calling the function pthread_mutex_init
Then, before reading or writing a shared variable, you call the function int pthread_mutex_lock(pthread_mutex_t *mutex);, and after leaving the critical section you must release the lock by calling int pthread_mutex_unlock(pthread_mutex_t *mutex);.
If the resource is busy the lock will block the execution of your code until it gets released. If you want to avoid that take a look at int pthread_mutex_trylock(pthread_mutex_t *mutex);.
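Applied to the sensor example in the question, a minimal sketch might look like this (the real sensor and actuator code is left out; sensor_values is the shared variable):
#include <pthread.h>

pthread_mutex_t sensor_lock = PTHREAD_MUTEX_INITIALIZER;
int sensor_values;

void *reads_from_sensor(void *arg)
{
    while (1) {
        int new_value = 42;               /* placeholder for the real sensor read */

        pthread_mutex_lock(&sensor_lock);
        sensor_values = new_value;        /* critical section: write */
        pthread_mutex_unlock(&sensor_lock);
    }
    return NULL;
}

void *turns_on_or_off(void *arg)
{
    while (1) {
        pthread_mutex_lock(&sensor_lock);
        int value = sensor_values;        /* critical section: read */
        pthread_mutex_unlock(&sensor_lock);

        /* act on the actuators using value, outside the lock */
        (void)value;
    }
    return NULL;
}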
If your program has many more reads than writes on the same shared variable, take a look at reader-writer locks.
For example I want to create 5 threads and print them. How do I make the fourth one execute before the second one? I tried locking it with a mutex, but I don't know how to make only the second one locked, so it gives me segmentation fault.
Normally, you define the order of operations, not the threads that do those operations. It may sound like a trivial distinction, but when you start implementing it, you'll see it makes for a major difference. It is also a more efficient approach, because you don't think of the number of threads you need, but of the number of operations or tasks to be done, how many of them can be done in parallel, and how they might need to be ordered or sequenced.
For learning purposes, however, it might make sense to look at ordering threads instead.
The OP passes a pointer to a string for each worker thread function. That works, but is slightly odd; typically you pass an integer identifier instead:
#include <stdlib.h>
#include <inttypes.h>
#include <pthread.h>
#define ID_TO_POINTER(id) ((void *)((intptr_t)(id)))
#define POINTER_TO_ID(ptr) ((intptr_t)(ptr))
The conversion of the ID type -- which I assume to be a signed integer above, typically either an int or a long -- to a pointer is done via two casts. The first cast is to the intptr_t type defined in <stdint.h> (which gets automatically included when you include <inttypes.h>), which is a signed integer type that can hold the value of any void pointer; the second cast is to a void pointer. The intermediate cast avoids a warning in case your ID is of an integer type that cannot be converted to/from a void pointer without potential loss of information (usually described in the warning as "of different size").
The simplest method of ordering POSIX threads, that is not that dissimilar to ordering operations or tasks or jobs, is to use a single mutex as a lock to protect the ID of the thread that should run next, and a related condition variable for threads to wait on, until their ID appears.
The one problem left is how to define the order. Typically, you'd simply increment or decrement the ID value -- decrementing means the threads would run in descending order of ID value, but an ID value of -1 (assuming you number your threads from 0 onwards) would always mean "all done", regardless of the number of threads used:
static pthread_mutex_t worker_lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  worker_wait = PTHREAD_COND_INITIALIZER;
static int             worker_id = /* number of threads - 1 */;

void *worker(void *dataptr)
{
    const int id = POINTER_TO_ID(dataptr);

    pthread_mutex_lock(&worker_lock);
    while (worker_id >= 0) {
        if (worker_id == id) {
            /* Do the work! */
            printf("Worker %d running.\n", id);
            fflush(stdout);

            /* Choose next worker */
            worker_id--;
            pthread_cond_broadcast(&worker_wait);
        }

        /* Wait for someone else to broadcast on the condition. */
        pthread_cond_wait(&worker_wait, &worker_lock);
    }

    /* All done; worker_id became negative.
       We still hold the mutex; release it. */
    pthread_mutex_unlock(&worker_lock);
    return NULL;
}
Note that I didn't let the worker exit immediately after its task is done; this is because I wanted to expand the example a bit: let's say you want to define the order of operations in an array:
static pthread_mutex_t worker_lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  worker_wait = PTHREAD_COND_INITIALIZER;
static int             worker_order[] = { 0, 1, 2, 3, 4, 2, 3, 1, 4, -1 };
static int            *worker_idptr = worker_order;

void *worker(void *dataptr)
{
    const int id = POINTER_TO_ID(dataptr);

    pthread_mutex_lock(&worker_lock);
    while (*worker_idptr >= 0) {
        if (*worker_idptr == id) {
            /* Do the work! */
            printf("Worker %d running.\n", id);
            fflush(stdout);

            /* Choose next worker */
            worker_idptr++;
            pthread_cond_broadcast(&worker_wait);
        }

        /* Wait for someone else to broadcast on the condition. */
        pthread_cond_wait(&worker_wait, &worker_lock);
    }

    /* All done; worker_id became negative.
       We still hold the mutex; release it. */
    pthread_mutex_unlock(&worker_lock);
    return NULL;
}
See how little changed?
Let's consider a third case: a separate thread, say the main thread, decides which thread will run next. In this case, we need two condition variables: one for the workers to wait on, and the other for the main thread to wait on.
static pthread_mutex_t worker_lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  worker_wait = PTHREAD_COND_INITIALIZER;
static pthread_cond_t  worker_done = PTHREAD_COND_INITIALIZER;
static int             worker_id = 0;

void *worker(void *dataptr)
{
    const int id = POINTER_TO_ID(dataptr);

    pthread_mutex_lock(&worker_lock);
    while (worker_id >= 0) {
        if (worker_id == id) {
            /* Do the work! */
            printf("Worker %d running.\n", id);
            fflush(stdout);

            /* Notify we are done. Since there is only
               one thread waiting on the _done condition,
               we can use _signal instead of _broadcast. */
            pthread_cond_signal(&worker_done);
        }

        /* Wait for a change in the worker_id. */
        pthread_cond_wait(&worker_wait, &worker_lock);
    }

    /* All done; worker_id became negative.
       We still hold the mutex; release it. */
    pthread_mutex_unlock(&worker_lock);
    return NULL;
}
The thread that decides which worker should run first should hold the worker_lock mutex when the worker threads are created, then wait on the worker_done condition variable. When the first worker completes its task, it will signal on the worker_done condition variable, and wait on the worker_wait condition variable. The decider thread should then change worker_id to the next ID that should run, and broadcast on the worker_wait condition variable. This continues until the decider thread sets worker_id to a negative value. For example:
int             threads;  /* number of threads to create */
pthread_t      *ptids;    /* already allocated for that many */
pthread_attr_t  attrs;
int             i, result;

/* Simple POSIX threads will work with 65536 bytes of stack
   on all architectures -- actually, even half that. */
pthread_attr_init(&attrs);
pthread_attr_setstacksize(&attrs, 65536);

/* Hold the worker_lock. */
pthread_mutex_lock(&worker_lock);

/* Create 'threads' threads. */
for (i = 0; i < threads; i++) {
    result = pthread_create(&(ptids[i]), &attrs, worker, ID_TO_POINTER(i));
    if (result) {
        fprintf(stderr, "Cannot create worker threads: %s.\n", strerror(result));
        exit(EXIT_FAILURE);
    }
}

/* Thread attributes are no longer needed. */
pthread_attr_destroy(&attrs);

while (1) {
    /*
       TODO: Set worker_id to a new value, or
             break when done.
    */

    /* Wake that worker */
    pthread_cond_broadcast(&worker_wait);

    /* Wait for that worker to complete */
    pthread_cond_wait(&worker_done, &worker_lock);
}

/* Tell workers to exit */
worker_id = -1;
pthread_cond_broadcast(&worker_wait);

/* and reap the workers */
for (i = 0; i < threads; i++)
    pthread_join(ptids[i], NULL);
There is a very important detail in all of the above examples that may be hard to understand without a lot of practice: the way mutexes and condition variables interact (when paired via pthread_cond_wait()).
When a thread calls pthread_cond_wait(), it will atomically release the specified mutex and wait for new signals/broadcasts on the condition variable. "Atomic" means that there is no time in between the two; nothing can occur in between. The call returns when a signal or broadcast is received -- the difference is that a signal goes to only one, a random waiter, whereas a broadcast reaches all threads waiting on the condition variable -- and the thread re-acquires the lock. You can think of this as if the signal/broadcast first wakes up the thread, but the pthread_cond_wait() will only return when it re-acquires the mutex.
This behaviour is implicitly used in all of the examples above. In particular, you'll notice that the pthread_cond_signal()/pthread_cond_broadcast() is always done while holding the worker_lock mutex; this ensures that the other thread or threads wake up and get to act only after the worker_lock mutex is unlocked -- either explicitly, or by the holding thread waiting on a condition variable.
I thought I might draw a directed graph (using Graphviz) about the order of events and actions, but this "answer" is already too long. I do suggest you do it yourself -- perhaps on paper? -- as that kind of visualization has been very useful for myself when I was learning about all this stuff.
I do feel quite uncomfortable about the above scheme, I must admit. At any one time, only one thread is running, and that is basically wrong: any job where tasks should be done in a specific order, should only require one thread.
However, I showed the above examples in order for you (not just OP, but any C programmer interested in POSIX threads) to get more comfortable about how to use mutexes and condition variables.
Consider the code snippet given below :
#include <pthread.h>
#include <semaphore.h>

sem_t empty;
sem_t full;
sem_t mutex;

int main(int argc, char *argv[])
{
    int MAX = 10; // Size of the Buffer

    sem_init(&empty, 0, MAX);
    sem_init(&full, 0, 0);
    sem_init(&mutex, 0, 1);

    return 0;
}
I have only included the required code above. It's part of a Producer-Consumer program. What is the meaning of each parameter in sem_init()? I could make out that the 1st parameter is the address of the semaphore variable and the 3rd one is its value.
Why is the 2nd parameter always 0? What does it mean?
Are we specifying a critical value for the semaphore to wait on with the 2nd parameter?
wait(S) {
    while (S <= 0)
        ; // busy wait
    S--;
}
If I pass 3 as the 2nd parameter to sem_init(), will the while loop in wait(S) be changed to
while (S <= 3)
like this?
Always try to read the Linux documentation (man <command or system_call>) for these types of doubts.
For your case: man sem_init
sem_init() initializes the unnamed semaphore at the address pointed
to by sem. The value argument specifies the initial value for the
semaphore.
web link of the man pages
Please have a look at http://man7.org/linux/man-pages/man3/sem_init.3.html. Regarding the second argument:
The pshared argument indicates whether this semaphore is to be shared
between the threads of a process, or between processes.
This is essentially Boolean, but read the link for more info. If you're really just using threads within one process, this should always be 0; in the less common case where the semaphore is shared between processes, use a non-zero value.
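For context, here is a rough sketch of how these three semaphores are typically used in the producer-consumer pattern the snippet comes from (the buffer handling itself is omitted):
/* Producer */
sem_wait(&empty);   /* wait for a free slot (empty starts at MAX) */
sem_wait(&mutex);   /* enter the critical section (mutex starts at 1) */
/* ... put an item into the buffer ... */
sem_post(&mutex);
sem_post(&full);    /* announce one more filled slot (full starts at 0) */

/* Consumer */
sem_wait(&full);    /* wait for a filled slot */
sem_wait(&mutex);
/* ... take an item out of the buffer ... */
sem_post(&mutex);
sem_post(&empty);   /* announce one more free slot */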
I have 2 programs that need to communicate with each other; the first one must output what the second one puts into the shared memory. I've removed everything but the semaphores, because there is a problem with the synchronization.
This is a school assignment, so I must use semaphores in exactly the way shown below; please don't propose anything else, because I can't use it.
#include <sys/shm.h>
#include <errno.h>
#include <sys/sem.h>

int main()
{
    int semid;
    struct sembuf sobs;

    semid = semget(9999, 1, IPC_CREAT | 0600);
    semctl(semid, 0, SETVAL, 1);

    sobs.sem_num = 0;
    sobs.sem_op = 0;
    sobs.sem_flg = 0;
    semop(semid, &sobs, 1);

    /* DO SOMETHING */

    sobs.sem_op = 1;
    semop(semid, &sobs, 1);

    shmctl(shmid1, IPC_RMID, 0);
    return 0;
}
And the second program:
#include <sys/shm.h>
#include <errno.h>
#include <sys/sem.h>

int main()
{
    int semid;
    struct sembuf sobs;

    semid = semget(9999, 1, 0600);

    sobs.sem_num = 0;
    sobs.sem_op = -1;
    sobs.sem_flg = 0;
    semop(semid, &sobs, 1);

    /* DO SOMETHING */

    sobs.sem_op = 1;
    semop(semid, &sobs, 1);

    shmctl(shmid1, IPC_RMID, 0);
    return 0;
}
So the problem is that if I put a sleep() above the DO SOMETHING in the second program, the first one will still go into the critical section, and it will finish before the second one. But the first one mustn't enter the critical section before the second one exits it. What can I do to prevent the first one from entering?
Typically program A should set the semaphore to 0 initially using semctl. Then program A calls semop with sobs.sem_op=-1, to wait on the semaphore.
Program B doesn't need to do anything with the semaphore before "do something". (Note that A has to be run first, so that the semaphore is created and set up before B starts.) After B is done it should signal A by calling semop with sobs.sem_op=1.
In short A sets up the semaphore and waits for it before "doing something". B "does something" and then signals the semaphore to wake A.
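A rough sketch of that arrangement, keeping the question's key and flags (error checking omitted):
/* Program A: run first; creates the semaphore at 0 and waits on it. */
int semid = semget(9999, 1, IPC_CREAT | 0600);
semctl(semid, 0, SETVAL, 0);                          /* start at 0 */

struct sembuf sobs = { .sem_num = 0, .sem_op = -1, .sem_flg = 0 };
semop(semid, &sobs, 1);                               /* blocks until B posts */
/* DO SOMETHING (runs only after B has finished) */

/* Program B: opens the existing semaphore, works, then posts. */
int semid = semget(9999, 1, 0600);
/* DO SOMETHING */
struct sembuf sobs = { .sem_num = 0, .sem_op = 1, .sem_flg = 0 };
semop(semid, &sobs, 1);                               /* wakes program A */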
You're using so-called SysV (System V) semaphores, as opposed to POSIX semaphores. You might want to look up tutorials for that. Your first mistake is the first argument to semget: you aren't supposed to pass it a value that you've defined, you should use ftok(3) instead (if you chose the key yourself collisions are pretty likely: too many people will simply use 1 or 42 as key value).
Next, some explanations on how semaphores work:
A semaphore has a value.
You can increase its value (foo.sem_op > 0). This is a non-blocking operation.
You can decrease its value (foo.sem_op < 0). This may block (which is what it's used for). Explanation below.
You can wait for it to become 0 (foo.sem_op == 0). If the semaphore is already 0 it doesn't block, otherwise it blocks.
When you decrease the semaphore, the following scenarios can happen (simplified):
If decreasing the semaphore would not make it become negative the call does not block.
If decreasing the semaphore by the value in foo.sem_op would make the semaphore negative it will block until the semaphore's value has become large enough so that the decreasing won't make the semaphore negative any more.
You'd like to use the semaphore as a mutex. In this case you only want values of "1". The pattern now usually looks like this:
Initialize the semaphore, set it to 1.
The first worker decreases the semaphore by 1. This immediately succeeds.
The second worker tries to decrease the semaphore by 1. This blocks, as it would make the semaphore negative.
The first worker finishes and increases the semaphore.
This wakes up the second worker which now decreases the semaphore.
When the second worker is done it also increases the semaphore.
So we have:
Lock mutex: decrease the semaphore by 1.
Unlock mutex: increase the semaphore by 1.
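In terms of semop calls, that lock/unlock pattern might be sketched like this (error checking omitted):
struct sembuf op = { .sem_num = 0, .sem_op = 0, .sem_flg = 0 };

/* lock: decrease by 1, blocks while the semaphore value is 0 */
op.sem_op = -1;
semop(semid, &op, 1);

/* ... critical section ... */

/* unlock: increase by 1, never blocks */
op.sem_op = 1;
semop(semid, &op, 1);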
This question requires that two semaphores be used, one as a mutex, and one as a counting semaphore, and that the pair be used to simulate the interaction between students and a teacher's assistant.
I have been able to utilize the binary semaphore easily enough; however, I cannot seem to find many examples that show the use of a counting semaphore, so I am quite sure I have it wrong, which is causing my code to not execute properly.
My code is below
#include <stdio.h>
#include <pthread.h>
#include <unistd.h>
#include <stdlib.h>
#include <semaphore.h>
#include <time.h>
#include <sys/types.h>

void *taThread();
void *student();

sem_t taMutex;
sem_t semaphore;

int main()
{
    pthread_t tid1;
    srand(time(NULL));

    sem_init(&taMutex, 0, 1);
    sem_init(&semaphore, 1, 3);

    pthread_create(&tid1, NULL, &taThread, NULL);
    pthread_join(tid1, NULL);
    return 0;
}

void *taThread()
{
    pthread_t tid2[10];
    int it = 0;

    printf("Teacher's Assistant taking a nap.\n");

    for (it = 0; it < 10; it++)
    {
        pthread_create(&tid2[it], NULL, &student, NULL);
    }
    for (it = 0; it < 10; it++)
    {
        pthread_join(tid2[it], NULL);
    }
}

void *student()
{
    int xTime;
    xTime = rand() % 10 + 1;

    if (sem_wait(&taMutex) == 0)
    {
        printf("Student has awakened TA and is getting help. This will take %d minutes.\n", xTime);
        sleep(xTime);
        sem_post(&taMutex);
    }
    else if (sem_wait(&semaphore) > 2)
    {
        printf("Student will return at another time.\n");
    }
    else
    {
        sem_wait(&semaphore);
        printf("Student is working on their assignment until TA becomes available.\n");
        sem_wait(&taMutex);
        sem_post(&semaphore);
        printf("Student is entering the TA's office. This will take %d minutes", xTime);
        sleep(xTime);
        sem_post(&taMutex);
    }
}
My main question is: how can I get the threads to poll the counting semaphore concurrently?
I am trying to create a backlog, with some students being forced to leave (or exit the thread) unhelped while others wait on the semaphore. Any assistance is appreciated, and I will offer any clarifications needed.
I'm not sure if your class / teacher wants to make special distinctions here, but fundamentally, a binary semaphore is mostly equivalent to a counting semaphore initialized to 1,¹ so that when you count it down ("P") to zero it becomes "busy" (locked like a mutex), and when you release it ("V") it counts up to its maximum of 1 and is now "un-busy" (unlocked). A counting semaphore typically starts with a higher initial value, typically for counting some resource (like 3 available chairs in a room), so that when you count it down there may still be some left. When you're done using the counted resource (e.g., when the "student" leaves the "TA's office") you count it back up ("V").
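As a quick sketch of the counting case described above (three "chairs"; the names here are made up for illustration):
sem_t chairs;
sem_init(&chairs, 0, 3);   /* three chairs available */

/* a student takes a chair (blocks if all three are taken): */
sem_wait(&chairs);
/* ... sits and waits for the TA ... */
sem_post(&chairs);         /* leaves, freeing the chair */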
With POSIX semaphores, the call:
sem_init(&semaphore,1,3);
says that this is a process-shared semaphore (2nd argument nonzero), rather than a thread-shared semaphore; you don't seem to need that, and I'm not sure if some systems might give you an error—a failing sem_init call, that is—if &semaphore is not in a process-shared region. You should be able to just use 0, 3. Otherwise, this is fine: it's saying, in effect, that there are three "unoccupied chairs" in the "office".
Other than that, you'll need to either use sem_trywait (as #pilcrow suggested), sem_timedwait, or a signal to interrupt the sem_wait call (e.g., SIGALRM), in order to have some student who tries to get a "seat" in the "office" to discover that he can't get one within some time-period. Just calling sem_wait means "wait until there's an unoccupied chair, even if that takes arbitrarily long". Only two things stop this potentially-infinite wait: either a chair becomes available, or a signal interrupts the call.
(The return value from the various sem_* functions tells you whether you "got" the "chair" you were waiting for. sem_wait waits "forever", sem_trywait waits not-at-all, and sem_timedwait waits until you "get the chair" or the clock runs out, whichever occurs first.)
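For instance, the "student gives up if no chair is free" branch might be sketched with sem_trywait like this (this assumes <errno.h> is included; taMutex, semaphore, and xTime are from the question's code):
if (sem_trywait(&semaphore) == 0) {
    /* got a chair: wait for the TA, get help, then free the chair */
    sem_wait(&taMutex);
    printf("Student is getting help. This will take %d minutes.\n", xTime);
    sleep(xTime);
    sem_post(&taMutex);
    sem_post(&semaphore);
} else if (errno == EAGAIN) {
    /* all chairs taken: leave without help */
    printf("Student will return at another time.\n");
}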
¹ The real difference between a true binary semaphore and a counting semaphore is that a binary semaphore doesn't offer you the ability to count. It's either acquired (and an acquire will block), or not-acquired (and an acquire will succeed and block other acquires). Various implementations may consider releasing an already-released binary semaphore a no-op or an error (e.g., runtime panic). POSIX just doesn't offer binary semaphores at all: sem_init initializes a counting semaphore and it's your responsibility to set it to 1, and not over-increment it by releasing when it's already released. See also comments below.