Chapter 13: Concurrency in C

Introduction

Concurrency is an essential concept in modern programming, allowing multiple operations to make progress at the same time. In C, managing concurrency efficiently can yield significant performance gains on multi-core hardware and more responsive applications. This chapter explores techniques and tools for implementing concurrency in C, including multithreading, synchronization mechanisms, and parallel programming.

Understanding Concurrency

Concurrency involves multiple processes or threads making progress at the same time while sharing resources such as memory and CPU time. Properly managing these shared resources is crucial to avoid problems such as data races, deadlocks, and other race conditions.

Multithreading

Multithreading is a form of concurrency where multiple threads within a single process execute independently while sharing the same memory space. The POSIX Threads (pthreads) library is commonly used in C for creating and managing threads.

Creating and Managing Threads

To use pthreads, include the <pthread.h> header file and compile with the -pthread flag. Here’s how to create and join threads:

Example:

#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>

void* threadFunction(void* arg) {
    int* threadId = (int*)arg;
    printf("Thread %d is running\n", *threadId);
    pthread_exit(NULL);
}

int main(void) {
    pthread_t threads[5];
    int threadIds[5];
    int result;

    for (int i = 0; i < 5; i++) {
        threadIds[i] = i + 1;
        result = pthread_create(&threads[i], NULL, threadFunction, &threadIds[i]);
        if (result != 0) {
            fprintf(stderr, "Error creating thread: %d\n", result);
            exit(EXIT_FAILURE);
        }
    }

    for (int i = 0; i < 5; i++) {
        pthread_join(threads[i], NULL);
    }

    return 0;
}

Synchronization

When multiple threads access shared resources, synchronization is necessary to prevent data races and ensure data consistency. Mutexes (mutual exclusion locks) are commonly used for this purpose.

Example:

#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>

pthread_mutex_t lock;
int counter = 0;

void* threadFunction(void* arg) {
    pthread_mutex_lock(&lock);
    counter++;
    printf("Counter value: %d\n", counter);
    pthread_mutex_unlock(&lock);
    pthread_exit(NULL);
}

int main(void) {
    pthread_t threads[5];
    int result;

    if (pthread_mutex_init(&lock, NULL) != 0) {
        fprintf(stderr, "Error initializing mutex\n");
        return 1;
    }

    for (int i = 0; i < 5; i++) {
        result = pthread_create(&threads[i], NULL, threadFunction, NULL);
        if (result != 0) {
            fprintf(stderr, "Error creating thread: %d\n", result);
            exit(EXIT_FAILURE);
        }
    }

    for (int i = 0; i < 5; i++) {
        pthread_join(threads[i], NULL);
    }

    pthread_mutex_destroy(&lock);

    return 0;
}

Condition Variables

Condition variables are synchronization primitives that enable threads to wait until a certain condition is met. They are always used together with a mutex, and the wait belongs in a loop that rechecks the condition, because pthread_cond_wait can wake up spuriously.

Example:

#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>

pthread_mutex_t lock;
pthread_cond_t cond;
int ready = 0;

void* waitThread(void* arg) {
    pthread_mutex_lock(&lock);
    while (!ready) {
        pthread_cond_wait(&cond, &lock);
    }
    printf("Thread proceeding\n");
    pthread_mutex_unlock(&lock);
    pthread_exit(NULL);
}

void* signalThread(void* arg) {
    pthread_mutex_lock(&lock);
    ready = 1;
    pthread_cond_signal(&cond);
    pthread_mutex_unlock(&lock);
    pthread_exit(NULL);
}

int main(void) {
    pthread_t thread1, thread2;

    pthread_mutex_init(&lock, NULL);
    pthread_cond_init(&cond, NULL);

    pthread_create(&thread1, NULL, waitThread, NULL);
    pthread_create(&thread2, NULL, signalThread, NULL);

    pthread_join(thread1, NULL);
    pthread_join(thread2, NULL);

    pthread_cond_destroy(&cond);
    pthread_mutex_destroy(&lock);

    return 0;
}

Parallel Programming

Parallel programming involves dividing a task into subtasks that can be processed simultaneously, typically across multiple CPU cores. The OpenMP API is often used for parallel programming in C.

Using OpenMP

OpenMP provides a simple, directive-based interface for parallel programming. Include the <omp.h> header file, annotate your code with #pragma omp directives, and compile with an OpenMP flag such as -fopenmp (GCC/Clang).

Example:

#include <omp.h>
#include <stdio.h>

int main(void) {
    int sum = 0;

    #pragma omp parallel for reduction(+:sum)
    for (int i = 0; i < 1000; i++) {
        sum += i;
    }

    printf("Sum: %d\n", sum);

    return 0;
}

Parallelizing Loops

Parallelizing loops is one of the most common uses of OpenMP. The #pragma omp parallel for directive splits the loop iterations across multiple threads.

Example:

#include <omp.h>
#include <stdio.h>

int main(void) {
    int array[1000];
    int i;

    #pragma omp parallel for
    for (i = 0; i < 1000; i++) {
        array[i] = i * i;
    }

    for (i = 0; i < 10; i++) {
        printf("array[%d] = %d\n", i, array[i]);
    }

    return 0;
}

Concurrency Issues and Solutions

Deadlocks

A deadlock occurs when two or more threads wait indefinitely for each other to release resources. To avoid deadlocks, ensure that all threads acquire locks in the same global order, or use non-blocking or timed acquisition (e.g., pthread_mutex_trylock or pthread_mutex_timedlock) so a thread can back off instead of waiting forever.

Race Conditions

Race conditions occur when multiple threads access shared resources without proper synchronization, leading to unpredictable results. Use mutexes, condition variables, and atomic operations to prevent race conditions.

Atomic Operations

Atomic operations are indivisible: they complete without being interleaved with accesses from other threads. C11 provides them through the <stdatomic.h> header, and they are useful for simple shared counters and for implementing lock-free data structures and algorithms.

Example:

#include <stdatomic.h>
#include <stdio.h>

int main(void) {
    atomic_int counter = 0;

    /* Single-threaded demonstration of the API; in a real program,
     * multiple threads could call atomic_fetch_add without a mutex. */
    atomic_fetch_add(&counter, 1);
    printf("Counter value: %d\n", atomic_load(&counter));

    return 0;
}

Conclusion

Concurrency in C enables the development of high-performance and responsive applications by allowing multiple tasks to be executed simultaneously. This chapter covered essential concurrency concepts, including multithreading, synchronization mechanisms, parallel programming, and common concurrency issues. By mastering these topics, you can create efficient, scalable, and robust C programs that effectively leverage modern multi-core processors and handle concurrent tasks with ease.
