*7.3. Operating System Implementation*

Dispatch queues are a feature of Grand Central Dispatch (GCD), a concurrency framework built into the iOS and macOS operating systems. GCD provides a high-level, asynchronous programming interface for managing concurrent tasks. Dispatch queues are lightweight and offer a simple interface for executing tasks concurrently without consuming excessive system resources. Queues are managed by the operating system and dequeue tasks in first-in, first-out (FIFO) order, which simplifies managing task dependencies and avoiding race conditions; on a concurrent queue, a dequeued task may then run in parallel with other tasks from the same queue. Dispatch queues can also be created with different priorities to control the order in which work is scheduled and to ensure that high-priority tasks are executed first.
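As a minimal sketch of these properties (the queue labels are illustrative, not taken from the proposed implementation): a serial queue drains its tasks in FIFO order, and a concurrent queue can be created with a higher quality-of-service class so its work is scheduled ahead of lower-priority tasks.

```swift
import Dispatch

// Serial queue: tasks run one at a time, in FIFO order.
let serial = DispatchQueue(label: "com.example.serial")

// Concurrent queue with a high quality-of-service (priority) class.
let concurrent = DispatchQueue(label: "com.example.concurrent",
                               qos: .userInitiated,
                               attributes: .concurrent)

var order: [Int] = []
for i in 0..<3 {
    serial.async { order.append(i) }   // FIFO: appends 0, 1, 2 in order
}
serial.sync { }                        // wait until the serial queue drains
print(order)                           // prints [0, 1, 2]
```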

Threads, in contrast, are a lower-level mechanism for achieving concurrency in a program. Threads provide true parallelism, since multiple threads can execute simultaneously on different processor cores. Each thread has its own stack and program counter, and threads within the same process share its memory. Threads are also managed by the operating system and allow tasks to be processed concurrently in a more fine-grained way than dispatch queues. Compared with dispatch queues, however, threads carry higher overhead and require more system resources, making them less suitable for lightweight tasks. Threads can be used to implement more complex concurrency patterns, such as locking, synchronization, and message passing.
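A small sketch of this lower-level style, assuming Foundation's `Thread` and `NSLock` (the counter and names are hypothetical): two threads share memory and must coordinate access to it explicitly with a lock, work the dispatch-queue model handles for the programmer.

```swift
import Foundation

// Shared counter in process memory, protected by an explicit lock.
var counter = 0
let lock = NSLock()

// Spawn two threads that each increment the counter 1000 times.
let threads = (0..<2).map { _ in
    Thread {
        for _ in 0..<1000 {
            lock.lock()
            counter += 1
            lock.unlock()
        }
    }
}
threads.forEach { $0.start() }

// Crude join: Thread has no join(), so poll until both finish.
while threads.contains(where: { !$0.isFinished }) {
    Thread.sleep(forTimeInterval: 0.01)
}
print(counter) // prints 2000
```

Without the lock, the two increments could interleave and lose updates, which is exactly the class of race condition GCD's queue model is designed to avoid.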

The proposed solution for implementing the modified parallel *k*-means clustering algorithm on iOS leveraged the advantages of dispatch queues to achieve concurrency. The GCD framework provided several types of queues, including serial and concurrent dispatch queues. A serial dispatch queue executed tasks one at a time in FIFO order, while a concurrent dispatch queue allowed multiple tasks to run simultaneously.
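The concurrent-queue pattern can be sketched as follows (queue labels and the unit of work are illustrative, not the paper's implementation): independent tasks are submitted to a concurrent queue and tracked with a `DispatchGroup`, which blocks until all of them have completed.

```swift
import Dispatch

// Concurrent queue: the eight tasks below may run simultaneously.
let queue = DispatchQueue(label: "com.example.pool", attributes: .concurrent)
let group = DispatchGroup()

// Serial queue used only to guard the shared counter against races.
let done = DispatchQueue(label: "com.example.done")
var finished = 0

for _ in 0..<8 {
    queue.async(group: group) {
        // ... one unit of work would run here ...
        done.sync { finished += 1 }
    }
}
group.wait()        // block until all eight tasks have run
print(finished)     // prints 8
```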

In the proposed solution, a concurrent dispatch queue was used to execute the *k*-means clustering algorithm on multiple cores simultaneously. Each task was scheduled on the dispatch queue, and the queue handled the scheduling of tasks across multiple cores. This allowed the algorithm to take advantage of the multi-core neural engine processor and general-purpose processor, leading to improved performance.
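A hedged sketch of how such a step might look, assuming a 2-D Euclidean assignment step of *k*-means (the data and names are hypothetical): `DispatchQueue.concurrentPerform` partitions the loop iterations across the available cores, and each iteration writes only its own slot of the output, so no locking is needed.

```swift
import Dispatch

// Illustrative k-means assignment step: label each point with the
// index of its nearest centroid, with iterations spread across cores.
let points: [(Double, Double)] = [(0, 0), (0.2, 0.1), (5, 5), (5.1, 4.9)]
let centroids: [(Double, Double)] = [(0, 0), (5, 5)]

var labels = [Int](repeating: 0, count: points.count)
labels.withUnsafeMutableBufferPointer { buf in
    DispatchQueue.concurrentPerform(iterations: points.count) { i in
        let p = points[i]
        var best = 0
        var bestDist = Double.infinity
        for (j, c) in centroids.enumerated() {
            let d = (p.0 - c.0) * (p.0 - c.0) + (p.1 - c.1) * (p.1 - c.1)
            if d < bestDist { bestDist = d; best = j }
        }
        buf[i] = best // each iteration writes a distinct index: no race
    }
}
print(labels) // prints [0, 0, 1, 1]
```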

Furthermore, GCD provided mechanisms to ensure thread safety and avoid race conditions through synchronization primitives such as semaphores and barriers. By utilizing these features, the implementation of the parallel *k*-means clustering algorithm on dispatch queues was more efficient and reliable.
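The barrier mechanism can be illustrated with a common reader–writer sketch (the cache and its keys are hypothetical): reads execute concurrently on the queue, while a write submitted with the `.barrier` flag waits for in-flight tasks to finish and runs exclusively, so readers never observe a half-updated value.

```swift
import Dispatch

let queue = DispatchQueue(label: "com.example.shared", attributes: .concurrent)
var cache: [String: Int] = [:]

// Writes take a barrier: exclusive access to the dictionary.
func setValue(_ v: Int, forKey k: String) {
    queue.async(flags: .barrier) { cache[k] = v }
}

// Reads run concurrently with each other, but never during a write.
func value(forKey k: String) -> Int? {
    queue.sync { cache[k] }
}

setValue(42, forKey: "answer")
print(value(forKey: "answer") ?? -1) // prints 42
```

Because the queue is FIFO, the synchronous read enqueued after the barrier write is guaranteed to observe it; a `DispatchSemaphore` could be used instead when a fixed number of tasks must be admitted to a resource.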

### **8. Results and Discussion**

The proposed work was subjected to thorough testing and evaluation in multiple stages to ensure its effectiveness at various levels and within different contexts. The primary focus was on enhancing performance and leveraging the high speeds offered by the two integrated algorithms.
