Friday, May 28, 2010

Exercise 10: Concurrency and Threading demonstration in Python


1.             Find definitions for eight terms and concepts used in threaded programming:

1.             Thread Synchronisation
Synchronization enables you to control program flow and access to shared data for concurrently executing threads. The four synchronization models are mutex locks, read/write locks, condition variables, and semaphores; a short Python sketch of how these map onto the threading module follows the list below.
·         Mutex locks allow only one thread at a time to execute a specific section of code, or to access specific data.
·         Read/write locks permit concurrent reads and exclusive writes to a protected shared resource. To modify a resource, a thread must first acquire the exclusive write lock. An exclusive write lock is not permitted until all read locks have been released.
·         Condition variables block threads until a particular condition is true.
·         Counting semaphores typically coordinate access to resources. The count is the limit on how many threads can have concurrent access to the data protected by the semaphore. When the count is reached, the semaphore causes the calling thread to block until the count changes. A binary semaphore (with a count of one) is similar in operation to a mutex lock.
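A rough mapping of these four models onto Python's threading module (a sketch only; the count of three for the semaphore is arbitrary, and the standard library has no read/write lock, so that one would have to be built from a Lock plus a Condition or taken from a third-party package):

import threading

mutex = threading.Lock()                   # mutual exclusion: one thread at a time
condition = threading.Condition()          # block threads until some condition is true
semaphore = threading.Semaphore(3)         # at most three concurrent holders
binary_semaphore = threading.Semaphore(1)  # count of one: behaves much like a mutex

with mutex:                                # acquired on entry, released on exit
    pass                                   # ...critical section...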

2.             Locks
Most applications require threads to communicate with one another and synchronize their behavior. The simplest way to accomplish this in a program is with locks. To prevent conflicting access, a thread acquires a lock before using a shared resource and releases it when finished. Imagine a lock on the copy machine for which only one worker can possess a key at a time; without the key, use of the machine is impossible. Locks around shared variables allow threads to communicate and synchronize quickly and easily. A thread that holds a lock on an object knows that no other thread will access that object. Even if the thread with the lock is preempted, another thread cannot acquire the lock until the original thread wakes up, finishes its work, and releases the lock. Threads that attempt to acquire a lock already in use go to sleep until the thread holding the lock releases it. After the lock is freed, the sleeping thread moves to the ready-to-run queue.
In Java, for example, each object has a lock; a thread can acquire the lock for an object by using the synchronized keyword. Methods, or synchronized blocks of code, can only be executed by one thread at a time for a given instance of a class, because that code requires obtaining the object's lock before execution. Continuing with the copier analogy, to avoid clashing copiers we can simply synchronize access to the copier resource, allowing only one worker access at a time, as shown in the code sketch below. We achieve this by having the methods (in the Copier object) that modify the copier state declared as synchronized methods. Workers that need to use a Copier object have to wait in line, because only one thread per Copier object can be executing synchronized code.
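The original article's sample is written in Java; since this exercise concerns Python, here is a rough Python sketch of the same copier idea, with threading.Lock standing in for the object lock that synchronized provides (the Copier class, its method, and the worker names are illustrative, not code from the article):

import threading
import time

class Copier:
    """One shared copier; its lock plays the role of Java's object lock."""
    def __init__(self):
        self._lock = threading.Lock()

    def make_copies(self, worker, pages):
        with self._lock:                 # only one worker may copy at a time
            print(f"{worker} is copying {pages} pages")
            time.sleep(0.1 * pages)      # simulate the time spent copying
            print(f"{worker} is done")

copier = Copier()
workers = [threading.Thread(target=copier.make_copies, args=(f"worker-{i}", i + 1))
           for i in range(3)]
for t in workers:
    t.start()
for t in workers:
    t.join()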

3.       Deadlock
Deadlocking is a classic multithreading problem in which all work is incomplete because different threads are waiting for locks that will never be released. Imagine two threads, which represent two hungry people who must share one fork and knife and take turns eating. They each need to acquire two locks: one for the shared fork resource and one for the shared knife resource. Imagine if thread "A" acquires the knife and thread "B" acquires the fork. Thread A will now block waiting for the fork, while thread B blocks waiting for the knife, which thread A has. Though a contrived example, this sort of situation occurs often, albeit in scenarios much harder to detect. Although difficult to detect and hash out in every case, by following these few rules, a system's design can be free of deadlocking scenarios:

  •          Have multiple threads always acquire a group of locks in the same order. If every thread takes the fork before the knife, the circular wait that causes deadlock cannot occur (a short Python sketch follows this list).
  •          Group multiple locks together under one higher-level lock. In our case, a single "silverware" lock could be acquired before either the fork or the knife is used.
  •          Label resources with variables that are readable without blocking.
  •          Most importantly, design the entire system thoroughly before writing code. Multithreading is difficult, and a thorough design before you start to code will help avoid difficult-to-detect locking problems.
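A minimal Python sketch of the first rule, reusing the fork-and-knife example (the names are illustrative): both threads acquire the locks in the same order, so the circular wait described above cannot happen.

import threading

fork = threading.Lock()
knife = threading.Lock()

def eat(name):
    # Every diner acquires the locks in the same order: fork first, then
    # knife. Because no thread ever holds the knife while waiting for the
    # fork, the circular wait that causes deadlock cannot arise.
    with fork:
        with knife:
            print(f"{name} is eating")

diners = [threading.Thread(target=eat, args=(f"diner-{i}",)) for i in range(2)]
for t in diners:
    t.start()
for t in diners:
    t.join()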


4.       Semaphores
Frequently, several threads will need to share access to a smaller number of resources. For example, imagine a number of threads running in a Web server answering client requests. These threads need to connect to a database, but only a fixed number of database connections are available. How can you efficiently share a small number of database connections among a larger number of threads? One way to control access to a pool of resources (rather than just guarding it with a single lock) is to use what is known as a counting semaphore. A counting semaphore encapsulates the management of the pool of available resources. Implemented on top of simple locks, a semaphore is a thread-safe counter initialized to the number of resources available for use; for example, we would initialize a semaphore to the number of database connections available. As each thread acquires the semaphore, the number of available connections is decremented by one. When a thread finishes with the resource, it releases the semaphore, incrementing the counter. Threads that attempt to acquire a semaphore when all the resources it manages are in use simply block until a resource is free.
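A rough Python sketch of the connection-pool idea using threading.BoundedSemaphore; the pool size of three, the number of requests, and the simulated work are made-up values for illustration:

import threading
import time

MAX_CONNECTIONS = 3                      # size of the pretend connection pool
pool = threading.BoundedSemaphore(MAX_CONNECTIONS)

def handle_request(request_id):
    with pool:                           # blocks if all connections are in use
        print(f"request {request_id}: got a connection")
        time.sleep(0.2)                  # simulate the database work
        print(f"request {request_id}: connection released")

threads = [threading.Thread(target=handle_request, args=(i,)) for i in range(8)]
for t in threads:
    t.start()
for t in threads:
    t.join()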

5.       Mutex (mutual exclusion)
Use mutual exclusion locks (mutexes) to serialize thread execution. Mutual exclusion locks synchronize threads, usually by ensuring that only one thread at a time executes a critical section of code. Mutex locks can also preserve single-threaded code.
To change the default mutex attributes, you can declare and initialize an attribute object. Often, the mutex attributes are set in one place at the beginning of the application so the attributes can be located quickly and modified easily.
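In Python, threading.Lock plays the role of a mutex (the threading module has no attribute object to configure, unlike the POSIX API described above); a small sketch that serializes the critical section updating a shared counter:

import threading

counter = 0
counter_mutex = threading.Lock()   # Python's Lock serves as the mutex

def increment(times):
    global counter
    for _ in range(times):
        with counter_mutex:        # the critical section is serialized
            counter += 1           # only one thread updates the counter at a time

threads = [threading.Thread(target=increment, args=(100_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)                     # always 400000 because the mutex is held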

6.       Thread
A program or process can contain multiple threads that execute instructions according to program code. Like multiple processes that can run on one computer, multiple threads appear to be doing their work in parallel. Implemented on a multi-processor machine, they actually can work in parallel. Unlike processes, threads share the same address space; that is, they can read and write the same variables and data structures.
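A tiny Python illustration of the shared address space: two threads in the same process read and write the same dictionary (the join calls simply keep the ordering deterministic for the demonstration):

import threading

shared = {"greeting": None}        # data structure visible to both threads

def writer():
    shared["greeting"] = "hello from the writer thread"

def reader():
    # Sees the value the other thread wrote, because both threads
    # share the same address space within one process.
    print(shared["greeting"])

w = threading.Thread(target=writer)
w.start()
w.join()
r = threading.Thread(target=reader)
r.start()
r.join()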

7.       Event
An event represents an occurrence or a change of state within a program. An event-driven program is structured as a series of events: the flow of the program is determined by the events that occur, which are dispatched to their handlers as they arrive, from the initial event through to the final one.
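In Python's threading module specifically, an Event is a simple flag that one thread sets and other threads wait on; a minimal sketch (the producer and consumer names are illustrative):

import threading

data_ready = threading.Event()     # an internal flag, initially False

def producer():
    print("producer: preparing data")
    data_ready.set()               # flip the flag; wakes any waiting threads

def consumer():
    data_ready.wait()              # block until the flag is set
    print("consumer: data is ready, continuing")

c = threading.Thread(target=consumer)
p = threading.Thread(target=producer)
c.start()
p.start()
c.join()
p.join()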

8.       Waitable timer
A waitable timer lets a thread pause, or "sleep", for a specified amount of time (or until a deadline) before continuing with its work. Many languages expose this through procedures with names containing keywords like wait, sleep, idle, or timer.sleep.
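In Python the closest equivalents are time.sleep(), which pauses the calling thread for a given number of seconds, and threading.Timer, which waits for an interval and then runs a function in its own thread; a small sketch:

import threading
import time

def ring():
    print("timer fired after 2 seconds")

# threading.Timer runs a function after a delay, in its own thread.
alarm = threading.Timer(2.0, ring)
alarm.start()

time.sleep(1.0)        # time.sleep() is the plain "sleep this thread" call
print("main thread woke up after 1 second")
alarm.join()           # wait for the timer thread to finish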

References:

Alex Roetter. (2001, February 1). Concurrent programming: Writing multithreaded Java applications. Retrieved May 27, 2010, from IBM developerWorks: http://www.ibm.com/developerworks/library/j-thread.html#h2

Blaise Barney. (2010, January 14). POSIX Threads Programming. Retrieved May 28, 2010, from Lawrence Livermore National Laboratory: Computing: https://computing.llnl.gov/tutorials/pthreads/#Designing


2.             A simple demonstration of the threading module in Python (threaddemo.py) that uses both a lock and semaphore to control concurrency is by Ted Herman at the University of Iowa.  The code and sample output below are worth a look.  Report your findings.

threaddemo.py

The program runs a total of nine tasks on separate threads, demonstrating multithreading along with concepts such as random numbers, time operations, and timed waits. At most three tasks run at the same time, and which task finishes first depends on the running time assigned randomly in the program. The programmer has done well to print the program's state at every step: how many tasks are currently running, how long each task will run, which thread has finished, and which task has just started. At the beginning the first three tasks run while the other tasks wait. As soon as any task finishes, a message is displayed with the current progress and the next waiting task is started and shown among the current processes; this continues until all remaining tasks have been started. After each thread ends, a new thread is added, and the program runs until all threads have completed.
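Ted Herman's threaddemo.py is not reproduced here; the sketch below only approximates the behaviour described above (nine tasks, at most three running at once, randomly chosen running times) using a semaphore and a lock from the threading module:

import random
import threading
import time

NUM_TASKS = 9
running_slots = threading.Semaphore(3)   # at most three tasks run at once
print_lock = threading.Lock()            # keep the progress messages tidy

def task(task_id):
    with running_slots:                  # wait here until a slot is free
        duration = random.uniform(0.5, 2.0)   # randomly chosen running time
        with print_lock:
            print(f"task {task_id} started, will run {duration:.2f}s")
        time.sleep(duration)
        with print_lock:
            print(f"task {task_id} finished")

threads = [threading.Thread(target=task, args=(i,)) for i in range(NUM_TASKS)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print("all tasks done")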
