
Thread Manager
Volume Number:10
Issue Number:11
Column Tag:Essential Apple Technology

Related Info: Process Manager Memory Manager

Thread Manager for Macintosh Applications

Apple’s Development Guide

By Apple Computer, Inc.

Note: Source code files accompanying article are located on MacTech CD-ROM or source code disks.

This article presents the motivation, architecture, and programmatic interface of the Thread Manager. The architecture section describes how the Thread Manager is integrated into the Macintosh environment and some of the assumptions made in its design. The programmatic interface is then described with commentary on the use of each routine. The article concludes with information on currently known bugs and compatibility issues.

Product Definition

The Thread Manager is the current MacOS solution for lightweight concurrent processing. Multithreading allows an application process to be broken into simple subprocesses that proceed concurrently in the same overall application context. Conceptually, a thread is the smallest amount of processor context state necessary to encapsulate a computation. Practically speaking, a thread consists of a register set, a program counter, and a stack. Threads have a fast context switch time due to their minimal context state requirement and operate within the application context which gives threads full application global access. Since threads are hosted by an application, threads within a given application share the address space, file access paths and other system resources associated with that application. This high degree of data sharing enables threads to be "lightweight" and the context switches to be very fast relative to the heavyweight context switches between Process Manager processes.

An execution context requires processor time to get anything done, and there can be only one thread at a time using the processor. So, just like applications, threads are scheduled to share the CPU, and the CPU time is scheduled in one of two ways: the Thread Manager provides both cooperative and preemptive threads. Cooperative threads explicitly indicate when they are giving up the CPU. Preemptive threads can be interrupted and gain control at (most) any time. The basis for the difference is that there are many parts of the MacOS and Toolbox that cannot function properly when interrupted and/or executed at arbitrary times. Due to this restriction, threads using such services must be cooperative. Threads that do not use the Toolbox or OS may be preemptive.

Cooperative threads operate under a scheduling model similar to the Process Manager, wherein they must make explicit calls for other cooperative threads to get control. As a result, they are not limited in the calls they can make as long as yielding calls are properly placed. Preemptive threads operate under a time slice scheduling model; no special calls are required to surrender the CPU for other preemptive or cooperative threads to gain control. For threads which are compute-bound or use only those MacOS and Toolbox calls that are safe at interrupt time, preemptive threads may be the best choice; the resulting code is cleaner than code that saves partial results and explicitly hands control off to other threads.

Part 1: Requirements Summary

The Thread Manager is an operating system enhancement that allows applications to make use of both cooperative & preemptive multitasking within their application context on all 680x0-based Macintosh platforms, and cooperative multitasking on PowerPC-based Macintoshes. There are two basic types of threads (execution contexts) available: cooperative and preemptive. The different types of threads are distinguished by their scheduling models.

The benefits of per-application multitasking are numerous. Many applications are best structured as several independent execution contexts. An image processing application might want to run a filter on a selected area and still allow the user to continue work on another portion of an image. A database application may allow a user to do a search while concurrently adding entries over a network. With the Thread Manager, it is now possible to keep applications responsive to the user even while executing other operations. The Thread Manager also gives applications an easy way to organize multiple instances of the same or similar code. In each example it is possible to write the software as one thread of execution; however, application code may be simplified by writing each class of operation as a separate thread and letting the Thread Manager handle the interleaving of the threaded execution contexts.

These examples are not intended to be exhaustive, but they indicate the opportunities to exploit the Macintosh system and build complex applications with this technology. The examples show that the model for multiple threads of control must support a variety of applications and user environments. The Thread Manager architecture will, where possible, use the current Macintosh programming paradigms and preserve software compatibility. The Thread Manager enhances the programming model of the Macintosh because there is little need to develop Time Manager or VBL routines to provide the application with a preemptive execution context. There is also no need to save the complete state of a complex calculation in order to make WaitNextEvent or GetNextEvent calls to remain user responsive - simply yield to give the main application thread a chance to handle interface needs.

Hardware Compatibility

The 680x0 version of the Thread Manager has the same hardware requirements as System 7.0, that is, at least 2 megabytes of memory and a Macintosh Plus or newer CPU. The power version of the Thread Manager runs on any Power Macintosh.

Software Compatibility

System 7.0 or greater is required for the 680x0 version of the Thread Manager to operate. The power version of the Thread Manager requires system software for Power Macintosh platforms.

Existing applications that know nothing about the Thread Manager have nothing to fear. The extent of the Thread Manager's influence is to set up the main application thread when the application is launched, and to make an appearance every so often as the preemption timer fires off. Because there is only the application thread, the preemption timer has nothing to do and quietly returns. Thus, the Thread Manager is nearly transparent to existing applications, and no compatibility concerns are expected. New applications, of course, can reap the full benefits of concurrent programming, including a fairly powerful form of multitasking.

The power version of the Thread Manager is built as a Shared Library, named ThreadsLib, that is fully integrated into the Thread Manager.

Intended Users

Developers will gain the ability to have multiple, concurrent, logically separate threads of execution within their application. The Thread Manager will provide Macintosh developers with a state-of-the-art foundation for building the next generation of applications using a multi-threaded application programming environment. Another, less obvious user is system software which operates in the application context. The rule of thumb is: code which operates only within an application context can use the Thread Manager; code that does not, cannot.

Programmatic Interface Description

The Thread Manager performs creation, scheduling and deletion of threads. It allows multiple independent threads of execution within an application, each having its own stack and state. The client application can change the scheduler or context switch parameters to optimize an application for a particular usage pattern.

Applications will interface with the Thread Manager through the use of the trap mechanism we know and love. The API is well defined, compelling, and easy to use - no muss, no fuss. For those who need to get down and dirty, the Thread Manager provides routines to modify the behavior of the scheduling mechanism and context switching code.

The API goes through a single trap: ThreadDispatch. Parameters are transferred on the stack and all routines return an OSErr as their result. The trap dispatch numbers have both a parameter size and a routine number encoded in them which allows older versions of the Thread Manager to safely reject calls implemented only by newer versions. A paramErr is returned for calls not implemented.

ThreadsLib is a shared library, so there is no performance hit from a trap call when using the Thread Manager API. There is a distinction between 680x0 threads and power threads: a 680x0 application may only use 680x0 threads, and power applications may only use power threads. Mixing thread types (power or 680x0) within an application type (power or 680x0) is considered a programming error and is not supported.

Performance

The context switch time for an individual thread is negligible due to the minimal context required for a switch. The default context saved by the Thread Manager includes all processor data, address, and FPU (when required) registers. The thread context may be enhanced by the application (to include application specific context) which will increase context switch times (your mileage may vary).

Both cooperative and preemptive threads are eligible for execution only when the application is switched in by the Process Manager. In this way, all threads have the full application context available to them and are executed only when the application gets time to run.

The interleave design of one cooperative context between every preemptive context guarantees that threads which can use the Toolbox (cooperative threads) will be given CPU time to enhance user interface performance.

Part 2: Functional Specifications

Features Overview

Per-application thread initialization is completed prior to entering the application, which allows applications to begin using the Thread Manager functions as soon as their main routine begins execution. Thread cleanup is not required, as this is done by the Thread Manager at application termination time.

Applications are provided with general purpose routines for thread pool creation, counting, allocation and deletion. Basic scheduling routines are provided to acquire the ID of the currently executing thread and to yield to any thread. Preemptive thread scheduling routines allow a thread to disable preemption during critical sections of code. Advanced scheduling routines give the ability to yield to a particular thread, and get & set the state of any thread. Mechanisms are also provided to customize the thread scheduler and add custom context switching routines.

Software Design & Technical Description

Installation: During system startup, the Thread Manager is installed into the system and sets up system-wide globals and patch code.

Initialization: Per-application initialization is done prior to entering the application. This allows applications to take advantage of the Thread Manager functions as soon as they begin execution of the main application thread. Important: The Memory Manager routine MaxApplZone must be called before any thread other than the main application thread allocates memory, or causes memory to be allocated (see the Constraints, Gotchas & Bugs section for more information).

Cleanup: The Thread Manager is called by the Process Manager when an application terminates. This gives the Thread Manager a chance to tear down the threading mechanism for the application and return appropriate system resources, such as memory.

Control: The Thread Manager gets control in three ways. The straightforward way is through API calls made by a threaded application. All calls to the Thread Manager are made through the trap ThreadDispatch (0xABF2). The less straightforward way is via hardware interrupts to give the Thread Manager preemption scheduler a chance to reschedule preemptive threads. For power applications, the Thread Manager is called through the use of library calls to the Thread Manager shared library.

Thread Types: The Thread Manager allows applications to create and begin execution of two types of threads: cooperative and preemptive. Cooperative threads make use of a cooperative scheduling model and can make use of all Toolbox and operating system functions. This type of thread has all the rights and privileges of regular application code, which includes the use of all Toolbox and OS features available to applications today. For 680x0 applications only, preemptive threads make use of a preemptive scheduling model and may not make general use of Toolbox or operating system services; only those Toolbox or operating system services which may be called from an interrupt service routine may be called by preemptive threads. The Toolbox and OS calling restrictions include traps like LoadSeg which get called on behalf of your application when an unloaded code segment needs to be loaded. Important: Be sure to preload all code segments that get used by preemptive threads. Also note that preemptive threads, like interrupt service routines, may not make synchronous I/O requests.

Main Application Thread: The main application thread is a cooperative thread and contains the main entry point into the application. This thread is guaranteed to exist and can not be disposed of. All applications will have one main application thread, even if they are not aware of the Thread Manager. The main application thread is defined to be responsible for event gathering (via WaitNextEvent or GetNextEvent). If events are pending in the application event queue when a generic yield call is made (no thread ID is specified) by another cooperative thread, the Thread Manager scheduler chooses the main application thread as the next cooperative thread to run. This gives the main application thread a chance to handle events for user responsiveness.

Memory Management: The Thread Manager provides a method of creating a pool of threads. This allows the application to create a thread pool early in its execution before memory has been used or overly fragmented. Threads may be removed from the thread pool on a stack size best fit or exact match basis for better thread pool management. Thread data structures can be allocated at most any time, provided the Memory Manager routine MaxApplZone has been called (see the Constraints, Gotchas & Bugs section for more information). Important: It is considered a programming error to allocate memory, or cause memory to be allocated, during preemptive execution time or from any thread other than the main application thread before MaxApplZone has been called.

Thread stack requirements are determined by the type of thread being created and the application’s specific use of that thread. The stack size of a thread is entirely up to the developer - the Thread Manager can only let the developer know the default size and the currently available thread stack space. Cooperative threads may make Toolbox and OS calls, and therefore generally require a larger stack than threads which cannot make such calls. Stack-based parameter passing from a thread is fully supported, since the Thread Manager does not BlockMove thread stacks in and out of the application’s main stack area. Each thread has its own stack which does not move once allocated.

Scheduling: All scheduling occurs in the context of the currently executing application. When the application gets time to run via the Process Manager, the application’s threads get time via the Thread Manager. Applications which are sleeping, and hence are not scheduled by the Process Manager, do not get their threads executed. Threads are per-application: when the application gets time, its threads get time.

Cooperative and preemptive threads are not given a priority and are scheduled in a round-robin fashion or as dictated by a “yield to” call or a custom scheduler. Both types of threads are guaranteed to begin execution in the normal operating mode of the application. Normal operating mode is defined as the addressing and CPU operation modes into which the application was launched. The operating mode will either be 24 or 32-bit MMU addressing mode, and user or supervisor CPU execution mode. At preemptive reschedule time, the addressing mode of the thread is restored to its preempted state. The CPU operating mode is not changed; rescheduling will only take place if the current thread is executing in the normal application CPU operating mode. If the normal operating mode of the CPU is user mode, and the current thread is executing in supervisor mode when preemption occurs, the Thread Manager does not reschedule and will return control back to the interrupted thread.

Cooperative threads get time when an explicit yield call is made to cause a context switch. All the rules that apply to WaitNextEvent or GetNextEvent hold true for yield calls across cooperative threads. For example, no assumptions can be made about the placement of unlocked handles.

Preemptive threads are not required to make yield calls to cause a context switch (although they certainly may) and share 50% of their CPU time with the currently executing cooperative thread. However, calling yield from a preemptive thread is desirable if that thread is not currently busy.

With the advent of multiply threaded applications comes the issue of data coherency. This is a problem where one thread of execution (either cooperative or preemptive) is looking at shared data while another thread is changing it. The Thread Manager provides a solution to this problem by giving the application the ability to define a “critical” section of code which locks out preemption. With preemption disabled, a thread may look at or change shared or global data safely. The “critical” code mechanism is provided through the use of the ThreadBeginCritical and ThreadEndCritical calls. ThreadBeginCritical increments a counter semaphore and tells the Thread Manager to lock out the preemption mechanism. ThreadEndCritical does just the opposite - when the counter semaphore reaches zero, the preemption mechanism is re-enabled. The ThreadBeginCritical/ThreadEndCritical pair provides developers with the building blocks needed for direct semaphore support.

Writing A Custom Thread Scheduler Routine: Preemption is disabled when the custom scheduler is called, which prevents, among other things, reentrancy problems. There should be no yield or other scheduling calls made at this time. The custom scheduler is provided with a data record defining thread ID information which includes the size of the data record (for important future directions), the current thread ID, the suggested thread ID (which may be kNoThreadID), and the currently interrupted cooperative thread (or kNoThreadID). In addition to this information, the custom scheduler must have knowledge about the threads it wishes to schedule. If the custom scheduler does not wish to select a thread, it can pass back the suggested thread ID (or kNoThreadID) as the thread to schedule and let the Thread Manager's default scheduler decide. If the custom scheduler does not know about all of the threads belonging to the application (it may not if the system creates threads on behalf of the application), it should occasionally send back the suggested thread ID (or kNoThreadID) to give other threads a chance to be scheduled. Note that due to the round-robin scheduling approach, the ‘other’ threads are not guaranteed to be next in line for scheduling.

If the interrupted cooperative thread ID field is not kNoThreadID, the custom scheduler was called during the execution of a preemptive thread and must not schedule a cooperative thread other than the interrupted cooperative thread. Scheduling a different cooperative thread at this time would effectively cause cooperative thread preemption, which could result in a system misunderstanding (crash).

Important: Scheduling with native threads is less complicated because there are no preemptive threads; rescheduling happens only when a thread yields to another thread.

Context: The default context of a thread consists of the CPU registers, the FPU registers if an FPU is present, and a few lowmem globals. Specifically, the saved data is as follows:

CPU Registers: D0-D7, A0-A7, SR (including CCR)

FPU Registers: FPCR, FPSR, FPIAR, FP0-FP7, and the FPU frame

For power applications, the context looks something like this:

CPU Registers: R0-R31

FPU Registers: FP0-FP31, FPSCR

Machine Registers: CTR, LR, PC, CR, XER

The thread context lives on a thread’s A7 stack, and the location of the thread context is saved at context switch time. The A5 register, which contains a pointer to the application’s “A5 world”, and the initial thread MMU mode are set the same as those of the main application thread. This allows all threads to share in the use of the application’s A5 world, which gives threads access to open files and resource chains, for example. The MMU mode of a thread is saved away, and the mode of the interrupted thread is restored, to allow preemption of threads which change the MMU operating mode. The FPU context is fully saved along with the current FPU frame.

Writing a Custom Thread Context Switching Routine: Preemption is disabled when the custom switching routine is called which prevents, among other things, reentrancy problems. There should be no yield or other scheduling calls made at this time. When a custom context switching routine is called, thread context is in transition, so calls to GetCurrentThread and uses of kCurrentThreadID will not be appropriate. Custom switching routines are defined on a per-thread in or out basis. Each thread is treated separately, which allows threads to mix and match custom switchers and parameters. A custom context switcher may be defined for entering a thread and another for exiting the same thread. Each context switching procedure is passed a parameter to be used at the application’s discretion. For example, there could be one custom switching routine that is installed with a different parameter on each thread.

Note: If a custom thread switcher-inner is installed, it will be called before the thread begins execution at the thread entry point.

Important: The entire context is saved by ThreadsLib for any native application. This is due to the fact that compilers can use all the registers during optimization, even the floating point ones.

Programmatic Interface

Data Types

The Thread Manager Gestalt selector and bit field definitions are used to determine if the threads package is installed. The gestaltThreadMgrPresent bit in the result will be true if the Thread Manager is installed. Other bits in the result field are reserved for future definition.


/* 1 */
CONST
 gestaltThreadMgrAttr = 'thds';         {Thread Manager attributes}
 gestaltThreadMgrPresent = 0;           {bit true if Threads present}
 gestaltSpecificMatchSupport = 1;       {bit true if 'exact match' API supported}
 gestaltThreadsLibraryPresent = 2;      {bit true if ThreadsLib is present}
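
Before calling any of the routines described below, an application should confirm that the Thread Manager is installed. The following is a minimal sketch of such a check, assuming the standard Gestalt and ToolUtils interfaces; the function name is illustrative.

FUNCTION ThreadManagerPresent: BOOLEAN;
 VAR
  response: LONGINT;
BEGIN
 ThreadManagerPresent := FALSE;
 IF Gestalt(gestaltThreadMgrAttr, response) = noErr THEN
  {gestaltThreadMgrPresent is bit 0 of the response}
  ThreadManagerPresent := BitAnd(response, 1) <> 0;
END;

A power application would additionally test the gestaltThreadsLibraryPresent bit before relying on ThreadsLib.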

The ThreadState data type indicates the general operational status of a thread. A thread may be waiting to execute, suspended from execution, or executing.


/* 2 */
TYPE
 ThreadState= INTEGER;

CONST
 kReadyThreadState = 0;   {thread is eligible to run}
 kStoppedThreadState = 1; {thread is not eligible to run}
 kRunningThreadState = 2; {thread is running}

The ThreadTaskRef is used to allow calls to the Thread Manager at a time when the application context is not necessarily the current context.


/* 3 */
TYPE
 ThreadTaskRef = Ptr;

The ThreadStyle data type indicates the broad characteristics of a thread. A cooperative thread is one whose execution environment is sufficient for calling Toolbox routines (this requires a larger stack, for example). A preemptive thread is one that does not need to explicitly yield control, and executes preemptively with all other threads.


/* 4 */
TYPE
 ThreadStyle= LONGINT;

CONST
 kCooperativeThread= 1; {thread can use Macintosh Toolbox}
 kPreemptiveThread = 2;   {thread doesn't necessarily yield}

Note: kPreemptiveThread is not defined for use with power Thread Manager.

The ThreadID data type identifies individual threads. ThreadIDs are unique within the scope of the application process. There are a few pre-defined symbolic thread IDs to make the interface easier.


/* 5 */
TYPE
 ThreadID = LONGINT;

CONST
 kNoThreadID= 0; {no thread at all}
 kCurrentThreadID= 1;{thread whose context is current}
 kApplicationThreadID= 2; {thread created for app at launch}

The ThreadOptions data type specifies options to the NewThread routine.


/* 6 */
TYPE
 ThreadOptions = LONGINT;

CONST
 kNewSuspend     = 1;{begin new thread in stopped state}
 kUsePremadeThread = 2; {use thread from supply}
 kCreateIfNeeded = 4;{allocate if no premade exists}
 kFPUNotNeeded   = 8;{don’t save FPU context}
 kExactMatchThread = 16;  {force exact match over best fit}

Note: kFPUNotNeeded is ignored by the power Thread Manager because floating point registers are always saved.

The following information is supplied to a custom scheduler.


/* 7 */
TYPE
 SchedulerInfoRecPtr = ^SchedulerInfoRec;
 SchedulerInfoRec = RECORD
  InfoRecSize:             LONGINT;
  CurrentThreadID:         ThreadID;
  SuggestedThreadID:       ThreadID;
  InterruptedCoopThreadID: ThreadID;
 END;

The following are the type definitions for a thread's entry routine, a custom scheduling routine, custom context switching routine, and a thread termination routine.


/* 8 */
TYPE
 ThreadEntryProcPtr = ProcPtr;  {entry routine}
 { FUNCTION ThreadMain (threadParam: LONGINT): LONGINT; }

 ThreadSchedulerProcPtr = ProcPtr;  {custom scheduler}
 { FUNCTION ThreadScheduler (schedulerInfo: SchedulerInfoRec): ThreadID; }

 ThreadSwitchProcPtr = ProcPtr;  {custom switcher}
 { PROCEDURE ThreadSwitcher (threadBeingSwitched: ThreadID;
     switchProcParam: LONGINT); }

 ThreadTerminationProcPtr = ProcPtr;  {termination routine}
 { PROCEDURE ThreadTerminator (threadTerminated: ThreadID;
     terminationProcParam: LONGINT); }

The following are the type definitions to allow a debugger to watch the creation, deletion and scheduling of threads on a per-application basis.


/* 9 */
TYPE
 DebuggerNewThreadProcPtr = ProcPtr;
 { PROCEDURE DebuggerNewThread (threadCreated: ThreadID); }

 DebuggerDisposeThreadProcPtr = ProcPtr;
 { PROCEDURE DebuggerDisposeThread (threadDisposed: ThreadID); }

 DebuggerThreadSchedulerProcPtr = ProcPtr;
 { FUNCTION DebuggerThreadScheduler (schedulerInfo: SchedulerInfoRec): ThreadID; }

The following are Thread Manager specific errors.

CONST
 threadTooManyReqsErr= -617;
 threadNotFoundErr = -618;
 threadProtocolErr = -619;

General Purpose Routines

These routines allow the application to create, initiate, and delete threads.


/* 10 */
FUNCTION CreateThreadPool (threadStyle: ThreadStyle;
 numToCreate:  INTEGER; stackSize: Size):OSErr;

CreateThreadPool creates a specified number of threads having the given style and stack requirements. The thread structures are put into a supply for later allocation by the NewThread routine. This function may be called repeatedly, which will add threads to the one application thread pool. A pool of threads may be needed if, for example, preemptive threads need to spawn threads. Preemptive threads may only create new threads from an existing thread pool; this is to prevent Toolbox reentrancy if memory allocation must be made to satisfy the request.

If not all of the threads could be created, none are allocated (it’s all or nothing!).

Note: Threads in the allocation pool can not be individually targeted by any of the Thread Manager routines (i.e. they are not associated with ThreadIDs). The only routines that refer to threads in the allocation pool are NewThread and GetFreeThreadCount, but they address the application pool as a whole.

Note: The stackSize parameter is the requested stack size for this set of pooled threads. This stack must be large enough to handle saved thread context, normal application stack usage, interrupt handling routines and CPU exceptions. By passing in a stackSize of zero (0), the Thread Manager will use its default stack size for the type of threads being created. To determine the default stack size for a particular thread type, see the GetDefaultThreadStackSize routine for more information.
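
As an illustration, the following sketch builds thread pools early in an application's startup code, before the heap has had a chance to fragment. The pool sizes and the 4096-byte preemptive stack are assumptions, not recommendations.

PROCEDURE SetUpThreadPools;
 VAR
  err: OSErr;
BEGIN
 MaxApplZone;  {expand the heap first - see the Constraints, Gotchas & Bugs section}
 {four cooperative workers using the default stack size}
 err := CreateThreadPool(kCooperativeThread, 4, 0);
 {two small preemptive threads (680x0 Thread Manager only)}
 IF err = noErr THEN
  err := CreateThreadPool(kPreemptiveThread, 2, 4096);
END;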


/* 11 */
Result codes:  noErr Specified threads were created and are available
 memFullErr Insufficient memory to create the thread structures
 paramErr Unknown threadStyle, or using kPreemptiveThread 
 with the power Thread Manager

FUNCTION GetFreeThreadCount (threadStyle: ThreadStyle;
 VAR freeCount: INTEGER):OSErr;

GetFreeThreadCount finds the number of threads of the given thread style that are available to be allocated. The number of available threads is raised by a successful call to CreateThreadPool or DisposeThread (with the recycleThread parameter set to “true”). The number is lowered by calls to NewThread when a pre-made thread is allocated.


/* 12 */
Result codes:  noErr freeCount has the count of available threadStyle 
threads
 paramErr Unknown threadStyle, or using kPreemptiveThread
 with the power Thread Manager

FUNCTION GetSpecificFreeThreadCount(threadStyle: ThreadStyle;
 stackSize: Size; VAR freeCount: INTEGER):OSErr;

GetSpecificFreeThreadCount finds the number of threads of the given thread style and stack size that are available to be allocated. The number of available threads is raised by a successful call to CreateThreadPool or DisposeThread (with the recycleThread parameter set to “true”). The number is lowered by calls to NewThread when a pre-made thread is allocated.


/* 13 */
Result codes:  noErr freeCount has the count of available threadStyle 
threads
 paramErr Unknown threadStyle, or using
 kPreemptiveThread with the power Thread Manager

FUNCTION GetDefaultThreadStackSize (threadStyle: ThreadStyle;
 VAR stackSize: Size):OSErr;

GetDefaultThreadStackSize returns the default stack size needed for the type of thread requested. The value returned is the stack size used if zero (0) is passed in the CreateThreadPool & NewThread stackSize parameter. This value is by no means absolute, and most threads do not need as much stack space as the default value. This routine and ThreadCurrentStackSpace are provided to help tune your threads for optimal memory usage.


/* 14 */
Result codes:  noErr stackSize has the default stack needed for threadStyle 
threads
 paramErr Unknown threadStyle, or using
 kPreemptiveThread with the power Thread Manager

FUNCTION ThreadCurrentStackSpace (thread: ThreadID;
 VAR freeStack: LONGINT):OSErr;

ThreadCurrentStackSpace returns the current stack space available for the desired thread. Be aware that various system services will run on your thread stack (interrupt routines, exception handlers, etc.) so be sure to account for those in your stack usage calculations. See GetDefaultThreadStackSize for more information.


/* 15 */
Result codes:  noErr freeStack has the amount of stack space available
 in thread
 threadNotFoundErr There is no existing thread with the specified ThreadID

Note: When using this routine from a preemptive thread, care must be taken when obtaining information about another thread. It is not always possible to know the stack environment of a preempted thread - the Toolbox may have temporarily changed stacks to perform its functions.


/* 17 */
FUNCTION NewThread (threadStyle: ThreadStyle;
 threadEntry: ThreadEntryProcPtr;
 threadParam: LONGINT;
 stackSize: Size;
 options: ThreadOptions;
 threadResult: LongIntPtr;
 VAR threadMade: ThreadID):OSErr;

NewThread creates or allocates a thread structure with the specified characteristics, and puts the thread's identifier in the threadMade parameter. The threadEntry parameter is the entry address of the thread, and is best represented as a Pascal-style function. The threadParam parameter is passed as a parameter to that function for application-defined uses. When the thread terminates, the function result is put into threadResult (pass nil for threadResult if you are not interested in the thread's result). If an error is returned, the threadMade parameter is set to kNoThreadID.

The ThreadOptions parameter specifies optional behavior of NewThread. Thread options are summed together to create the desired combination of options. The kNewSuspend option indicates that the new thread should begin in the kStoppedThreadState, ineligible for execution. The kUsePremadeThread option requests that the new thread be allocated from an existing pool of premade threads. By default, threads allocated from the thread pool are done so on a stack size best fit basis. The kExactMatchThread option requires threads allocated from the pool to have a stack size which exactly matches the stack size requested by NewThread. The kCreateIfNeeded option gives NewThread permission to allocate an entirely new thread if the supply allocation request can not be honored. The kFPUNotNeeded option will prevent FPU context from being saved for the thread. This option will speed the context switch time for a thread that does not require FPU context.

Important: The storage for threadResult needs to be available when the thread terminates. Therefore, an appropriate storage place would be in the application globals or as a local variable of the application's main routine. An inappropriate place would be as a local variable of a subroutine that completes before the thread terminates.

Important: Preemptive threads may only call this routine if the kUsePremadeThread option is set.

Note: The stackSize parameter is the requested stack size of the new thread. This stack must be large enough to handle saved thread context, normal application stack usage, interrupt handling routines and CPU exceptions. By passing in a stackSize of zero (0), the Thread Manager will use its default stack size for the type of thread being created. To determine the default stack size for a particular thread type, see the GetDefaultThreadStackSize routine for more information.

Note: ThreadsLib does not allow you to create preemptive threads, and it ignores the kFPUNotNeeded option, since all of the native context has to be saved.
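
To make the parameters concrete, here is a minimal sketch of spawning a cooperative worker. CountWords, DoTheCounting, and the two globals are illustrative names, not part of the Thread Manager API; gWorkerResult lives in the application globals so that it is still valid when the thread terminates.

VAR
 gWorkerResult: LONGINT;  {receives the thread's function result}
 gWorkerID: ThreadID;

FUNCTION CountWords (threadParam: LONGINT): LONGINT;
BEGIN
 {threadParam might carry a pointer or handle cast to a LONGINT}
 CountWords := DoTheCounting(threadParam);  {hypothetical worker routine}
END;

PROCEDURE SpawnWorker (docRef: LONGINT);
 VAR
  err: OSErr;
BEGIN
 err := NewThread(kCooperativeThread, @CountWords, docRef,
   0,  {use the default stack size}
   kUsePremadeThread + kCreateIfNeeded,  {prefer the pool, allocate if it is empty}
   @gWorkerResult, gWorkerID);
END;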


/* 18 */
Result codes:  noErr Specified thread was made or allocated
 memFullErr Insufficient memory to create the thread structure
 threadTooManyReqsErr There are no matching thread structures available
 paramErr Unknown threadStyle, or using kPreemptiveThread with 
 the power Thread Manager

FUNCTION DisposeThread (threadToDump: ThreadID; threadResult:  LONGINT; 
recycleThread: BOOLEAN):OSErr;

DisposeThread gets rid of the specified thread. The threadResult parameter is passed on to the thread's creator (see NewThread). The recycleThread parameter specifies whether to return the thread structure to the allocation pool supply, or to free it entirely.


/* 19 */
Result codes:     noErr   Specified thread was disposed
 threadNotFoundErr There is no existing thread with the specified ThreadID
 threadProtocolErr ThreadID specified the application thread

Note: Disposing a thread from a preemptive thread will force the disposed thread to be recycled regardless of the recycleThread setting. A thread that returns from its entry routine disposes of itself.

Basic Scheduling Routines

These routines allow the application to get information about and have basic scheduling control of the current thread, without specific attention to the other threads in the application.


/* 20 */
FUNCTION GetCurrentThread (VAR currentThreadID: ThreadID):OSErr;

GetCurrentThread finds the ThreadID of the current thread, and stores it in the currentThreadID parameter.


/* 21 */
Result codes:     noErr   current ThreadID returned
 threadNotFoundErr There is no current thread

FUNCTION YieldToAnyThread : OSErr;

YieldToAnyThread relinquishes the current thread's control, causing generalized rescheduling. The current thread suspends in the kReadyThreadState, awaiting availability of the CPU. When the thread is again scheduled, this routine regains control and returns to the caller.

Important: Threads must yield in the CPU addressing mode (24 or 32-bit) in which the computer normally operates. However, threads may be preempted in any CPU addressing mode.


/* 22 */
Result codes:     noErr Current thread has yielded and is now running 
again.
 threadProtocolErr Current thread is in a critical section
  (see ThreadBeginCritical)

Preemptive Thread Scheduling Routines

These routines are useful when the application includes preemptive threads.


/* 23 */
FUNCTION ThreadBeginCritical : OSErr;

ThreadBeginCritical indicates to the Thread Manager that the current thread is entering a critical section with respect to all other threads in the current application. Preemptive scheduling is disabled to prevent interference from the other threads. Note that this routine is not needed if there are no active preemptive threads in the application.

Note: Critical sections may be nested.

Important: Preemptive threads may be interrupted to execute a cooperative thread, so critical sections can exist in them, as well.


/* 24 */
Result codes:  noErr Current thread can now execute critical section

FUNCTION ThreadEndCritical : OSErr;

ThreadEndCritical indicates to the Thread Manager that the current thread is exiting a critical section.


/* 25 */
Result codes:  noErr Current thread is now out of most nested critical 
section
 threadProtocolErr Current thread is not in a critical section
  (see ThreadBeginCritical)
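
As a minimal sketch, assuming a hypothetical counter gJobCount that is shared between a preemptive producer and a cooperative consumer, an update would be bracketed like this:

PROCEDURE AddJob;
 VAR
  err: OSErr;
BEGIN
 err := ThreadBeginCritical;  {preemption is now locked out}
 gJobCount := gJobCount + 1;  {safe to touch the shared data}
 err := ThreadEndCritical;  {preemption re-enabled when the nesting count reaches zero}
END;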

Advanced Scheduling Routines

These routines allow the application to schedule threads with greater control and responsibility. Typically, an application-wide view of threads is needed when applying these routines.


/* 26 */
FUNCTION YieldToThread (suggestedThread: ThreadID):OSErr;

YieldToThread relinquishes the current thread's control, causing generalized rescheduling, but passes the suggestedThread to the scheduler. The current thread suspends in the kReadyThreadState, awaiting availability of the CPU. When the thread is again scheduled, this routine regains control and returns to the caller.

Important: Threads must yield in the CPU addressing mode (24 or 32-bit) in which the computer normally operates. Preemptive threads should never explicitly yield to cooperative threads. Doing so would in effect be causing preemption between cooperative threads.


/* 27 */
Result codes:     noErr Current thread has yielded and is now running 
again.
 threadNotFoundErr There is no existing thread with the specified id,
 or the suggested thread is not in the ready state.
 threadProtocolErr Current thread is in a critical section (see 
 ThreadBeginCritical)

FUNCTION GetThreadState (threadToGet: ThreadID; 
 VAR  threadState:ThreadState):OSErr;

GetThreadState returns the current state of the specified thread in the threadState parameter. In the presence of preemptive threads, the state of any thread can change asynchronously (at any time). This implies that the value returned from GetThreadState might be inaccurate by the time the caller checks it. If absolute correctness is required, this call should be made while preemptive scheduling is disabled, such as in a critical section (delimited by ThreadBeginCritical & ThreadEndCritical) or during the custom scheduling routine (see SetThreadScheduler).


/* 28 */
Result codes:     noErr   threadState contains the specified thread's 
state
 threadNotFoundErr There is no existing thread with the specified ThreadID

FUNCTION SetThreadState (threadToSet: ThreadID; 
 newState:ThreadState;  suggestedThread: ThreadID):OSErr;

SetThreadState puts the specified thread into the specified state. If the current thread is specified, and newState is either kReadyThreadState or kStoppedThreadState, rescheduling occurs and suggestedThread is passed on to the scheduler.

Important: Threads must yield in the CPU addressing mode (24 or 32-bit) in which the computer normally operates.


/* 29 */
Result codes:  noErr Thread was put in the specified state. If this was the
 current thread, it is now running again
 threadNotFoundErr There is no existing thread with the specified ThreadID,
 or the suggested thread is not in the ready state.
 threadProtocolErr Caller attempted to suspend/stop the desired thread, but
 the desired thread is in a critical section (see ThreadBeginCritical),
 or newState is an invalid state.

FUNCTION SetThreadStateEndCritical (threadToSet: ThreadID;
   newState:ThreadState; suggestedThread: ThreadID):OSErr;

SetThreadStateEndCritical atomically puts the specified thread into the specified state and exits the current thread’s critical section. If the current thread is specified, and newState is either kReadyThreadState or kStoppedThreadState, rescheduling occurs and suggestedThread is passed on to the scheduler. This call is useful in cases where the current thread needs to put itself in a stopped state at the end of a critical section, thereby closing the scheduling window between a call to ThreadEndCritical and SetThreadState.

Important: Threads must yield in the CPU addressing mode (24 or 32-bit) in which the computer normally operates.
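
The sketch below illustrates that window-closing use, again assuming the hypothetical shared counter gJobCount: a worker checks for work inside a critical section and, if there is none, stops itself and leaves the critical section in one atomic step.

PROCEDURE SleepUntilWork;
 VAR
  err: OSErr;
BEGIN
 err := ThreadBeginCritical;
 IF gJobCount = 0 THEN
  {stop and exit the critical section atomically}
  err := SetThreadStateEndCritical(kCurrentThreadID,
    kStoppedThreadState, kNoThreadID)
 ELSE
  err := ThreadEndCritical;
 {when control returns here, the thread has been made ready and is running again}
END;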


/* 30 */
Result codes:  noErr Thread was put in the specified state. If this was the
 current thread, it is now running again
 threadNotFoundErr There is no existing thread with the specified ThreadID,
 or the suggested thread is not in the ready state.
 threadProtocolErr Current thread is not in a critical section (see
 ThreadBeginCritical), or newState is an invalid state.

FUNCTION GetThreadCurrentTaskRef (VAR threadTRef: ThreadTaskRef): OSErr;

GetThreadCurrentTaskRef returns an application process reference for later use, potentially at interrupt time. This task reference will allow the Thread Manager to get & set information for a particular thread during any application context.


/* 31 */
Result codes:  noErr Thread task reference was returned

FUNCTION GetThreadStateGivenTaskRef ( threadTRef: ThreadTaskRef; 
 threadToGet: ThreadID; VAR threadState: ThreadState ):OSErr;

GetThreadStateGivenTaskRef returns the state of the given thread in a particular application. The primary use of this call is for completion routines or interrupt level code which must acquire the state of a given thread at times when the application context is unknown.


/* 32 */
Result codes:     noErr   threadState contains the specified thread's 
state
 threadNotFoundErr There is no existing thread with the specified 
 ThreadID & TaskRef
 threadProtocolErr Caller passed in an invalid TaskRef.

FUNCTION SetThreadReadyGivenTaskRef( threadTRef: ThreadTaskRef; 
 threadToSet: ThreadID ):OSErr;

SetThreadReadyGivenTaskRef marks a stopped thread as ready and eligible to run; however, the thread is not put in the ready queue until the next time rescheduling occurs. Only threads in the stopped state are eligible to be marked as ready by this routine. An example use of this routine is to allow a completion routine to unblock a thread which stopped itself after making an asynchronous I/O call, as in the sketch below.
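
The following is a hedged sketch of that unblocking pattern. The globals and the means of arranging the wake-up (a completion routine, Time Manager task, or similar) are illustrative only; real interrupt-time code must also arrange access to the application globals (A5 is not guaranteed) and must not call the wake-up routine before the thread has actually stopped.

VAR
 gWakeTaskRef: ThreadTaskRef;
 gWaiterID: ThreadID;

PROCEDURE WakeWaiter;  {called from interrupt-time code when the work is done}
 VAR
  err: OSErr;
BEGIN
 err := SetThreadReadyGivenTaskRef(gWakeTaskRef, gWaiterID);
END;

PROCEDURE BlockUntilWoken;
 VAR
  err: OSErr;
BEGIN
 err := GetThreadCurrentTaskRef(gWakeTaskRef);
 err := GetCurrentThread(gWaiterID);
 {start the asynchronous operation here, arranging for WakeWaiter to be called}
 err := SetThreadState(kCurrentThreadID, kStoppedThreadState, kNoThreadID);
 {execution resumes here after WakeWaiter marks the thread ready}
END;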


/* 33 */
Result codes:  noErr The specified thread is marked as ready.
 threadNotFoundErr There is no existing thread with the specified ThreadID
 & TaskRef
 threadProtocolErr Caller attempted to mark a thread ready that is not in
 the stopped state, or caller passed in an invalid TaskRef.

FUNCTION SetThreadScheduler (threadScheduler:
 ThreadSchedulerProcPtr):OSErr;

SetThreadScheduler installs a custom thread scheduler, replacing any current custom scheduler. A threadScheduler of nil specifies “none”.

Important: The application A5 global pointer is not guaranteed to be the value in the CPU’s A5 register when the Thread Manager calls back to your custom scheduler. Be sure to set up register A5 before accessing global data.
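
Here is a hedged sketch of a custom scheduler. It defers to the Thread Manager's suggestion unless a hypothetical high-priority cooperative thread (gUrgentThread) is ready, and it never picks a cooperative thread while a preemptive thread stands interrupted, per the rules in Part 2. SetCurrentA5/SetA5 recover access to the application globals, as the Important note above requires.

FUNCTION MyScheduler (schedulerInfo: SchedulerInfoRec): ThreadID;
 VAR
  oldA5, junk: LONGINT;
  state: ThreadState;
BEGIN
 oldA5 := SetCurrentA5;  {make the application's A5 world addressable}
 MyScheduler := schedulerInfo.SuggestedThreadID;  {default: let the Thread Manager decide}
 IF schedulerInfo.InterruptedCoopThreadID = kNoThreadID THEN
  IF GetThreadState(gUrgentThread, state) = noErr THEN
   IF state = kReadyThreadState THEN
    MyScheduler := gUrgentThread;
 junk := SetA5(oldA5);  {restore the caller's A5}
END;

{installation:}
 err := SetThreadScheduler(@MyScheduler);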


/* 34 */
Result codes:  noErr Specified scheduler was installed

FUNCTION SetThreadSwitcher (thread: ThreadID;
 threadSwitcher: ThreadSwitchProcPtr;
 switchProcParam: LONGINT; inOrOut: BOOLEAN):OSErr;

SetThreadSwitcher installs a custom thread context switching routine for the specified thread in addition to the standard processor context which is always saved. A threadSwitcher of nil specifies “none”. The inOrOut parameter indicates whether the routine is to be called when the thread is switched in (inOrOut is “true”), or when the thread is switched out (inOrOut is “false”). The switchProcParam specifies a parameter to be passed to the thread switcher.

Each thread is treated separately, so threads are free to mix and match custom switchers and parameters. For example, there could be one custom switching routine that is installed with a different parameter on each thread.

Important: The application A5 global pointer is not guaranteed to be the value in the CPU’s A5 register when the Thread Manager calls back to your custom switcher. Be sure to set up register A5 before accessing global data.
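
As an example of the per-thread parameter, the sketch below keeps a simple switch-in count for a thread. The statistics record, gWorkerStats, and gWorkerID are illustrative; because the record's address travels in switchProcParam, the callback never has to touch A5 globals.

TYPE
 ThreadStatsPtr = ^ThreadStats;
 ThreadStats = RECORD
  switchInCount: LONGINT;
 END;

PROCEDURE CountSwitchIn (threadBeingSwitched: ThreadID;
  switchProcParam: LONGINT);
BEGIN
 {switchProcParam carries the address of this thread's ThreadStats record}
 WITH ThreadStatsPtr(switchProcParam)^ DO
  switchInCount := switchInCount + 1;
END;

{installation, for a hypothetical thread gWorkerID:}
 err := SetThreadSwitcher(gWorkerID, @CountSwitchIn,
   LONGINT(@gWorkerStats), TRUE);  {TRUE = call when the thread is switched in}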


/* 35 */
Result codes:     noErr   Specified thread switcher was installed
 threadNotFoundErr There is no existing thread with the specified ThreadID

FUNCTION SetThreadTerminator (thread: ThreadID;
 threadTerminator: ThreadTerminationProcPtr;
 terminationProcParam: LONGINT):OSErr;

SetThreadTerminator installs a custom thread termination routine for the specified thread. The custom thread termination routine will be called at the time a thread is exited or is manually disposed of. The terminationProcParam specifies a parameter to be passed to the thread terminator.

Each thread is treated separately, so threads are free to mix and match custom terminators and parameters. For example, there could be one custom termination routine that is installed with a different parameter on each thread.
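
A minimal sketch of a termination routine is shown below; it simply raises a completion flag whose address was supplied in terminationProcParam when the terminator was installed. The flag, gWorkerDone, and gWorkerID are illustrative names.

TYPE
 BooleanPtr = ^BOOLEAN;

PROCEDURE NoteThreadDone (threadTerminated: ThreadID;
  terminationProcParam: LONGINT);
BEGIN
 {terminationProcParam carries the address of a completion flag}
 BooleanPtr(terminationProcParam)^ := TRUE;
END;

{installation:}
 err := SetThreadTerminator(gWorkerID, @NoteThreadDone, LONGINT(@gWorkerDone));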


/* 36 */
Result codes:     noErr   Specified thread terminator was installed
 threadNotFoundErr There is no existing thread with the specified ThreadID

Thread Debugging Support

The following routine is set aside for debuggers to install watchdog procedures to be called when the major state of a thread changes. These notification procedures are reserved for use by debuggers to help in the development of multithreaded applications.


/* 37 */
FUNCTION SetDebuggerNotificationProcs (
 notifyNewThread: DebuggerNewThreadProcPtr;
 notifyDisposeThread: DebuggerDisposeThreadProcPtr;            
 notifyThreadScheduler: DebuggerThreadSchedulerProcPtr 
 ):OSErr;

SetDebuggerNotificationProcs sets the per-application support for debugger notification of thread birth, death and scheduling. The debugger will be notified with the threadID of the newly created or disposed of thread. The debugger is also notified if the thread simply returns from its highest level of code and thus automatically disposes itself. The DebuggerThreadSchedulerProcPtr will be called after the custom scheduler and the Thread Manager's generic scheduler have decided on a thread to schedule. In this way, the debugger gets the last shot at a scheduling decision.

Important: All three debugger callbacks are installed when this call is made. It is not possible to set one or two of the callbacks at a time with this routine, and it is not possible to chain these routines. This restriction ensures that the last caller of this routine owns all three of the callbacks. Setting a procedure to NIL will effectively disable it from being called. Also note that the application A5 global pointer is not guaranteed to be the value in the CPU’s A5 register when the Thread Manager calls back to your debugger procedures.


/* 38 */
Result codes:  noErr Debugger procs have been installed

Routines That Move Or Purge Memory:

CreateThreadPool
NewThread
DisposeThread    - When ‘recycleThread’ is false

Routines You Can Call During Preemptive Thread Execution:

NewThread - When ‘kUsePremadeThread’ is used
DisposeThread    - When ‘recycleThread’ is true
GetCurrentThread
GetFreeThreadCount
GetDefaultThreadStackSize
ThreadCurrentStackSpace
GetThreadState
SetThreadState   - See note below
SetThreadStateEndCritical - See note below
ThreadBeginCritical
ThreadEndCritical
YieldToAnyThread
YieldToThread    - See note below
SetThreadScheduler
SetThreadSwitcher
SetDebuggerNotificationProcs
GetThreadCurrentTaskRef

Note: The SetThreadState, SetThreadStateEndCritical, and YieldToThread routines are usable during preemptive execution only when the suggestedThread parameter is either kNoThreadID or another preemptive thread. Explicitly requesting a cooperative thread to run from a preemptive thread is dangerous and should be avoided.


/* 39 */
Routines You Can Call At Interrupt Time:

GetThreadStateGivenTaskRef
SetThreadReadyGivenTaskRef

Toolbox & OS Routines You Can Call From a Cooperative Thread:

• All routines are available from a cooperative thread after MaxApplZone has been called.
• On a Mac Plus, only the main application thread may make Resource Manager calls (specifically UpdateResFile & CloseResFile).

Toolbox & OS Routines You Can Call From a Preemptive Thread:

Preemptive threads must follow the same rules as interrupt service routines as to which Toolbox and OS calls they may make.

Part 3: Gotchas & Bugs

Gotcha: The Memory Manager routine MaxApplZone must be called before any thread other than the main application thread allocates memory, or causes memory to be allocated. See Inside Macintosh: Memory for information on using memory and expanding the application heap.

Gotcha: Making certain calls to the Toolbox & OS during preemptive thread execution is a programming error; calls which may not be made at interrupt time may not be made by a preemptive thread. This includes calls to LoadSeg which get made on behalf of the application when accessing code segments which have not yet been loaded into memory. Applications must be sure that all code segments used by preemptive threads are preloaded and do not get unloaded. One method of ensuring certain traps do not get called at the wrong time is to define custom context switchers for preemptive threads. A custom context switcher-inner could be written to save and change the trap address of the trap in question (say LoadSeg) to a routine which drops into the debugger if that trap gets called while the thread is switched in. A custom context switcher-outer would then restore the original trap address for the rest of the application.

Gotcha: On a Mac Plus only, the main application thread should be the only thread which makes use of the Resource Manager. Specifically, calls to UpdateResFile or CloseResFile should only be made by the main application thread on a Mac Plus. All other Macintoshes support Resource Manager calls from any cooperative thread.

Gotcha: The application A5 global pointer is not guaranteed to be the value in the CPU’s A5 register when the Thread Manager calls back to your custom call back routines. Be sure to set up register A5 before accessing global data from a custom scheduler, custom switchers, termination procedures, and debugger call back routines.

 
