
Multiprocessing Systems
Volume Number: 12
Issue Number: 3
Column Tag: Performance Frontiers

A Look at Macintosh Multiprocessing

Three ways to build a “simultaneous screamer”.

By Jim Gochee, Contributing Editor for Performance Processing

Note: Source code files accompanying article are located on MacTech CD-ROM or source code disks.

Information for this article was contributed by: Bruce Lawton, Emerson Kennedy; Dr. Karsten Jeppesen, YARC Systems; and Chris Cooksey, DayStar Digital.

Introduction

Applications that need more performance than a single-processor computer can deliver often turn to multiprocessing. Multiprocessing (MP) can take many forms, from multiple CPUs on a single motherboard, to plug-in accelerator cards, to a network of machines. This article gives an overview of the multiprocessing options available on the Macintosh today, which just became more interesting with the new Apple Multiprocessor API. With this API, Apple has standardized multiprocessing for the MacOS. However, as a developer looking for the ultimate in performance, you shouldn’t rule out the other multiprocessing options just yet. If you have never considered making your application multiprocessor-aware, I suggest taking a good look at Apple’s Multiprocessor API. It is easy to use, runs under System 7 today, and is sure to have a sizable installed base of hardware that supports it.

Overview

Multiprocessing occurs when more than one compute engine is involved in solving a task. These compute engines can be tightly coupled, as is the case with Symmetric Multiprocessing (SMP), closely coupled, with Asymmetric Multiprocessing (AMP), or loosely coupled, with Distributed Processing (DP). SMP systems have multiple processors on the same system bus. The processors in these systems are cache-coherent, which allows software running on any processor to share main memory and other system resources with minimal extra support. AMP systems are composed of multiple processors on a connected bus; however, the CPUs in this configuration take on a master/client arrangement. Also, each CPU doesn’t necessarily have access to the entire machine. A card plugged into an expansion slot would be a good example of an AMP system. DP environments are composed of isolated compute engines which exchange processing information over a local or wide area network.

Because of SMP’s flexibility and relatively low cost, this architecture has become the standard for mainstream multiprocessing. Multitasking operating systems can run processes on any CPU in an SMP system because each processor has the same view of the machine. Several flavors of UNIX, along with Windows NT, have supported SMP machines for a while, and with the introduction of the Apple MP API, SMP is also the official Macintosh multiprocessing standard. The Apple Multiprocessor API allows you to create MP tasks which are queued and run on any available processor. If there are more tasks than processors, or if there is just one processor, tasks are preemptively scheduled. The tasking model is a subset of the Copland tasking model, which ensures seamless future compatibility. Coding to the multiprocessor API signals the system that tasks should be run on multiple processors; however, it is likely that Copland will also support running non-MP-aware tasks on multiple processors.

One important consideration is that all of the multiprocessing solutions, as well as Copland multitasking, have severe limits on what a task in these environments can do. Preemptive tasks in any operating system can only access system routines which are designed for reentrancy. Under Copland, preemptive tasks will have access to I/O, memory management, and other kernel services. Therefore, MP tasks running under Copland will also have access to these services. However, under System 7, MP tasks cannot call any part of the MacOS. This may sound odd because there are parts of the MacOS under System 7 that are reentrant, i.e. anything that you can call from interrupt handlers. However, these calls contain 68k code, and reentrancy within 68k code isn’t guaranteed by Apple in the current or future implementations of the MacOS. So for now, MP tasks running under System 7 will be limited to scanning and processing shared memory.

Vendor Section

As a software developer looking for more performance, you need to understand what kinds of multiprocessing are available and which flavor is appropriate for your application. There are three major MP vendors for the Macintosh market: DayStar Digital, with their Apple-compliant SMP hardware; YARC Systems, with high-speed accelerator boards; and PowerTap, which allows networked distributed processing.

The DayStar/Apple combination is the newest, and in many ways the most compelling, because of its simplicity, versatility, and compatibility with Copland. DayStar did much of the design and implementation of the new API and library; however, Apple now claims ownership of the code and guarantees its support in future releases of the MacOS. The library gives you access to SMP-compliant systems under System 7 and Copland, while also allowing preemptive threads on uniprocessor System 7 machines - something that wasn’t available with the old cooperatively scheduled PowerPC threads package. However, the SMP architecture, with tightly coupled processors sharing the same system bus, will hinder applications that are bottlenecked on memory access.

YARC Systems has a solution for this: NuBus- and PCI-based accelerator cards with onboard PowerPC processors and fast local RAM. If your application is extremely CPU-intensive and you have access to a network of Macintoshes, you will also want to look at PowerTap, a software package from Emerson Kennedy that allows an application to tap into networked CPU resources. While YARC and PowerTap won’t accelerate applications written to the Apple MP API, both vendors plan to use the Apple MP API internally to take advantage of multiprocessing on the host machine.

The three main vendors of Macintosh MP products have each supplied a section describing their product in more detail. Each section contains an overview, a sample fractal algorithm coded to the vendor’s API, and a short note on the cost of the product.

DayStar Digital

Overview

DayStar’s new MP systems are standard Macintoshes, with one major exception: they contain more than one CPU. The Apple MP API, which was designed in conjunction with DayStar, defines a set of services that allows developers to create and communicate with multiple elements of execution called “tasks”. When tasks are run on a multiprocessor system they are scheduled and run simultaneously on all the available processors.

Task creation is accomplished by providing a pointer to a function already defined within existing application code. The most obvious advantage of this approach is that you can use existing tools and build processes to construct an MP-aware application. No special compilers or packaging of the task code are required. Tasks have complete access to all the memory in the system. If an application has retrieved and prepared data for processing it can simply tell the tasks where the data is. It is not necessary to move any data to specialized task-only memory, thus avoiding expensive transactions over system busses.
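
A minimal sketch of what a task entry point might look like, assuming the entry point takes a single void * parameter and returns an OSStatus, as in later versions of the MP header; the WorkBlock structure, the gWork variable, and the doubling loop are purely illustrative:

    /* Hypothetical work description handed to the task at creation time */
    typedef struct {
        float   *data;      /* shared memory prepared by the application */
        long     count;     /* number of elements this task should touch */
    } WorkBlock;

    static OSStatus MyTask( void *parameter )
    {
        WorkBlock   *work = (WorkBlock *) parameter;
        long        i;

        for ( i = 0; i < work->count; i++ )
            work->data[i] *= 2.0;   /* operate directly on shared memory */

        return noErr;
    }

    /* Created from existing application code, much as in the listing below:
       err = MPCreateTask( MyTask, &gWork, 8192,
                           NULL, NULL, NULL, 0, &taskID );                */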

According to the Apple MP API specification, the processors in an MP system must be cache-coherent. This means that the developer need not be concerned with the possibility that data stored in the cache of one processor has not yet been written to main memory. If any other processor accesses that memory, the MP hardware automatically ensures that the most recently cached value is retrieved, rather than a stale value from main memory. The MP API’s assumption of cache coherency makes programming significantly easier; programming non-cache-coherent systems is far more error-prone and is not for the faint of heart.

Tasks run preemptively on all systems, including those with a single processor. If an application is willing to require the presence of PowerPC hardware and the shared library that provides the MP API services, the creation of MP-aware applications can be greatly simplified. The application simply creates tasks and distributes the work accordingly. The tasks created could do all the work while the application checks for user events and controls the flow of data. The MP API is Apple system software. It will be carried forward into Copland and is in fact a subset of the Copland tasking model.

Even though tasks and applications share the same memory, it is very important that they communicate, at least initially, via one of the three communication primitives provided: message queues, semaphores, and critical regions. Communicating via these primitives ensures that all previous memory accesses made by the sender are completed before the receiver starts using those locations - that is, it ensures that shared resources are accessed atomically. The communication primitives also provide a way for a task to yield time if it has to wait for something that is not yet available.

Task Communication

There are three main inter-task communication mechanisms. The first is the message queue. Message queues are first-in-first-out queues of 96-bit messages. Messages are useful for telling a task what work to do and where to look for information relevant to the request being made, such as a pointer into main memory. They are also useful for indicating that a given request has been processed and, if necessary, what the results are. Message queues incur more overhead than the other two communication primitives, so if you cannot avoid frequent synchronization, at least try to use a semaphore instead of a message queue.
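
The sample code later in this section uses fSendMessage() and fReceiveMessage() wrappers whose definitions are not shown. A minimal sketch of what such wrappers might look like, assuming the queue calls carry three 32-bit parameters per message and that MPWaitOnQueue() takes a blocking timeout, as in later versions of the MP header (the 1.x library may differ slightly):

    /* Hypothetical wrappers around the MP queue calls; here a message is a
       single 32-bit command word, and the remaining two words are unused. */
    static void fSendMessage( MPQueueID queue, long message )
    {
        /* MPNotifyQueue() posts one 96-bit (three 32-bit word) message */
        (void) MPNotifyQueue( queue, (void *) message, NULL, NULL );
    }

    static void fReceiveMessage( MPQueueID queue, long *message )
    {
        void    *p1, *p2, *p3;

        /* Block until a message arrives, then return its first word */
        (void) MPWaitOnQueue( queue, &p1, &p2, &p3, kDurationForever );
        *message = (long) p1;
    }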

Semaphores store a value between 0 and some arbitrary positive maximum. The value in a semaphore can be raised and lowered, but never below 0 and never above the semaphore’s maximum value. Semaphores are useful for keeping track of how many occurrences of a particular thing are available for use. Binary semaphores, which have a maximum value of 1, are especially efficient mechanisms for indicating to another task that something is ready. When a task or application has finished preparing data at some previously agreed-upon location, it raises the value of a binary semaphore on which the target task may be waiting. The target task lowers the value of the semaphore, performs any necessary processing, and raises the value of a different binary semaphore to indicate that it is done with the data. This technique can be used in place of the message-queue pairs described above when using the “Divide And Conquer” approach. MPCreateBinarySemaphore() is a macro that simplifies the creation of binary semaphores.
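
A minimal sketch of this handoff, assuming the semaphore calls match the MP header (MPSignalSemaphore() to raise, MPWaitOnSemaphore() to lower and block, with a timeout as in later versions); the helper names and the assumption that both semaphores start out lowered are hypothetical:

    /* Hypothetical binary-semaphore handoff.  gDataReady and gDataDone are
       assumed to have been created with MPCreateBinarySemaphore() and to
       start out lowered (value 0). */
    MPSemaphoreID   gDataReady, gDataDone;

    /* Application side: prepare the shared buffer, then signal the task */
    void ProduceData( void )
    {
        PrepareSharedBuffer();                              /* fill the agreed-upon memory   */
        MPSignalSemaphore( gDataReady );                    /* raise: data is ready          */
        MPWaitOnSemaphore( gDataDone, kDurationForever );   /* lower: wait for the task      */
    }

    /* Task side: wait for the data, process it, then signal that it is done */
    void ConsumeData( void )
    {
        MPWaitOnSemaphore( gDataReady, kDurationForever );  /* lower: wait for data          */
        ProcessSharedBuffer();
        MPSignalSemaphore( gDataDone );                     /* raise: finished with the data */
    }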

Critical regions are used to ensure that no more than one task (or the application) is executing a given “region” of code at any given time. For example, if part of a task’s job is to search a tree and modify it before proceeding with its primary work, then if multiple tasks were allowed to search and try to modify the tree at the same time, the tree would quickly become corrupted. An easy way to avoid the problem is to form a critical region around the tree searching and modification code. When a task tries to enter the critical region, it will be able to do so only if no other task is currently in it - thus preserving the integrity of the tree.
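
As a sketch of the tree example, assuming the critical-region calls match the MP header (with MPEnterCriticalRegion() taking a timeout, as in later versions); TreeNode, SearchTree() and ModifyNode() are hypothetical helpers:

    /* gTreeRegion is assumed to have been created once with
       MPCreateCriticalRegion( &gTreeRegion ). */
    MPCriticalRegionID  gTreeRegion;

    void UpdateTree( TreeNode *root, long key )
    {
        TreeNode    *node;

        /* Only one task at a time may execute between enter and exit */
        MPEnterCriticalRegion( gTreeRegion, kDurationForever );

        node = SearchTree( root, key );     /* search the shared tree              */
        if ( node != NULL )
            ModifyNode( node );             /* modify it while we hold the region  */

        MPExitCriticalRegion( gTreeRegion );
    }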

Cost

The cost of the DayStar Genesis system, which comes with four 604 processors, a minimum of 16MB of RAM, and a 1GB hard disk, will range from $10,000 to $15,000.

Sample Code

The sample code uses two queues as the communication mechanism between tasks. Each task has a receive queue for messages from the application, and the application has a global queue for messages from the tasks. While work is being done by the tasks, the front end can either block on its queue, or poll the queue and call WaitNextEvent(). When a task finishes a segment of the fractal image, it sends the results back to the front end and blocks on its queue for another segment to process.

    err = noErr;
    if( !MPLibraryIsLoaded() )      /* Check that the MP library is present */
        err = 1;

    /* Check that the library is compatible with our header */
    if( (err == noErr) && !MPLibraryIsCompatible() )
        err = 1;

    if( err == noErr )
        numProcessors = MPProcessors();
    else
        numProcessors = 1;          /* Only use the host processor */

    /* Allocate memory for each processor (each task) */
    gTaskData = (TaskData *) NewPtrClear(
                    numProcessors * sizeof(TaskData) );
    assert(gTaskData != NULL);      /* Handle the error better than this */

    /* Allocate a queue for the main application to wait on */
    err = MPCreateQueue( &gMainAppQueue );
    assert(err == noErr);           /* Handle the error better than this */

    /* Allocate a send queue and a task for each processor */
    err = noErr;
    for( i = 0; i < numProcessors && err == noErr; i++ ) {
        err = MPCreateQueue( &gTaskData[i].appToTask );
        assert(err == noErr);       /* Handle the error better than this */
        gTaskData[i].taskToAppQueue = gMainAppQueue;

        /* Create a task from the function fTask() */
        err = MPCreateTask( fTask, &gTaskData[i],
                2048, NULL, NULL, NULL, 0, &gTaskData[i].taskID );
        assert(err == noErr);

        fSendMessage( gTaskData[i].appToTask, kTMCreate );

        /* We get an immediate reply to our kTMCreate message */
        fReceiveMessage( gMainAppQueue, &message );
    }

    /* The main application loop now posts action commands to each task  */
    /* queue, then blocks on its receive queue (gMainAppQueue) until a   */
    /* task has finished a segment of the image.  When all segments are  */
    /* rendered, a terminate message is sent and each task quits         */

    /* This is the task code that runs on each processor */
    /* The variable “p” was passed in at creation time to the task */
    finished = false;
    while( !finished ) {
        fReceiveMessage( p->appToTask, &message );
        switch( message ) {
            case kTMCreate:
                break;
            case kTMRun:
                main( &p->zc, &p->zd, &p->step, &p->escape,
                        p->width, p->results );
                break;
            case kTMQuit:
                finished = true;
                break;
        }
        fSendMessage( gMainAppQueue, kTMReady );
    }

    return( noErr );

YARC Systems

Overview

The YARC environment uses both hardware and software to achieve multiprocessing. YARC offers plug-in accelerator cards for PCI and NuBus systems which contain one or two 80MHz 601 processors and onboard RAM that also runs at 80MHz. In concept, the cards may be compared to a number of independent, tightly coupled, networked machines where the network is the peripheral device bus. In the PCI implementation of the boards, this type of networked connection becomes even more powerful because of the high bandwidth of PCI.

Because the boards have their own processors with fast local memory, the multiprocessing provided by the YARC environment is under full application control, without the operating system scheduling and running tasks. This offers developers a “real time” acceleration engine where CPU cycles can be closely accounted for and controlled by an application’s code. If the full bandwidth of the processors is not being used, YARC also provides a thread manager capable of running multiple threads (or tasks) on any remote processor. This multiprocessing is cooperatively (or voluntarily) scheduled, just as with the PowerPC Thread Manager on the Macintosh. The YARC multiprocessing environment therefore offers fast, guaranteed access to remote CPU horsepower, with the ability to fine-tune processor load by adding scheduled multiprocessing on any of the attached board processors.

Because the YARC system isn’t tightly coupled to the MacOS, creating “tasks” for scheduled execution involves a special development environment. This package costs $495 and is built around the GNU C compiler. YARC is working on a PEF loader which would eliminate the need for a custom development setup.

Cost

Boards start at $2,995 with one 80MHz 601 CPU and 8MB of RAM. The most powerful board is currently the two-processor HYDRA board, with 128MB of RAM. This board tops out at $13,000.

Sample Code

    #define MAXBOARDS 16
    static Board *board[MAXBOARDS];
    ...

    y_configure();                  /* Initialize the environment and boards */
    if ((vfd = vio_open("AppToLoad.ppc", VO_RDONLY)) < 0)
        vioerror("AppToLoad.ppc");

    err = noErr;
    numBoards = 0;
    while (numBoards < MAXBOARDS
            && (board[numBoards] = y_open(0, 0)) != NULL) {
        if ((err = yk_loadkernel(board[numBoards])) != noErr) {
            yerror(board[numBoards],
                "Unable to load YARC PPC kernel");
            break;
        }
        if (yk_loadxcoff(board[numBoards], vfd, &info) < 0) {
            yerror(board[numBoards],
                "Unable to load PPC code to board");
            break;
        }
        numBoards++;
    }

    vio_close(vfd);

    for (k = 0; k < numBoards; k++) {
        err = yk_setargs(board[k], &info, NULL, NULL);
        err = yio_init(board[k], 0, 1, 2);      /* Init stdio */
        err = ykiret(board[k]);                 /* Start task code */
    }

PowerTap

Overview

PowerTap is a software library that runs on all Macintosh models. It can assign work to all processors on all Macintoshes connected by a network. PowerTap simplifies multiprocessing by performing all of the scheduling, task management and error recovery, interfacing to the host software as a simple black box where tasks are submitted and results are retrieved.

Candidate applications are those that are computationally intense and can be divided into independent pieces. PowerTap is intended for jobs that take more than a couple of seconds, although shorter jobs are practical when using attached processors. The assumption is that any job that computes for a minute or an hour must be looping in some way. Typically, it is working on each pixel/band/timeslice/piece in a similar manner. So the developer takes the contents of such an existing loop and moves that code into a DoTask() function, rather than restructuring the entire application.

To use PowerTap, a developer divides a job into multiple, independent pieces referred to as “tasks”. [PowerTap tasks are different from Apple’s notion of an MP task; a PowerTap task refers to data, such as one tile or band of an image.] No task may depend on the results of other tasks in the same job. A host-supplied function called DoTask() is needed that can perform any of the tasks, given two host-defined blocks of data. One block is the task-specific data, and the other is common to all or most of the tasks in the job. Separating the two enables PowerTap to minimize network traffic.

To get a job done, the host software creates the separate tasks and submits them to the PowerTap library using SubmitTask(). Subsequent calls to PTIdle() cause the work to be performed on other CPUs and/or by the local DoTask(). Task results are retrieved by calls to GetNextResult() or GetTaskResult(). Completed results and task data are available throughout the duration of the job, so there is no need to maintain queues or to handle the myriad potential errors yourself.

The basic sequence is:

InitPowerTap()

OpenJob()

SubmitTask() [once for each task]

PTIdle() and

GetNextResult() or GetTaskResult() until all results are done

CloseCurrJob()

ClosePowerTap()

The PowerTap library and DoTask() are linked into the host software. This means the host programmer does not have to code the algorithm two different ways, depending on Gestalt results - the job will be performed, regardless of the platform or environment.

Remote taps are complete, faceless, background-only (FBA) applications built from a Tap Module (provided), plus the host’s DoTask(), plus a customization resource. Users of remote machines being tapped can control their Tap with a local control panel (provided). This provides on/off control as well as an adjustment for how much or little CPU time will be given to the Tap.

Each tap has a customization resource which identifies the tap and provides settings for buffer sizes, CPU sharing and other things. There are several optional calls available for obtaining stats for the job and for individual task performance, limiting the number of participating remote Macs, and other features.

Cost

The end user incurs no additional costs. PowerTap works with all Macintosh models. There can even be a relative cost savings if the end user sets up a small number of very powerful machines and uses PowerTap to let many people tap into the power of those “power servers”.

The developer must license one copy of PowerTap. This entitles them to unlimited distribution as part of their product with no royalties or periodic renewal fees. The present price range is $1,200 to $2,700, depending on the number of remote taps that can be used.

Sample Code

The sample fractal code is below. The DoTask() routine is not shown; however, it would consist of a routine that takes a pointer to the job data and the task data. The PowerTap libraries would be responsible for sending the task data and job data across the network to and from each tap.

    #define kNumTasks 20
    ...

    err = InitPowerTap( kOnlyGuest + kUseGenesisAPI );

    // Allocate the initial request param block that gets sent to each task
    jobLen = sizeof( JobBlock );
    theJobData = (JobBlock**) NewHandleClear( jobLen );
    (**theJobData).zc     = -0.75;
    (**theJobData).zd     = 0;
    (**theJobData).step   = 0.0001;
    (**theJobData).escape = 50.0;
    (**theJobData).width  = 1500;

    // choose a job number that will be unique
    theJobNum = TickCount();

    err = OpenJob( theJobNum, (Handle) theJobData, jobLen );

    taskLen = sizeof( TaskBlock );

    // submit all of the tasks. they queue ~ LIFO.
    // the number of tasks is hard-coded as kNumTasks for the sample.
    for ( i = kNumTasks - 1; i >= 0L; i-- ) {
        taskData = (TaskBlock**) NewHandle( taskLen );
        if ( taskData != NULL )
        {
            (**taskData).startLine = i * 1500 / kNumTasks;
            (**taskData).endLine   = (i+1) * 1500 / kNumTasks - 1;

            err = SubmitTask( i, (Handle) taskData, taskLen, NULL );
        }
    }

    // act on the task results as they come in
    nDone = 0L;
    while ( nDone < kNumTasks )
    {
        // get all of the results that are ready now.
        while ( GetNextResult(
                    &taskNo, (Handle*) &result, &rLen, macName ) )
        {
            DrawResult(
                taskNo, (ResultBlock**) result, macName );
            nDone++;
        }

        // call PTIdle to give PowerTap some time to juggle the tasks.
        if ( PTIdle( 2L ) != noErr )
            break;

        WaitNextEvent( everyEvent, &theEvt, 2L, NULL );
    }

    // we are done now.
    CloseCurrJob();
    ClosePowerTap();
    DisposeHandle( (Handle) theJobData );
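
The DoTask() routine itself is not part of the listing. Below is a minimal sketch of what such a routine might look like for this fractal job. The JobBlock and TaskBlock layouts are assumptions reconstructed from the fields used above; the ResultBlock layout, the idea of returning a newly allocated handle, the use of the escape field as an iteration limit, and the assumption that the image is width lines square are all hypothetical, and the actual PowerTap DoTask() prototype may differ.

    /* Assumed layouts for the handles used in the listing above; the real
       ones are defined by the host application. */
    typedef struct {
        double  zc, zd;     /* center of the region to render        */
        double  step;       /* coordinate step per pixel             */
        double  escape;     /* iteration limit (assumption)          */
        long    width;      /* pixels per line                       */
    } JobBlock;

    typedef struct {
        long    startLine;  /* first line of this task's band        */
        long    endLine;    /* last line of this task's band         */
    } TaskBlock;

    /* Hypothetical result: one escape count per pixel for the band. */
    typedef struct {
        long    startLine;
        long    numLines;
        char    counts[1];  /* actually numLines * width bytes       */
    } ResultBlock;

    /* A sketch of a host-supplied DoTask(): a standard Mandelbrot
       escape-time loop over the task's band of lines. */
    static Handle DoTask( JobBlock **job, TaskBlock **task )
    {
        long    width  = (**job).width;
        long    lines  = (**task).endLine - (**task).startLine + 1;
        Handle  result = NewHandleClear( sizeof(ResultBlock) + lines * width );
        ResultBlock **rb = (ResultBlock **) result;
        long    line, pixel, count;
        double  cr, ci, zr, zi, t;

        if ( result == NULL )
            return NULL;

        (**rb).startLine = (**task).startLine;
        (**rb).numLines  = lines;

        for ( line = 0; line < lines; line++ ) {
            ci = (**job).zd
                + ((**task).startLine + line - width / 2) * (**job).step;
            for ( pixel = 0; pixel < width; pixel++ ) {
                cr = (**job).zc + (pixel - width / 2) * (**job).step;
                zr = zi = 0.0;
                for ( count = 0; count < (long) (**job).escape; count++ ) {
                    t  = zr * zr - zi * zi + cr;    /* z = z*z + c */
                    zi = 2.0 * zr * zi + ci;
                    zr = t;
                    if ( zr * zr + zi * zi > 4.0 )  /* escaped */
                        break;
                }
                (**rb).counts[line * width + pixel] = (char) count;
            }
        }
        return result;
    }

Because the same DoTask() is linked into the host and into each remote tap, the band would be computed the same way regardless of where a task actually runs.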

The Pros and Cons

The Apple MP API and the SMP architecture required to support it are really going to bring multiprocessing to the masses. The SMP architecture will be even more compelling under Copland and Gershwin because those operating systems should allow much broader utilization of extra processors by any system task. On the downside, the inherent architecture of shared memory has performance implications for applications that are bottlenecked on the system bus.

PowerTap is also an interesting product, differentiating itself with its network capabilities. While this solution will appeal to a much smaller audience, because it requires a network of underutilized machines, the potential performance gains are enormous. However, the programming model is limited with respect to inter-task communication, and sending inter-task data over a network can be expensive. Also, the network “taps” can come and go, which makes using the system for real-time problem solving impossible.

YARC offers a good product as well, and the company has had years of experience with accelerator boards for the Macintosh. Their product really shines for applications that are bottlenecked on memory access, or applications that want to completely control slave CPUs for real-time applications. However, the YARC boards are limited in that they cannot stay cache-coherent with the main system CPU, which means that YARC boards currently have no way of seamlessly integrating with Copland or the Apple MP API. YARC has specialized in high-end custom applications in the past, and in my opinion, they will continue to stay in this market in the future.

Conclusions

If you think your software might take advantage of multiprocessing, then I seriously suggest you look at the offerings described in this article. For most developers, especially mainstream developers, I think the choice is pretty clear. The Apple MP API combined with hardware from DayStar offers a viable solution today under System 7, and a clear support path with the Copland OS and beyond. YARC and PowerTap offer excellent products with superior performance in many situations; however, they are more appropriate for specialized solutions, and I don’t think they will break into the mainstream. From the customer’s point of view, an investment in an Apple MP-compatible machine is a clear investment in the future. The future of MP for Macintosh clones also lies in the SMP architecture: the CHRP hardware standard, which Copland will surely support, also defines an SMP architecture for multiprocessor machines.

Multiprocessing is about to enter the Macintosh mainstream and the price/performance implications are exciting. For Macintosh MP to really take off, though, there will have to be a resolution of the current chicken-and-egg problem. For a while, few customers will have MP-capable machines, and developers will be reluctant to spend time converting their applications without a clear market. For the customer, it will be a question of spending extra for a multiprocessor box when there aren’t that many applications that take advantage of the extra horsepower. However, this problem is already being solved by Adobe. They have a plug-in module for Photoshop that takes advantage of Apple MP systems, and their customer base is very likely to spend the money to upgrade. Maybe this is just the spark needed to get the ball rolling and make Macintosh MP a viable solution.

DayStar Digital http://www.daystar.com/expand.html

YARC http://www.yarc.com

Emerson Kennedy mailto:powertap@aol.com

 
